Builder

class Builder

Properties

inputAction

The action to take when harmful content is detected in the input. Supported values include BLOCK and NONE.

inputEnabled

Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

inputModalities

The input modalities selected for the guardrail content filter.

inputStrength

The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application decreases.

outputAction

The action to take when harmful content is detected in the output. Supported values include BLOCK and NONE.

outputEnabled

Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation. The evaluation doesn't appear in the response.

outputModalities

The output modalities selected for the guardrail content filter.

outputStrength

The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application decreases.

type

The harmful category that the content filter is applied to.
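
Taken together, these properties define one content filter within a guardrail's content policy. The following is a minimal sketch of building such a filter with the DSL-style builder, assuming the Bedrock model types from the AWS SDK for Kotlin (GuardrailContentFilterConfig, GuardrailContentFilterType, GuardrailContentFilterAction, GuardrailFilterStrength, GuardrailModality); verify the exact names and values against the SDK version in use.

import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterAction
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterConfig
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterType
import aws.sdk.kotlin.services.bedrock.model.GuardrailFilterStrength
import aws.sdk.kotlin.services.bedrock.model.GuardrailModality

// Sketch: configure a filter for the HATE category. The companion
// object's invoke operator runs this lambda against a Builder.
val hateFilter = GuardrailContentFilterConfig {
    // The harmful category this filter applies to.
    type = GuardrailContentFilterType.Hate

    // Block detected content in both prompts and model responses.
    inputAction = GuardrailContentFilterAction.Block
    outputAction = GuardrailContentFilterAction.Block

    // Filter prompts more aggressively than model responses.
    inputStrength = GuardrailFilterStrength.High
    outputStrength = GuardrailFilterStrength.Medium

    // Evaluate text and images on input; text only on output.
    inputModalities = listOf(GuardrailModality.Text, GuardrailModality.Image)
    outputModalities = listOf(GuardrailModality.Text)

    // Keep evaluation enabled on both sides; disabling a side skips
    // that evaluation (and its charge) and omits it from the response.
    inputEnabled = true
    outputEnabled = true
}

A filter built this way would typically be supplied through the content policy configuration of a create- or update-guardrail request.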