Builder
Properties
inputAction: The action to take when harmful content is detected in the input. Supported values include BLOCK (block the content and replace it with blocked messaging) and NONE (take no action, but return detection information in the trace response).
inputEnabled: Indicates whether guardrail evaluation is enabled on the input. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
inputModalities: The input modalities selected for the guardrail content filter.
inputStrength: The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application decreases.
outputAction: The action to take when harmful content is detected in the output. Supported values include BLOCK (block the content and replace it with blocked messaging) and NONE (take no action, but return detection information in the trace response).
outputEnabled: Indicates whether guardrail evaluation is enabled on the output. When disabled, you aren't charged for the evaluation, and the evaluation doesn't appear in the response.
outputModalities: The output modalities selected for the guardrail content filter.
outputStrength: The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application decreases.
type: The harmful category that the content filter is applied to.
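As a minimal sketch of how these properties fit together, the following uses the AWS SDK for Kotlin builder DSL to configure a single content filter. The enum and property names (GuardrailContentFilterType, GuardrailFilterStrength, GuardrailModality) are assumed from the Bedrock API model rather than taken from this page; treat the snippet as illustrative, not definitive.

```kotlin
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterConfig
import aws.sdk.kotlin.services.bedrock.model.GuardrailContentFilterType
import aws.sdk.kotlin.services.bedrock.model.GuardrailFilterStrength
import aws.sdk.kotlin.services.bedrock.model.GuardrailModality

// Sketch: a content filter for the HATE category with a strong filter on
// prompts and a medium filter on model responses, evaluating text only.
// Enum member names are assumptions based on the Bedrock API model.
val hateFilter = GuardrailContentFilterConfig {
    type = GuardrailContentFilterType.Hate           // harmful category the filter applies to
    inputStrength = GuardrailFilterStrength.High     // filter strength for prompts
    outputStrength = GuardrailFilterStrength.Medium  // filter strength for model responses
    inputModalities = listOf(GuardrailModality.Text)
    outputModalities = listOf(GuardrailModality.Text)
}
```

A guardrail typically carries one such filter per harmful category, supplied through the guardrail's content policy configuration when the guardrail is created or updated.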