OrchestrationConfiguration

Settings for how the model processes the prompt prior to retrieval and generation.
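
A minimal construction sketch follows. It assumes the AWS SDK for Kotlin DSL builder pattern (the companion object's invoke accepting a Builder lambda) and the Bedrock Agent Runtime model package; verify names against your SDK version. The individual members are sketched under Properties below.

import aws.sdk.kotlin.services.bedrockagentruntime.model.OrchestrationConfiguration

// Set only the members you need; each one is described under Properties.
val config = OrchestrationConfiguration {
    // inferenceConfig = ...                  // inference settings for RetrieveAndGenerate
    // promptTemplate = ...                   // custom orchestration prompt
    // queryTransformationConfiguration = ... // e.g. query decomposition
}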

Types

class Builder
object Companion

Properties

additionalModelRequestFields

Additional model parameters and corresponding values not included in the textInferenceConfig structure for a knowledge base. This allows users to provide custom model parameters specific to the language model being used.
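
As a hedged illustration (the parameter name "top_k" is purely illustrative, and the map-of-Document shape is an assumption based on the service API), extra fields might be supplied like this, with Document coming from the smithy-kotlin runtime:

import aws.smithy.kotlin.runtime.content.Document

val config = OrchestrationConfiguration {
    // "top_k" is a hypothetical, model-specific parameter name.
    additionalModelRequestFields = mapOf("top_k" to Document.Number(250))
}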

inferenceConfig

Configuration settings for inference when using RetrieveAndGenerate to generate responses while using a knowledge base as a source.
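
A sketch of typical inference settings, assuming the InferenceConfig and TextInferenceConfig structures from the same model package (imports as in the sketch above):

val config = OrchestrationConfiguration {
    inferenceConfig = InferenceConfig {
        textInferenceConfig = TextInferenceConfig {
            temperature = 0.2f   // lower values make generation more deterministic
            topP = 0.9f
            maxTokens = 1024
        }
    }
}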

performanceConfig

The latency configuration for the model.
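
For example, assuming the PerformanceConfiguration structure and its latency member shared across the Bedrock APIs, optimized latency might be requested like this:

val config = OrchestrationConfiguration {
    performanceConfig = PerformanceConfiguration {
        latency = PerformanceConfigLatency.Optimized
    }
}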

promptTemplate

Contains the template for the prompt that's sent to the model. Orchestration prompts must include the $conversation_history$ and $output_format_instructions$ variables. For more information, see Use placeholder variables in the user guide.
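
A hedged sketch of a custom orchestration prompt, assuming the PromptTemplate structure with a textPromptTemplate member; the dollar signs are escaped so the placeholders reach the service verbatim:

val config = OrchestrationConfiguration {
    promptTemplate = PromptTemplate {
        textPromptTemplate =
            "Plan the retrieval queries for the conversation below.\n" +
            "\$conversation_history\$\n" +
            "\$output_format_instructions\$"
    }
}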

queryTransformationConfiguration

To split up the prompt and retrieve multiple sources, set the transformation type to QUERY_DECOMPOSITION.
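
For example, assuming the QueryTransformationConfiguration and QueryTransformationType names from the same model package:

val config = OrchestrationConfiguration {
    queryTransformationConfiguration = QueryTransformationConfiguration {
        // QUERY_DECOMPOSITION splits a compound question into sub-queries,
        // each of which is retrieved against the knowledge base separately.
        type = QueryTransformationType.QueryDecomposition
    }
}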

Functions

open operator override fun equals(other: Any?): Boolean
open override fun hashCode(): Int
open override fun toString(): String