OrchestrationConfiguration
Settings for how the model processes the prompt prior to retrieval and generation.
Properties
additionalModelRequestFields
Additional model parameters and corresponding values not included in the textInferenceConfig structure for a knowledge base. This allows users to provide custom model parameters specific to the language model being used.
inferenceConfig
Configuration settings for inference when using RetrieveAndGenerate to generate responses while using a knowledge base as a source.
performanceConfig
The latency configuration for the model.
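As a rough sketch, the three fields above might be expressed as follows in a request body. The parameter values, and the top_k entry, are illustrative assumptions; the keys accepted in additionalModelRequestFields depend on the foundation model being used.

```python
# Illustrative orchestration overrides; values are assumptions, not defaults.
orchestration_overrides = {
    "additionalModelRequestFields": {
        "top_k": 50,  # assumed model-specific parameter not covered by textInferenceConfig
    },
    "inferenceConfig": {
        "textInferenceConfig": {
            "maxTokens": 1024,
            "temperature": 0.0,
            "topP": 0.9,
            "stopSequences": [],
        }
    },
    "performanceConfig": {"latency": "standard"},  # or "optimized"
}
```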
promptTemplate
Contains the template for the prompt that's sent to the model. Orchestration prompts must include the $conversation_history$ and $output_format_instructions$ variables. For more information, see Use placeholder variables in the Amazon Bedrock User Guide.
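A minimal sketch of an orchestration prompt template that includes both required placeholder variables; the surrounding wording is hypothetical.

```python
# Hypothetical orchestration prompt; $conversation_history$ and
# $output_format_instructions$ are the required placeholder variables.
prompt_template = {
    "textPromptTemplate": (
        "Here is the conversation so far:\n"
        "$conversation_history$\n\n"
        "Rewrite the latest user request into search queries for the knowledge base.\n"
        "$output_format_instructions$"
    )
}
```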
queryTransformationConfiguration
To split up the prompt and retrieve multiple sources, set the transformation type to QUERY_DECOMPOSITION.
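A minimal end-to-end sketch, assuming the boto3 bedrock-agent-runtime client and placeholder knowledge base and model identifiers, showing where orchestrationConfiguration sits in a RetrieveAndGenerate request.

```python
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "Compare the 2023 and 2024 revenue figures."},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB_ID",    # placeholder knowledge base ID
            "modelArn": "MODEL_ARN",       # placeholder model ARN
            "orchestrationConfiguration": {
                "inferenceConfig": {
                    "textInferenceConfig": {"maxTokens": 1024, "temperature": 0.0}
                },
                "performanceConfig": {"latency": "standard"},
                "promptTemplate": {
                    "textPromptTemplate": (
                        "$conversation_history$\n"
                        "Rewrite the latest user request into search queries.\n"
                        "$output_format_instructions$"
                    )
                },
                # Split the prompt into subqueries so multiple sources are retrieved.
                "queryTransformationConfiguration": {"type": "QUERY_DECOMPOSITION"},
            },
        },
    },
)
print(response["output"]["text"])
```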