PromptModelInferenceConfiguration

Contains inference configurations for model invocation within a prompt. For more information, see Inference parameters.

Types

class Builder
object Companion

Properties

val maxTokens: Int?

The maximum number of tokens to return in the response.

val stopSequences: List&lt;String&gt;?

A list of strings that define sequences after which the model will stop generating.

val temperature: Float?

Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

val topP: Float?

The percentage of most-likely candidates that the model considers for the next token.
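To show how the `Builder` type and these properties fit together, here is a minimal, self-contained sketch of the builder/DSL convention this class follows. It is an illustrative re-implementation, not the SDK source: the real type lives in the AWS SDK for Kotlin, and the `invoke` factory shown here is an assumption modeled on the SDK's usual DSL-builder style.

```kotlin
// Illustrative sketch only; the real class ships with the AWS SDK for Kotlin.
class PromptModelInferenceConfiguration private constructor(builder: Builder) {
    val maxTokens: Int? = builder.maxTokens
    val stopSequences: List<String>? = builder.stopSequences
    val temperature: Float? = builder.temperature
    val topP: Float? = builder.topP

    class Builder {
        var maxTokens: Int? = null
        var stopSequences: List<String>? = null
        var temperature: Float? = null
        var topP: Float? = null
        fun build(): PromptModelInferenceConfiguration = PromptModelInferenceConfiguration(this)
    }

    companion object {
        // DSL-style factory mirroring the SDK's builder convention (assumed here).
        operator fun invoke(block: Builder.() -> Unit): PromptModelInferenceConfiguration =
            Builder().apply(block).build()
    }

    override fun toString(): String =
        "PromptModelInferenceConfiguration(maxTokens=$maxTokens, " +
        "stopSequences=$stopSequences, temperature=$temperature, topP=$topP)"
}

fun main() {
    val config = PromptModelInferenceConfiguration {
        maxTokens = 512            // cap the response length
        temperature = 0.2f         // lower value -> more predictable output
        topP = 0.9f                // sample from the top 90% probability mass
        stopSequences = listOf("END")
    }
    println(config)
}
```

Unset properties stay `null`, so the service can apply its own defaults for any parameter the caller omits.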

Functions

open operator override fun equals(other: Any?): Boolean
open override fun hashCode(): Int
open override fun toString(): String