SemanticChunkingConfiguration

Settings for semantic document chunking for a data source. Semantic chunking splits a document into smaller documents based on groups of similar content derived from the text with natural language processing.

With semantic chunking, each sentence is compared to the next to determine how similar they are. You specify a threshold in the form of a percentile: adjacent sentence pairs whose similarity falls below that percentile are split into separate chunks. For example, if you set the threshold to 90, the 10 percent of sentence pairs that are least similar are split. So if you have 101 sentences, 100 sentence pairs are compared, and the 10 with the least similarity become chunk boundaries, creating 11 chunks. These chunks are split further if they exceed the maximum token size.
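To make that arithmetic concrete, here is a small illustrative sketch (not part of the SDK) that estimates the chunk count produced by a given percentile threshold, before any further splitting of chunks that exceed the maximum token size:

```kotlin
// Illustrative only: estimates how many chunks a percentile threshold produces,
// before any further splitting of chunks that exceed the max token size.
fun expectedChunkCount(sentenceCount: Int, percentileThreshold: Int): Int {
    val pairs = sentenceCount - 1                                // adjacent sentence pairs compared
    val breakpoints = pairs * (100 - percentileThreshold) / 100  // least-similar pairs become boundaries
    return breakpoints + 1                                       // each boundary adds one chunk
}

fun main() {
    // 101 sentences with a threshold of 90: 100 pairs, 10 breakpoints, 11 chunks.
    println(expectedChunkCount(101, 90))
}
```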

You must also specify a buffer size, which determines whether sentences are compared in isolation, or within a moving context window that includes the previous and following sentence. For example, if you set the buffer size to 1, the embedding for sentence 10 is derived from sentences 9, 10, and 11 combined.
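As a rough sketch of how these settings come together, the following assumes the Kotlin SDK's DSL builder and the Amazon Bedrock API field names breakpointPercentileThreshold, bufferSize, and maxTokens for the threshold, buffer size, and maximum token settings described above; adjust the import to match your service's package.

```kotlin
import aws.sdk.kotlin.services.bedrockagent.model.SemanticChunkingConfiguration

// Sketch: split at the 10 percent least-similar sentence pairs, embed each sentence
// together with one neighbor on each side, and cap each chunk at 300 tokens.
val semanticChunking = SemanticChunkingConfiguration {
    breakpointPercentileThreshold = 90
    bufferSize = 1
    maxTokens = 300
}
```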

Types

class Builder
object Companion

Properties


The dissimilarity threshold for splitting chunks.


The buffer size.


The maximum number of tokens that a chunk can contain.

Functions

open operator override fun equals(other: Any?): Boolean
open override fun hashCode(): Int
open override fun toString(): String