Class InfluxDBv3EnterpriseParameters

- All Implemented Interfaces:
  Serializable, SdkPojo, ToCopyableBuilder<InfluxDBv3EnterpriseParameters.Builder, InfluxDBv3EnterpriseParameters>

All the customer-modifiable InfluxDB v3 Enterprise parameters in Timestream for InfluxDB.
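Like other AWS SDK for Java v2 model classes, this class is immutable and is constructed through its builder. A minimal sketch, assuming the standard `timestreaminfluxdb` model package and builder setters named after the getters documented below (the field values are illustrative, not recommendations):

```java
import software.amazon.awssdk.services.timestreaminfluxdb.model.InfluxDBv3EnterpriseParameters;

public class BuildParametersExample {
    // Build an immutable parameters object, overriding only selected fields.
    // Unset fields are left to the service defaults.
    static InfluxDBv3EnterpriseParameters sampleParameters() {
        return InfluxDBv3EnterpriseParameters.builder()
                .queryFileLimit(432)   // maximum Parquet files a query may access
                .queryLogSize(1000)    // queries retained in the query log
                .build();
    }
}
```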
Nested Class Summary

- Nested Classes:
  static interface InfluxDBv3EnterpriseParameters.Builder

Method Summary
- static InfluxDBv3EnterpriseParameters.Builder builder()
- final Duration catalogSyncInterval() - Defines how often the catalog synchronizes across cluster nodes.
- final Duration compactionCheckInterval() - Specifies how often the compactor checks for new compaction work to perform.
- final Duration compactionCleanupWait() - Specifies the amount of time that the compactor waits after finishing a compaction run to delete files marked as needing deletion during that compaction run.
- final Duration compactionGen2Duration() - Specifies the duration of the first level of compaction (gen2).
- final Integer compactionMaxNumFilesPerPlan() - Sets the maximum number of files included in any compaction plan.
- final String compactionMultipliers() - Specifies a comma-separated list of multiples defining the duration of each level of compaction.
- final Integer compactionRowLimit() - Specifies the soft limit for the number of rows per file that the compactor writes.
- final String dataFusionConfig() - Provides custom configuration to DataFusion as a comma-separated list of key:value pairs.
- final Integer dataFusionMaxParquetFanout() - When multiple Parquet files are required in a sorted way (deduplication, for example), specifies the maximum fanout.
- final Integer dataFusionNumThreads() - Sets the maximum number of DataFusion runtime threads to use.
- final Boolean dataFusionRuntimeDisableLifoSlot() - Disables the LIFO slot of the DataFusion runtime.
- final Integer dataFusionRuntimeEventInterval() - Sets the number of scheduler ticks after which the scheduler of the DataFusion tokio runtime polls for external events, for example timers and I/O.
- final Integer dataFusionRuntimeGlobalQueueInterval() - Sets the number of scheduler ticks after which the scheduler of the DataFusion runtime polls the global task queue.
- final Integer dataFusionRuntimeMaxBlockingThreads() - Specifies the limit for additional threads spawned by the DataFusion runtime.
- final Integer dataFusionRuntimeMaxIoEventsPerTick() - Configures the maximum number of events processed per tick by the tokio DataFusion runtime.
- final Duration dataFusionRuntimeThreadKeepAlive() - Sets a custom timeout for a thread in the blocking pool of the tokio DataFusion runtime.
- final Integer dataFusionRuntimeThreadPriority() - Sets the thread priority for tokio DataFusion runtime workers.
- final DataFusionRuntimeType dataFusionRuntimeType() - Specifies the DataFusion tokio runtime type.
- final String dataFusionRuntimeTypeAsString() - Specifies the DataFusion tokio runtime type.
- final Boolean dataFusionUseCachedParquetLoader() - Uses a cached Parquet loader when reading Parquet files from the object store.
- final Boolean dedicatedCompactor() - Specifies whether the compactor should run as a standalone instance.
- final Duration deleteGracePeriod() - Specifies the grace period before permanently deleting data.
- final Boolean disableParquetMemCache() - Disables the in-memory Parquet cache.
- final Duration distinctCacheEvictionInterval() - Specifies the interval to evict expired entries from the distinct value cache, expressed as a human-readable duration, for example: 20s, 1m, 1h.
- final Boolean distinctValueCacheDisableFromHistory() - Disables populating the distinct value cache from historical data.
- final boolean equals(Object obj)
- final boolean equalsBySdkFields(Object obj) - Indicates whether some other object is "equal to" this one by SDK fields.
- final PercentOrAbsoluteLong execMemPoolBytes() - Specifies the size of the memory pool used during query execution.
- final PercentOrAbsoluteLong forceSnapshotMemThreshold() - Specifies the threshold for the internal memory buffer.
- final Duration gen1Duration() - Specifies the duration that Parquet files are arranged into.
- final Duration gen1LookbackDuration() - Specifies how far back to look when creating generation 1 Parquet files.
- final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
- final Duration hardDeleteDefaultDuration() - Sets the default duration for hard deletion of data.
- final int hashCode()
- final Integer ingestQueryInstances() - Specifies the number of instances in the DbCluster that can both ingest and query.
- final Duration lastCacheEvictionInterval() - Specifies the interval to evict expired entries from the Last-N-Value cache, expressed as a human-readable duration, for example: 20s, 1m, 1h.
- final Boolean lastValueCacheDisableFromHistory() - Disables populating the last-N-value cache from historical data.
- final String logFilter() - Sets the filter directive for logs.
- final LogFormats logFormat() - Defines the message format for logs.
- final String logFormatAsString() - Defines the message format for logs.
- final Long maxHttpRequestSize() - Specifies the maximum size of HTTP requests.
- final Duration parquetMemCachePruneInterval() - Sets the interval to check if the in-memory Parquet cache needs to be pruned.
- final Float parquetMemCachePrunePercentage() - Specifies the percentage of entries to prune during a prune operation on the in-memory Parquet cache.
- final Duration parquetMemCacheQueryPathDuration() - Specifies the time window for caching recent Parquet files in memory.
- final PercentOrAbsoluteLong parquetMemCacheSize() - Specifies the size of the in-memory Parquet cache in megabytes or percentage of total available memory.
- final Duration preemptiveCacheAge() - Specifies the interval to prefetch into the Parquet cache during compaction.
- final Integer queryFileLimit() - Limits the number of Parquet files a query can access.
- final Integer queryLogSize() - Defines the size of the query log.
- final Integer queryOnlyInstances() - Specifies the number of instances in the DbCluster that can only query.
- final Duration replicationInterval() - Specifies the interval at which data replication occurs between cluster nodes.
- final Duration retentionCheckInterval() - The interval at which retention policies are checked and enforced.
- final Map<String, SdkField<?>> sdkFieldNameToField()
- final List<SdkField<?>> sdkFields()
- static Class<? extends InfluxDBv3EnterpriseParameters.Builder> serializableBuilderClass()
- final Integer snapshottedWalFilesToKeep() - Specifies the number of snapshotted WAL files to retain in the object store.
- final Integer tableIndexCacheConcurrencyLimit() - Limits the concurrency level for table index cache operations.
- final Integer tableIndexCacheMaxEntries() - Specifies the maximum number of entries in the table index cache.
- InfluxDBv3EnterpriseParameters.Builder toBuilder() - Take this object and create a builder that contains all of the current property values of this object.
- final String toString() - Returns a string representation of this object.
- final Integer walMaxWriteBufferSize() - Specifies the maximum number of write requests that can be buffered before a flush must be executed and succeed.
- final Integer walReplayConcurrencyLimit() - Concurrency limit during WAL replay.
- final Boolean walReplayFailOnError() - Determines whether WAL replay should fail when encountering errors.
- final Integer walSnapshotSize() - Defines the number of WAL files to attempt to remove in a snapshot.

Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder:
copy
-
Method Details
-
queryFileLimit
Limits the number of Parquet files a query can access. If a query attempts to read more than this limit, InfluxDB 3 returns an error.
Default: 432
- Returns:
- Limits the number of Parquet files a query can access. If a query attempts to read more than this limit,
InfluxDB 3 returns an error.
Default: 432
-
queryLogSize
Defines the size of the query log. Up to this many queries remain in the log before older queries are evicted to make room for new ones.
Default: 1000
- Returns:
- Defines the size of the query log. Up to this many queries remain in the log before older queries are
evicted to make room for new ones.
Default: 1000
-
logFilter
Sets the filter directive for logs.
- Returns:
- Sets the filter directive for logs.
-
logFormat
Defines the message format for logs.
Default: full
If the service returns an enum value that is not available in the current SDK version, logFormat will return LogFormats.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from logFormatAsString().
- Returns:
- Defines the message format for logs.
Default: full
- See Also:
-
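A common pattern for handling enum values newer than the installed SDK is to fall back to the raw service string. A sketch, assuming the standard `timestreaminfluxdb` model package:

```java
import software.amazon.awssdk.services.timestreaminfluxdb.model.InfluxDBv3EnterpriseParameters;
import software.amazon.awssdk.services.timestreaminfluxdb.model.LogFormats;

public class LogFormatCheck {
    // Prefer the typed enum, but fall back to the raw service string when the
    // value is unknown to this SDK version. Also handles the unset (null) case.
    static String describeLogFormat(InfluxDBv3EnterpriseParameters params) {
        LogFormats format = params.logFormat();
        if (format == null) {
            return "log format not set";
        }
        if (format == LogFormats.UNKNOWN_TO_SDK_VERSION) {
            return "unrecognized log format: " + params.logFormatAsString();
        }
        return format.toString();
    }
}
```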
logFormatAsString
Defines the message format for logs.
Default: full
If the service returns an enum value that is not available in the current SDK version, logFormat will return LogFormats.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from logFormatAsString().
- Returns:
- Defines the message format for logs.
Default: full
- See Also:
-
dataFusionNumThreads
Sets the maximum number of DataFusion runtime threads to use.
- Returns:
- Sets the maximum number of DataFusion runtime threads to use.
-
dataFusionRuntimeType
Specifies the DataFusion tokio runtime type.
Default: multi-thread
If the service returns an enum value that is not available in the current SDK version, dataFusionRuntimeType will return DataFusionRuntimeType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from dataFusionRuntimeTypeAsString().
- Returns:
- Specifies the DataFusion tokio runtime type.
Default: multi-thread
- See Also:
-
dataFusionRuntimeTypeAsString
Specifies the DataFusion tokio runtime type.
Default: multi-thread
If the service returns an enum value that is not available in the current SDK version, dataFusionRuntimeType will return DataFusionRuntimeType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from dataFusionRuntimeTypeAsString().
- Returns:
- Specifies the DataFusion tokio runtime type.
Default: multi-thread
- See Also:
-
dataFusionRuntimeDisableLifoSlot
Disables the LIFO slot of the DataFusion runtime.
- Returns:
- Disables the LIFO slot of the DataFusion runtime.
-
dataFusionRuntimeEventInterval
Sets the number of scheduler ticks after which the scheduler of the DataFusion tokio runtime polls for external events, for example timers and I/O.
- Returns:
- Sets the number of scheduler ticks after which the scheduler of the DataFusion tokio runtime polls for external events, for example timers and I/O.
-
dataFusionRuntimeGlobalQueueInterval
Sets the number of scheduler ticks after which the scheduler of the DataFusion runtime polls the global task queue.
- Returns:
- Sets the number of scheduler ticks after which the scheduler of the DataFusion runtime polls the global task queue.
-
dataFusionRuntimeMaxBlockingThreads
Specifies the limit for additional threads spawned by the DataFusion runtime.
- Returns:
- Specifies the limit for additional threads spawned by the DataFusion runtime.
-
dataFusionRuntimeMaxIoEventsPerTick
Configures the maximum number of events processed per tick by the tokio DataFusion runtime.
- Returns:
- Configures the maximum number of events processed per tick by the tokio DataFusion runtime.
-
dataFusionRuntimeThreadKeepAlive
Sets a custom timeout for a thread in the blocking pool of the tokio DataFusion runtime.
- Returns:
- Sets a custom timeout for a thread in the blocking pool of the tokio DataFusion runtime.
-
dataFusionRuntimeThreadPriority
Sets the thread priority for tokio DataFusion runtime workers.
Default: 10
- Returns:
- Sets the thread priority for tokio DataFusion runtime workers.
Default: 10
-
dataFusionMaxParquetFanout
When multiple parquet files are required in a sorted way (deduplication for example), specifies the maximum fanout.
Default: 1000
- Returns:
- When multiple parquet files are required in a sorted way (deduplication for example), specifies the
maximum fanout.
Default: 1000
-
dataFusionUseCachedParquetLoader
Uses a cached parquet loader when reading parquet files from the object store.
- Returns:
- Uses a cached parquet loader when reading parquet files from the object store.
-
dataFusionConfig
Provides custom configuration to DataFusion as a comma-separated list of key:value pairs.
- Returns:
- Provides custom configuration to DataFusion as a comma-separated list of key:value pairs.
-
maxHttpRequestSize
Specifies the maximum size of HTTP requests.
Default: 10485760
- Returns:
- Specifies the maximum size of HTTP requests.
Default: 10485760
-
forceSnapshotMemThreshold
Specifies the threshold for the internal memory buffer. Supports either a percentage (portion of available memory) or an absolute value in MB, for example: 70% or 100.
Default: 70%
- Returns:
- Specifies the threshold for the internal memory buffer. Supports either a percentage (portion of available memory) or an absolute value in MB, for example: 70% or 100.
Default: 70%
-
walSnapshotSize
Defines the number of WAL files to attempt to remove in a snapshot. This, multiplied by the interval, determines how often snapshots are taken.
Default: 600
- Returns:
- Defines the number of WAL files to attempt to remove in a snapshot. This, multiplied by the interval,
determines how often snapshots are taken.
Default: 600
-
walMaxWriteBufferSize
Specifies the maximum number of write requests that can be buffered before a flush must be executed and succeed.
Default: 100000
- Returns:
- Specifies the maximum number of write requests that can be buffered before a flush must be executed and
succeed.
Default: 100000
-
snapshottedWalFilesToKeep
Specifies the number of snapshotted WAL files to retain in the object store. Flushing the WAL files does not clear the WAL files immediately; they are deleted when the number of snapshotted WAL files exceeds this number.
Default: 300
- Returns:
- Specifies the number of snapshotted WAL files to retain in the object store. Flushing the WAL files does
not clear the WAL files immediately; they are deleted when the number of snapshotted WAL files exceeds
this number.
Default: 300
-
preemptiveCacheAge
Specifies the interval to prefetch into the Parquet cache during compaction.
Default: 3d
- Returns:
- Specifies the interval to prefetch into the Parquet cache during compaction.
Default: 3d
-
parquetMemCachePrunePercentage
Specifies the percentage of entries to prune during a prune operation on the in-memory Parquet cache.
Default: 0.1
- Returns:
- Specifies the percentage of entries to prune during a prune operation on the in-memory Parquet cache.
Default: 0.1
-
parquetMemCachePruneInterval
Sets the interval to check if the in-memory Parquet cache needs to be pruned.
Default: 1s
- Returns:
- Sets the interval to check if the in-memory Parquet cache needs to be pruned.
Default: 1s
-
disableParquetMemCache
Disables the in-memory Parquet cache. By default, the cache is enabled.
- Returns:
- Disables the in-memory Parquet cache. By default, the cache is enabled.
-
parquetMemCacheQueryPathDuration
Specifies the time window for caching recent Parquet files in memory.
Default: 5h
- Returns:
- Specifies the time window for caching recent Parquet files in memory.
Default: 5h
-
lastCacheEvictionInterval
Specifies the interval to evict expired entries from the Last-N-Value cache, expressed as a human-readable duration, for example: 20s, 1m, 1h.
Default: 10s
- Returns:
- Specifies the interval to evict expired entries from the Last-N-Value cache, expressed as a human-readable duration, for example: 20s, 1m, 1h.
Default: 10s
-
distinctCacheEvictionInterval
Specifies the interval to evict expired entries from the distinct value cache, expressed as a human-readable duration, for example: 20s, 1m, 1h.
Default: 10s
- Returns:
- Specifies the interval to evict expired entries from the distinct value cache, expressed as a human-readable duration, for example: 20s, 1m, 1h.
Default: 10s
-
gen1Duration
Specifies the duration that Parquet files are arranged into. Data timestamps land each row into a file of this duration. Supported durations are 1m, 5m, and 10m. These files are known as “generation 1” files, which the compactor can merge into larger generations.
Default: 10m
- Returns:
- Specifies the duration that Parquet files are arranged into. Data timestamps land each row into a file of
this duration. Supported durations are 1m, 5m, and 10m. These files are known as “generation 1” files,
which the compactor can merge into larger generations.
Default: 10m
-
execMemPoolBytes
Specifies the size of the memory pool used during query execution. Can be given as an absolute value in bytes or as a percentage of the total available memory, for example: 8000000000 or 10%.
Default: 20%
- Returns:
- Specifies the size of the memory pool used during query execution. Can be given as an absolute value in bytes or as a percentage of the total available memory, for example: 8000000000 or 10%.
Default: 20%
-
parquetMemCacheSize
Specifies the size of the in-memory Parquet cache in megabytes or percentage of total available memory.
Default: 20%
- Returns:
- Specifies the size of the in-memory Parquet cache in megabytes or percentage of total available
memory.
Default: 20%
-
walReplayFailOnError
Determines whether WAL replay should fail when encountering errors.
Default: false
- Returns:
- Determines whether WAL replay should fail when encountering errors.
Default: false
-
walReplayConcurrencyLimit
Concurrency limit during WAL replay. Setting this number too high can lead to OOM. The default is dynamically determined.
Default: max(num_cpus, 10)
- Returns:
- Concurrency limit during WAL replay. Setting this number too high can lead to OOM. The default is
dynamically determined.
Default: max(num_cpus, 10)
-
tableIndexCacheMaxEntries
Specifies the maximum number of entries in the table index cache.
Default: 1000
- Returns:
- Specifies the maximum number of entries in the table index cache.
Default: 1000
-
tableIndexCacheConcurrencyLimit
Limits the concurrency level for table index cache operations.
Default: 8
- Returns:
- Limits the concurrency level for table index cache operations.
Default: 8
-
gen1LookbackDuration
Specifies how far back to look when creating generation 1 Parquet files.
Default: 24h
- Returns:
- Specifies how far back to look when creating generation 1 Parquet files.
Default: 24h
-
retentionCheckInterval
The interval at which retention policies are checked and enforced. Enter as a human-readable time, for example: 30m or 1h.
Default: 30m
- Returns:
- The interval at which retention policies are checked and enforced. Enter as a human-readable time, for example: 30m or 1h.
Default: 30m
-
deleteGracePeriod
Specifies the grace period before permanently deleting data.
Default: 24h
- Returns:
- Specifies the grace period before permanently deleting data.
Default: 24h
-
hardDeleteDefaultDuration
Sets the default duration for hard deletion of data.
Default: 90d
- Returns:
- Sets the default duration for hard deletion of data.
Default: 90d
-
ingestQueryInstances
Specifies the number of instances in the DbCluster that can both ingest and query.
- Returns:
- Specifies the number of instances in the DbCluster that can both ingest and query.
-
queryOnlyInstances
Specifies the number of instances in the DbCluster that can only query.
- Returns:
- Specifies the number of instances in the DbCluster that can only query.
-
dedicatedCompactor
Specifies whether the compactor should run as a standalone instance.
- Returns:
- Specifies whether the compactor should run as a standalone instance.
-
compactionRowLimit
Specifies the soft limit for the number of rows per file that the compactor writes. The compactor may write more rows than this limit.
Default: 1000000
- Returns:
- Specifies the soft limit for the number of rows per file that the compactor writes. The compactor may
write more rows than this limit.
Default: 1000000
-
compactionMaxNumFilesPerPlan
Sets the maximum number of files included in any compaction plan.
Default: 500
- Returns:
- Sets the maximum number of files included in any compaction plan.
Default: 500
-
compactionGen2Duration
Specifies the duration of the first level of compaction (gen2). Later levels of compaction are multiples of this duration. This value should be equal to or greater than the gen1 duration.
Default: 20m
- Returns:
- Specifies the duration of the first level of compaction (gen2). Later levels of compaction are multiples
of this duration. This value should be equal to or greater than the gen1 duration.
Default: 20m
-
compactionMultipliers
Specifies a comma-separated list of multiples defining the duration of each level of compaction. The number of elements in the list determines the number of compaction levels. The first element specifies the duration of the first level (gen3); subsequent levels are multiples of the previous level.
Default: 3,4,6,5
- Returns:
- Specifies a comma-separated list of multiples defining the duration of each level of compaction. The
number of elements in the list determines the number of compaction levels. The first element specifies
the duration of the first level (gen3); subsequent levels are multiples of the previous level.
Default: 3,4,6,5
-
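To make the multiplier semantics concrete, here is a stdlib-only sketch (not part of the SDK) that derives each compaction level's duration from a gen2 duration and a multiplier list, using the documented defaults of 20m and 3,4,6,5:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

public class CompactionLevels {
    // Each level's duration is the previous level's duration times its multiplier,
    // starting from the gen2 duration (this reading follows the description above).
    static List<Duration> levelDurations(Duration gen2, int[] multipliers) {
        List<Duration> levels = new ArrayList<>();
        Duration current = gen2;
        for (int m : multipliers) {
            current = current.multipliedBy(m);
            levels.add(current);
        }
        return levels;
    }

    public static void main(String[] args) {
        // Defaults: gen2 = 20m, multipliers = 3,4,6,5
        // Yields gen3 = 1h, gen4 = 4h, gen5 = 24h, gen6 = 120h.
        System.out.println(levelDurations(Duration.ofMinutes(20), new int[] {3, 4, 6, 5}));
    }
}
```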
compactionCleanupWait
Specifies the amount of time that the compactor waits after finishing a compaction run to delete files marked as needing deletion during that compaction run.
Default: 10m
- Returns:
- Specifies the amount of time that the compactor waits after finishing a compaction run to delete files
marked as needing deletion during that compaction run.
Default: 10m
-
compactionCheckInterval
Specifies how often the compactor checks for new compaction work to perform.
Default: 10s
- Returns:
- Specifies how often the compactor checks for new compaction work to perform.
Default: 10s
-
lastValueCacheDisableFromHistory
Disables populating the last-N-value cache from historical data. If disabled, the cache is still populated with data from the write-ahead log (WAL).
- Returns:
- Disables populating the last-N-value cache from historical data. If disabled, the cache is still populated with data from the write-ahead log (WAL).
-
distinctValueCacheDisableFromHistory
Disables populating the distinct value cache from historical data. If disabled, the cache is still populated with data from the write-ahead log (WAL).
- Returns:
- Disables populating the distinct value cache from historical data. If disabled, the cache is still populated with data from the write-ahead log (WAL).
-
replicationInterval
Specifies the interval at which data replication occurs between cluster nodes.
Default: 250ms
- Returns:
- Specifies the interval at which data replication occurs between cluster nodes.
Default: 250ms
-
catalogSyncInterval
Defines how often the catalog synchronizes across cluster nodes.
Default: 10s
- Returns:
- Defines how often the catalog synchronizes across cluster nodes.
Default: 10s
-
toBuilder
Description copied from interface: ToCopyableBuilder
Take this object and create a builder that contains all of the current property values of this object.
- Specified by:
  toBuilder in interface ToCopyableBuilder<InfluxDBv3EnterpriseParameters.Builder, InfluxDBv3EnterpriseParameters>
- Returns:
- a builder for type T
-
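toBuilder supports the usual copy-and-modify pattern for immutable SDK model objects. A sketch, assuming the standard `timestreaminfluxdb` model package (queryLogSize value is illustrative):

```java
import software.amazon.awssdk.services.timestreaminfluxdb.model.InfluxDBv3EnterpriseParameters;

public class CopyModifyExample {
    // Produce a modified copy; the original object is immutable and unchanged.
    static InfluxDBv3EnterpriseParameters withLargerQueryLog(InfluxDBv3EnterpriseParameters base) {
        return base.toBuilder()
                .queryLogSize(5000) // illustrative value; all other fields carry over
                .build();
    }
}
```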
builder
-
serializableBuilderClass
-
hashCode
-
equals
-
equalsBySdkFields
Description copied from interface: SdkPojo
Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.
If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.
- Specified by:
  equalsBySdkFields in interface SdkPojo
- Parameters:
  obj - the object to be compared with
- Returns:
- true if the other object equals this object by SDK fields, false otherwise.
-
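As a sketch of the expected behavior (assuming the standard model package), two instances built from the same modeled field values compare equal by SDK fields:

```java
import software.amazon.awssdk.services.timestreaminfluxdb.model.InfluxDBv3EnterpriseParameters;

public class SdkFieldEquality {
    // equalsBySdkFields compares only modeled fields, so two separately built
    // instances with identical field values should be equal by SDK fields.
    static boolean sameBySdkFields() {
        InfluxDBv3EnterpriseParameters a =
                InfluxDBv3EnterpriseParameters.builder().queryLogSize(1000).build();
        InfluxDBv3EnterpriseParameters b =
                InfluxDBv3EnterpriseParameters.builder().queryLogSize(1000).build();
        return a.equalsBySdkFields(b);
    }
}
```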
toString
-
getValueForField
-
sdkFields
-
sdkFieldNameToField
- Specified by:
  sdkFieldNameToField in interface SdkPojo
- Returns:
- The mapping between the field name and its corresponding field.
-