S3Settings

Settings for exporting data to Amazon S3.
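Instances are typically constructed through the companion-object DSL builder. The following is a minimal sketch, not a definitive example: the bucket name, folder, and role ARN are placeholder assumptions, and the property names are inferred from the descriptions below, so verify them against the generated API.

```kotlin
import aws.sdk.kotlin.services.databasemigrationservice.model.S3Settings

val settings = S3Settings {
    bucketName = "my-dms-target-bucket"    // placeholder bucket name
    bucketFolder = "exports"               // tables written under exports/<schema>/<table>/
    serviceAccessRoleArn = "arn:aws:iam::123456789012:role/dms-s3-access" // placeholder role ARN
    addColumnName = true                   // emit column headers in .csv output
}
```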

Types

class Builder
object Companion

Properties

An optional parameter that, when set to true or y, adds column name information to the .csv output file.

Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding on string data. The default value is false.

An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

The name of the S3 bucket.

A value that enables DMS to specify a predefined (canned) access control list for objects created in an Amazon S3 bucket as .csv or .parquet files. For more information about Amazon S3 canned ACLs, see Canned ACL in the Amazon S3 Developer Guide.

A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.

A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

Maximum length of the interval, defined in seconds, after which to output a file to Amazon S3.

Minimum file size, defined in kilobytes, to reach for a file output to Amazon S3.

Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath is set, DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if you set PreserveTransactions to true, DMS verifies that you have set this parameter to a folder path on your S3 target where DMS can save the transaction order for the CDC load. DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder and BucketName.

An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.

This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue is set to true, specify a string value that you want DMS to use for all columns not included in the supplemental log. If you do not specify a string value, DMS uses the null value for these columns regardless of the UseCsvNoSupValue setting.

An optional parameter that specifies how DMS treats null values. While handling the null value, you can use this parameter to pass a user-defined string as null when writing to the target. For example, when target columns are nullable, you can use this option to differentiate between the empty string value and the null value. So, if you set this parameter value to the empty string ("" or ''), DMS treats the empty string as the null value instead of NULL.

The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n).

The format of the data that you want to use for output. You can choose one of the following: csv (the default) or parquet.

The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionedEnabled is set to true.

When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.

Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionedEnabled is set to true.

When creating an S3 target endpoint, set DatePartitionTimezone to convert the current UTC time into a specified time zone. The conversion occurs when a date partition folder is created and a CDC filename is generated. The time zone format is Area/Location. Use this parameter when DatePartitionedEnabled is set to true.
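As a hedged sketch of the date-partitioning settings, the following assumes property names inferred from the descriptions above and a placeholder time zone; verify both against the generated API:

```kotlin
val partitioned = S3Settings {
    datePartitionEnabled = true            // partition folders by transaction commit date
    datePartitionTimezone = "Asia/Tokyo"   // Area/Location format; placeholder zone
}
```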

The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this size, the column is stored using the PLAIN encoding type. This parameter defaults to 1024 * 1024 bytes (1 MiB). This size is used for .parquet file format only.

A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.

The type of encoding you are using: rle-dictionary (the default), plain, or plain-dictionary.

The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.

To specify a bucket owner and prevent sniping, you can use the ExpectedBucketOwner endpoint setting.

Specifies how tables are defined in the S3 source files only.

When true, allows Glue to catalog your S3 bucket. Creating a Glue catalog lets you use Athena to query your data.

When this value is set to 1, DMS ignores the first row header in a .csv file. A value of 1 turns on the feature; a value of 0 turns off the feature.

A value that enables a full load to write INSERT operations to the comma-separated value (.csv) or .parquet output files only to indicate how the rows were added to the source database.

A value that specifies the maximum size (in KB) of any .csv file to be created while migrating to an S3 target during full load.

A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

If set to true, DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.

For an S3 source, when this value is set to true or y, each leading double quotation mark has to be followed by an ending double quotation mark. This formatting complies with RFC 4180. When this value is set to false or n, string literals are copied to the target as is. In this case, a delimiter (row or column) signals the end of the field. Thus, you can't use a delimiter as part of the string, because it signals the end of the value.

The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.

If you are using SSE_KMS for the EncryptionMode, provide the KMS key ID. The key that you use needs an attached policy that enables Identity and Access Management (IAM) user permissions and allows use of the key.

The Amazon Resource Name (ARN) used by the service to access the IAM role. The role must allow the iam:PassRole action. It is a required parameter that enables DMS to write and read objects from an S3 bucket.

A value that when nonblank causes DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true for columns not included in the supplemental log, DMS uses the value specified by CsvNoSupValue. If not set or set to false, DMS uses the null value for these columns.

When set to true, this parameter uses the task start time as the timestamp column value instead of the time the data is written to the target. For full load, when useTaskStartTimeForFullLoadTimestamp is set to true, each row of the timestamp column contains the task start time. For CDC loads, each row of the timestamp column contains the transaction commit time.

Functions

inline fun copy(block: S3Settings.Builder.() -> Unit = {}): S3Settings
open operator override fun equals(other: Any?): Boolean
open override fun hashCode(): Int
open override fun toString(): String
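copy derives a modified instance from an existing one by re-running the builder with the original values as defaults. A sketch, assuming the placeholder values shown:

```kotlin
val base = S3Settings { bucketName = "my-dms-target-bucket" } // placeholder bucket
val withFolder = base.copy { bucketFolder = "exports" }       // other properties carry over unchanged
```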