Class RedshiftSettings

- All Implemented Interfaces:
  Serializable, SdkPojo, ToCopyableBuilder<RedshiftSettings.Builder, RedshiftSettings>

Provides information that defines an Amazon Redshift endpoint.
Nested Class Summary

- RedshiftSettings.Builder
Method Summary
- final Boolean acceptAnyDate(): A value that indicates whether to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error.
- final String afterConnectScript(): Code to run after connecting.
- final String bucketFolder(): An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
- final String bucketName(): The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
- static RedshiftSettings.Builder builder()
- final Boolean caseSensitiveNames(): If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true.
- final Boolean compUpdate(): If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty.
- final Integer connectionTimeout(): A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- final String databaseName(): The name of the Amazon Redshift data warehouse (service) that you are working with.
- final String dateFormat(): The date format that you are using.
- final Boolean emptyAsNull(): A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL.
- final EncryptionModeValue encryptionMode(): The type of server-side encryption that you want to use for your data.
- final String encryptionModeAsString(): The type of server-side encryption that you want to use for your data.
- final boolean equals(Object obj)
- final boolean equalsBySdkFields(Object obj): Indicates whether some other object is "equal to" this one by SDK fields.
- final Boolean explicitIds(): This setting is only valid for a full-load migration task.
- final Integer fileTransferUploadStreams(): The number of threads used to upload a single file.
- final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
- final int hashCode()
- final Integer loadTimeout(): The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
- final Boolean mapBooleanAsBoolean(): When true, lets Redshift migrate the boolean type as boolean.
- final Integer maxFileSize(): The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift.
- final String password(): The password for the user named in the username property.
- final Integer port(): The port number for Amazon Redshift.
- final Boolean removeQuotes(): A value that specifies whether to remove surrounding quotation marks from strings in the incoming data.
- final String replaceChars(): A value that specifies the substitution characters for the invalid characters specified in ReplaceInvalidChars.
- final String replaceInvalidChars(): A list of characters that you want to replace.
- final String secretsManagerAccessRoleArn(): The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret.
- final String secretsManagerSecretId(): The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
- static Class<? extends RedshiftSettings.Builder> serializableBuilderClass()
- final String serverName(): The name of the Amazon Redshift cluster you are using.
- final String serverSideEncryptionKmsKeyId(): The KMS key ID.
- final String serviceAccessRoleArn(): The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
- final String timeFormat(): The time format that you want to use.
- RedshiftSettings.Builder toBuilder(): Take this object and create a builder that contains all of the current property values of this object.
- final String toString(): Returns a string representation of this object.
- final Boolean trimBlanks(): A value that specifies whether to remove the trailing white space characters from a VARCHAR string.
- final Boolean truncateColumns(): A value that specifies whether to truncate data in columns to the appropriate number of characters, so that the data fits in the column.
- final String username(): An Amazon Redshift user name for a registered user.
- final Integer writeBufferSize(): The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance.

Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder:
copy
-
Method Details
-
acceptAnyDate
A value that indicates whether to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).

This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.

- Returns:
  A value that indicates whether to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
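As a sketch of how the date-handling settings above are typically supplied through the builder (the pairing of acceptAnyDate with dateFormat follows the ACCEPTANYDATE/DATEFORMAT advice; this is illustrative, not a recommended configuration):

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class AcceptAnyDateExample {
    public static void main(String[] args) {
        // Pair ACCEPTANYDATE with DATEFORMAT, as the description above advises.
        RedshiftSettings settings = RedshiftSettings.builder()
                .acceptAnyDate(true)  // tolerate invalid dates instead of failing the load
                .dateFormat("auto")   // let Redshift recognize most date strings
                .build();

        System.out.println(settings.acceptAnyDate());
    }
}
```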
-
afterConnectScript
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
- Returns:
- Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
-
bucketFolder
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.

For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.

For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.

- Returns:
  An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster. For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
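The intermediate S3 staging described above is configured with bucketName, bucketFolder, and a role that can write to the bucket. A minimal sketch (the bucket name, folder, and role ARN below are placeholder values, not real resources):

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class IntermediateStorageExample {
    public static void main(String[] args) {
        // Placeholder bucket, folder, and role ARN for illustration only.
        RedshiftSettings settings = RedshiftSettings.builder()
                .bucketName("my-dms-staging-bucket")  // intermediate S3 bucket for .csv files
                .bucketFolder("redshift-staging")     // full load writes to redshift-staging/TableID
                .serviceAccessRoleArn("arn:aws:iam::123456789012:role/dms-redshift-access")
                .build();

        System.out.println(settings.bucketFolder());
    }
}
```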
-
bucketName
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
- Returns:
- The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
-
caseSensitiveNames
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true. The default is false.

- Returns:
  If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true. The default is false.
-
compUpdate
If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.

- Returns:
  If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
-
connectionTimeout
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- Returns:
- A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
-
databaseName
The name of the Amazon Redshift data warehouse (service) that you are working with.
- Returns:
- The name of the Amazon Redshift data warehouse (service) that you are working with.
-
dateFormat
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.

If your date and time values use formats different from each other, set this parameter to auto.

- Returns:
  The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string. If your date and time values use formats different from each other, set this parameter to auto.
-
emptyAsNull
A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.

- Returns:
  A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
-
encryptionMode
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.

For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS.

To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".

If the service returns an enum value that is not available in the current SDK version, encryptionMode will return EncryptionModeValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from encryptionModeAsString().

- Returns:
  The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
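A defensive way to read this enum, following the UNKNOWN_TO_SDK_VERSION note above (a sketch; the helper method and its caller are assumptions, but the enum constant and accessor names come from this page):

```java
import software.amazon.awssdk.services.databasemigration.model.EncryptionModeValue;
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class EncryptionModeCheck {
    static String describeEncryption(RedshiftSettings settings) {
        EncryptionModeValue mode = settings.encryptionMode();
        if (mode == EncryptionModeValue.UNKNOWN_TO_SDK_VERSION) {
            // The service returned a value this SDK version doesn't model yet;
            // fall back to the raw string the service sent.
            return "unmodeled mode: " + settings.encryptionModeAsString();
        }
        return "mode: " + mode;
    }
}
```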
- See Also:
  EncryptionModeValue
-
encryptionModeAsString
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.

For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS.

To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".

If the service returns an enum value that is not available in the current SDK version, encryptionMode will return EncryptionModeValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from encryptionModeAsString().

- Returns:
  The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
- See Also:
  EncryptionModeValue
-
explicitIds
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.

- Returns:
  This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
-
fileTransferUploadStreams
The number of threads used to upload a single file. This is the number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload; for more information, see Multipart upload overview. FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.

- Returns:
  The number of threads used to upload a single file, that is, the number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
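The upload-performance settings on this page can be combined through the builder. A sketch with illustrative values only (these are not tuning recommendations; the defaults are 10 streams, a 1,048,576 KB max file size, and a 1,000 KB write buffer):

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class UploadTuningExample {
    public static void main(String[] args) {
        RedshiftSettings settings = RedshiftSettings.builder()
                .fileTransferUploadStreams(16)  // parallel multipart-upload streams (1 through 64)
                .maxFileSize(262144)            // cap staged .csv files at 256 MB (value is in KB)
                .writeBufferSize(2000)          // 2,000 KB in-memory .csv write buffer
                .build();

        System.out.println(settings.fileTransferUploadStreams());
    }
}
```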
-
loadTimeout
The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
- Returns:
- The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
-
maxFileSize
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).

- Returns:
  The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
-
password
The password for the user named in the username property.

- Returns:
  The password for the user named in the username property.
-
port
The port number for Amazon Redshift. The default value is 5439.
- Returns:
- The port number for Amazon Redshift. The default value is 5439.
-
removeQuotes
A value that specifies whether to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.

- Returns:
  A value that specifies whether to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
-
replaceInvalidChars
A list of characters that you want to replace. Use with ReplaceChars.

- Returns:
  A list of characters that you want to replace. Use with ReplaceChars.
-
replaceChars
A value that specifies the substitution characters for the invalid characters specified in ReplaceInvalidChars. The default is "?".

- Returns:
  A value that specifies the substitution characters for the invalid characters specified in ReplaceInvalidChars. The default is "?".
-
serverName
The name of the Amazon Redshift cluster you are using.
- Returns:
- The name of the Amazon Redshift cluster you are using.
-
serviceAccessRoleArn
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action.

- Returns:
  The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action.
-
serverSideEncryptionKmsKeyId
The KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.

- Returns:
  The KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
-
timeFormat
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. The default is auto. Using auto recognizes most strings, even some that aren't supported when you use a time format string.

If your date and time values use formats different from each other, set this parameter to auto.

- Returns:
  The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. The default is auto. Using auto recognizes most strings, even some that aren't supported when you use a time format string. If your date and time values use formats different from each other, set this parameter to auto.
-
trimBlanks
A value that specifies whether to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.

- Returns:
  A value that specifies whether to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
-
truncateColumns
A value that specifies whether to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.

- Returns:
  A value that specifies whether to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
-
username
An Amazon Redshift user name for a registered user.
- Returns:
- An Amazon Redshift user name for a registered user.
-
writeBufferSize
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
- Returns:
- The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
-
secretsManagerAccessRoleArn
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.

You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.

- Returns:
  The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint. You can specify one of two sets of values for these permissions: the values for this setting and SecretsManagerSecretId, or clear-text values for UserName, Password, ServerName, and Port. You can't specify both.
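The two mutually exclusive credential styles described above can be sketched as follows (every ARN, name, and password here is a made-up placeholder):

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class CredentialStylesExample {
    public static void main(String[] args) {
        // Style 1: reference a Secrets Manager secret via an access role (placeholder ARNs).
        RedshiftSettings viaSecret = RedshiftSettings.builder()
                .secretsManagerAccessRoleArn("arn:aws:iam::123456789012:role/dms-secret-access")
                .secretsManagerSecretId("my-redshift-endpoint-secret")
                .build();

        // Style 2: clear-text connection details. The page above says not to combine
        // this with style 1 on the same endpoint.
        RedshiftSettings viaClearText = RedshiftSettings.builder()
                .username("dms_user")
                .password("example-password")
                .serverName("my-cluster.example.us-east-1.redshift.amazonaws.com")
                .port(5439)
                .databaseName("analytics")
                .build();

        System.out.println(viaSecret.secretsManagerSecretId());
        System.out.println(viaClearText.port());
    }
}
```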
-
secretsManagerSecretId
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.

- Returns:
  The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
-
mapBooleanAsBoolean
When true, lets Redshift migrate the boolean type as boolean. By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect.

- Returns:
  When true, lets Redshift migrate the boolean type as boolean. By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect.
-
toBuilder
Description copied from interface: ToCopyableBuilder

Take this object and create a builder that contains all of the current property values of this object.

- Specified by:
  toBuilder in interface ToCopyableBuilder<RedshiftSettings.Builder, RedshiftSettings>
- Returns:
  a builder for type T
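Because SDK model objects like RedshiftSettings are immutable, toBuilder() supports the usual copy-and-modify pattern. A minimal sketch:

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class CopyModifyExample {
    public static void main(String[] args) {
        RedshiftSettings original = RedshiftSettings.builder()
                .port(5439)
                .databaseName("analytics")
                .build();

        // toBuilder() copies every property of the original into a new builder,
        // so a single field can be overridden without restating the rest.
        RedshiftSettings modified = original.toBuilder()
                .port(5440)
                .build();

        System.out.println(original.port());
        System.out.println(modified.port());
    }
}
```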
-
builder
-
serializableBuilderClass
-
hashCode
-
equals
-
equalsBySdkFields
Description copied from interface: SdkPojo

Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.

If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.

- Specified by:
  equalsBySdkFields in interface SdkPojo
- Parameters:
  obj - the object to be compared with
- Returns:
  true if the other object equals this object by SDK fields, false otherwise.
-
toString
-
getValueForField
-
sdkFields
-
sdkFieldNameToField
- Specified by:
  sdkFieldNameToField in interface SdkPojo
- Returns:
  The mapping between the field name and its corresponding field.
-