Package-level declarations
Types
You don't have sufficient permissions to perform this action.
A structure that contains information about one CloudWatch Logs account policy.
This object defines one key that will be added with the addKeys processor.
Contains information about one anomaly detector in the account.
Base class for all service-related exceptions thrown by the CloudWatchLogs client.
A structure containing information about the default settings and available settings that you can use to configure a delivery or a delivery destination.
This structure contains the default values that are used for each configuration parameter when you use CreateDelivery to create a delivery under the current service type, resource type, and log type.
This operation attempted to create a resource that already exists.
This object defines one value to be copied with the copyValue processor.
The event was already logged.
This processor converts a datetime string into a format that you specify.
This processor deletes entries from a log event. These entries are key-value pairs.
This structure contains information about one delivery destination in your account. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, Firehose, and X-Ray are supported as delivery destinations.
A structure that contains information about one logs delivery destination.
This structure contains information about one delivery source in your account. A delivery source is an Amazon Web Services resource that sends logs to an Amazon Web Services destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose.
Represents a cross-account destination that receives subscription log events.
The method used to distribute log data to the destination, which can be either random or grouped by log stream.
Represents an export task.
Represents the status of an export task.
Represents the status of an export task.
This structure describes one log event field that is used as an index in at least one index policy in this account.
A structure containing the extracted fields from a log event. These fields are extracted based on the log format and can be used for structured querying and analysis.
Represents a matched event.
The parameters for the GetLogObject operation.
The response from the GetLogObject operation.
A stream of structured log data returned by the GetLogObject operation. This stream contains log events with their associated metadata and extracted fields.
This structure contains information about one field index policy in this account.
Represents a log event, which is a record of activity that was recorded by the application or resource being monitored.
This structure contains information about the integration configuration. For an integration with OpenSearch Service, this includes information about OpenSearch Service resources such as the collection, the workspace, and policies.
This structure contains information about one CloudWatch Logs integration. This structure is returned by a ListIntegrations operation.
An internal error occurred during the streaming of log data. This exception is thrown when there's an issue with the internal streaming mechanism used by the GetLogObject operation.
The operation is not valid on the specified resource.
A parameter is specified incorrectly.
The sequence token is not valid. You can get the correct sequence token in the expectedSequenceToken field in the InvalidSequenceTokenException message.
You have reached the maximum number of resources that can be created.
This object contains the information for one log event returned in a Live Tail stream.
This object contains the metadata for one LiveTailSessionUpdate structure. It indicates whether that update includes only a sample of 500 log events out of a larger number of ingested log events, or if it contains all of the matching log events ingested during that second of time.
This object contains information about this Live Tail session, including the log groups included and the log stream filters, if any.
This object contains the log events and metadata for a Live Tail session.
The fields contained in log events found by a GetLogGroupFields operation, along with the percentage of queried log events in which each field appears.
This structure contains information about one log group in your account.
This processor converts a string to lowercase.
The query string is not valid. Details about this error are displayed in a QueryCompileError object. For more information, see QueryCompileError.
Metric filters express how CloudWatch Logs extracts metric observations from ingested log events and transforms them into metric data in a CloudWatch metric (a brief usage sketch follows this list).
Represents a matched event.
Indicates how to transform ingested log events to metric data in a CloudWatch metric.
This object defines one key that will be moved with the moveKey processor.
This structure contains information about the OpenSearch Service application used for this integration. An OpenSearch Service application is the web application created by the integration with CloudWatch Logs. It hosts the vended logs dashboards.
This structure contains information about the OpenSearch Service collection used for this integration. An OpenSearch Service collection is a logical grouping of one or more indexes that represent an analytics workload. For more information, see Creating and managing OpenSearch Service Serverless collections.
This structure contains information about the OpenSearch Service data access policy used for this integration. The access policy defines the access controls for the collection. This data access policy was automatically created as part of the integration setup. For more information about OpenSearch Service data access policies, see Data access control for Amazon OpenSearch Serverless in the OpenSearch Service Developer Guide.
This structure contains information about the OpenSearch Service data source used for this integration. This data source was created as part of the integration setup. An OpenSearch Service data source defines the source and destination for OpenSearch Service queries. It includes the role required to execute queries and write to collections.
This structure contains information about the OpenSearch Service encryption policy used for this integration. The encryption policy was created automatically when you created the integration. For more information, see Encryption policies in the OpenSearch Service Developer Guide.
This structure contains complete information about one CloudWatch Logs integration. This structure is returned by a GetIntegration operation.
This structure contains information about the OpenSearch Service data lifecycle policy used for this integration. The lifecycle policy determines the lifespan of the data in the collection. It was automatically created as part of the integration setup.
This structure contains information about the OpenSearch Service network policy used for this integration. The network policy assigns network access settings to collections. For more information, see Network policies in the OpenSearch Service Developer Guide.
This structure contains configuration details about an integration between CloudWatch Logs and OpenSearch Service.
This structure contains information about the status of an OpenSearch Service resource.
This structure contains information about the OpenSearch Service workspace used for this integration. An OpenSearch Service workspace is the collection of dashboards along with other OpenSearch Service tools. This workspace was created automatically as part of the integration setup. For more information, see Centralized OpenSearch user interface (Dashboards) with OpenSearch Service.
Multiple concurrent requests to update the same resource were in conflict.
Represents a log event.
This processor parses CloudFront vended logs, extracts fields, and converts them into JSON format. Encoded field values are decoded. Values that are integers and doubles are treated as such. For more information about this processor, including examples, see parseCloudfront.
This processor parses a specified field in the original log event into key-value pairs.
Use this processor to parse RDS for PostgreSQL vended logs, extract fields, and convert them into JSON format. This processor always processes the entire log event message. For more information about this processor, including examples, see parsePostGres.
Use this processor to parse Route 53 vended logs, extract fields, and convert them into JSON format. This processor always processes the entire log event message. For more information about this processor, including examples, see parseRoute53.
This processor converts logs into Open Cybersecurity Schema Framework (OCSF) events.
A structure that contains information about one pattern token related to an anomaly.
Reserved.
Reserved.
This structure contains details about a saved CloudWatch Logs Insights query definition.
Contains the number of log events scanned by the query, the number of log events that matched the query criteria, and the total number of bytes in the log events that were scanned.
A structure that represents a valid record field header and whether it is mandatory.
If an entity is rejected when a PutLogEvents request is made, this includes details about the reason for the rejection.
Represents the rejected events.
This object defines one key that will be renamed with the renameKey processor.
Use this processor to rename keys in a log event.
The specified resource already exists.
This structure contains configuration details about an integration between CloudWatch Logs and another entity.
The specified resource does not exist.
A policy enabling one or more entities to put logs to a log group in this account.
Contains one field from one log event returned by a CloudWatch Logs Insights query, along with the value of that field.
This structure contains delivery configurations that apply only when the delivery destination resource is an S3 bucket.
Represents the search status of a log stream.
This request exceeds a service quota.
The service cannot complete the request.
This exception is returned if an unknown error occurs during a Live Tail session.
This exception is returned in a Live Tail stream when the Live Tail session times out. Live Tail sessions time out after three hours.
Use this processor to split a field into an array of strings using a delimiting character.
This object defines one log field that will be split with the splitString processor.
This object includes the stream returned by your StartLiveTail request.
Represents a subscription filter.
This processor matches a key’s value against a regular expression and replaces all matches with a replacement string.
This object defines one log field key that will be replaced using the substituteString processor.
If you are suppressing an anomaly temporarily, this structure defines how long the suppression period lasts.
The request was throttled because of quota limits.
A resource can have no more than 50 tags.
This structure contains information for one log event that has been processed by a log transformer.
Use this processor to remove leading and trailing whitespace.
Use this processor to convert a value type associated with the specified key to the specified type. It's a casting processor that changes the types of the specified fields. Values can be converted into one of the following data types: integer, double, string, or boolean.
This object defines one value type that will be converted using the typeConverter processor.
The most likely cause is an Amazon Web Services access key ID or secret key that's not valid.
This processor converts a string field to uppercase.
One of the parameters for the request is not valid.
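The MetricFilter and MetricTransformation entries above describe how matching log events become metric data in a CloudWatch metric. As a minimal, hypothetical sketch of how those model types are used with the Kotlin SDK's builder DSL (the log group name, filter name, metric name, and namespace below are placeholders, not values from this reference), a metric filter might be created like this:

import aws.sdk.kotlin.services.cloudwatchlogs.CloudWatchLogsClient
import aws.sdk.kotlin.services.cloudwatchlogs.model.MetricTransformation
import aws.sdk.kotlin.services.cloudwatchlogs.model.PutMetricFilterRequest

suspend fun main() {
    // All names below are hypothetical placeholders.
    val request = PutMetricFilterRequest {
        logGroupName = "/example/app"
        filterName = "ErrorCount"
        filterPattern = "ERROR"            // match log events whose message contains ERROR
        metricTransformations = listOf(
            MetricTransformation {
                metricName = "ErrorCount"
                metricNamespace = "Example/App"
                metricValue = "1"          // emit 1 for each matching log event
            }
        )
    }

    // Credentials are resolved from the default provider chain.
    CloudWatchLogsClient { region = "us-east-1" }.use { client ->
        client.putMetricFilter(request)
    }
}

The same builder pattern applies to the other request and model types listed above; each operation on the client accepts the corresponding request type and returns the corresponding response type.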