Package-level declarations
Types
You are not authorized to perform the action.
Provides face metadata for the faces that are associated to a specific UserID.
Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.
An image that is picked from the Face Liveness video and returned for audit trail purposes, returned as Base64-encoded bytes.
A filter that allows you to control the black frame detection by specifying the black levels and pixel coverage of black pixels in a frame. As videos can come from multiple sources, formats, and time periods, they may contain different standards and varying noise levels for black frames that need to be accounted for. For more information, see StartSegmentDetection.
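To make the black-level and pixel-coverage idea concrete, here is a minimal Kotlin sketch of such a check. The parameter names (maxBlackLuminance, minCoveragePercent) are illustrative assumptions modeled on the settings described above, not the SDK's BlackFrame fields.

```kotlin
// Illustrative black-frame check: a frame counts as "black" when enough of its pixels
// sit at or below a luminance threshold. Parameter names are assumptions modeled on the
// black-level / pixel-coverage settings described above, not the SDK's BlackFrame type.
fun isBlackFrame(
    luminances: IntArray,          // per-pixel luminance values, 0..255
    maxBlackLuminance: Int = 40,   // pixels at or below this value count as black
    minCoveragePercent: Double = 99.0
): Boolean {
    if (luminances.isEmpty()) return false
    val blackPixels = luminances.count { it <= maxBlackLuminance }
    return blackPixels * 100.0 / luminances.size >= minCoveragePercent
}
```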
Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the left and top sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
Information about a recognized celebrity.
Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.
An ordered list of preferred challenge types and versions.
Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.
Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.
A User with the same Id already exists within the collection, or the update or deletion of the User caused an inconsistent state.
Label detection settings to use on a streaming video. Defining the settings is required in the request parameter for CreateStreamProcessor. Including this setting in the CreateStreamProcessor request enables you to use the stream processor for label detection. You can then select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected and one SNS notification is published the first time a pet is detected, as well as an end-of-session summary.
The label detection settings you want to use in your stream processor. This includes the labels you want the stream processor to detect and the minimum confidence level allowed to label objects.
Information about an inappropriate, unwanted, or offensive content label detection in a stored video.
Contains information regarding the confidence and name of a detected content type.
Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.
A session settings object. It contains settings for the operation to be performed. It accepts arguments for OutputConfig and AuditImagesLimit.
Feature-specific configuration for the training job. Configuration provided for the job must match the feature type parameter associated with the project. If the configuration and feature type do not match, an InvalidParameterException is returned.
Configuration options for Content Moderation training.
A custom label detected in an image by a call to DetectCustomLabels.
Describes updates or additions to a dataset. A single update or addition is an entry (JSON Line) that provides information about a single image. To update an existing entry, you match the source-ref field of the update entry with the source-ref field of the entry that you want to update. If the source-ref field doesn't match an existing entry, the entry is added to the dataset as a new entry.
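Conceptually, applying such a change set is a merge keyed on source-ref: a change whose source-ref matches an existing entry replaces it, and any other change is appended as a new entry. The Kotlin sketch below illustrates that merge with plain maps standing in for JSON Lines; it is not the service's actual update logic.

```kotlin
// Conceptual merge of dataset changes, keyed on the "source-ref" value. A change that
// matches an existing entry replaces it; otherwise it is added as a new entry.
// Plain maps stand in for JSON Lines; this is not the service's actual update logic.
fun applyChanges(
    existing: List<Map<String, String>>,
    changes: List<Map<String, String>>
): List<Map<String, String>> {
    val bySourceRef = existing.associateBy { it["source-ref"] }.toMutableMap()
    for (change in changes) {
        bySourceRef[change["source-ref"]] = change
    }
    return bySourceRef.values.toList()
}
```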
A description for a dataset. For more information, see DescribeDataset.
Describes a dataset label. For more information, see ListDatasetLabels.
Statistics about a label used in a dataset. For more information, see DatasetLabelDescription.
Summary information for an Amazon Rekognition Custom Labels dataset. For more information, see ProjectDescription.
The source that Amazon Rekognition Custom Labels uses to create a dataset. To use an Amazon SageMaker format manifest file, specify the S3 bucket location in the GroundTruthManifest field. The S3 bucket must be in your AWS account. To create a copy of an existing dataset, specify the Amazon Resource Name (ARN) of an existing dataset in DatasetArn.
Provides statistics about a dataset. For more information, see DescribeDataset.
A set of parameters that allow you to filter out certain results from your returned results.
The background of the image with regard to image quality and dominant colors.
The foreground of the image with regard to image quality and dominant colors.
Information about the quality and dominant colors of an input image. Quality and color information is returned for the entire image, foreground, and background.
Settings for the IMAGE_PROPERTIES feature type.
The quality of an image provided for label detection, with regard to brightness, sharpness, and contrast.
Settings for the DetectLabels request. Settings can include filters for both GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive or exclusive and applied to individual labels or label categories. IMAGE_PROPERTIES filters allow specification of a maximum number of dominant colors.
A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.
Provides face metadata for the faces that are disassociated from a specific UserID.
A training dataset or a test dataset used in a dataset distribution operation. For more information, see DistributeDatasetEntries.
A description of the dominant colors in an image.
The API returns a prediction of an emotion based on a person's facial expressions, along with the confidence level for the predicted emotion. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally. The API is not intended to be used, and you may not use it, in a manner that violates the EU Artificial Intelligence Act or any other applicable law.
Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.
The evaluation results for the training of a model.
Indicates the direction the eyes are gazing in (independent of the head pose) as determined by its pitch and yaw.
Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.
Structure containing attributes of the face that the algorithm detected.
Information about a face detected in a video analysis request and the time the face was detected in the video.
FaceOccluded should return "true" with a high confidence score if a detected face’s eyes, nose, and mouth are partially captured or if they are covered by masks, dark sunglasses, cell phones, hands, or other objects. FaceOccluded should return "false" with a high confidence score if common occurrences that do not impact face verification are detected, such as eye glasses, lightly tinted sunglasses, strands of hair, and others.
Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database.
Input face recognition parameters for an Amazon Rekognition stream processor. Includes the collection to use for face recognition and the face attributes to detect. Defining the settings is required in the request parameter for CreateStreamProcessor.
Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, exclusive, or a combination of both and can be applied to individual labels or entire label categories. To see a list of label categories, see Detecting Labels.
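As a rough illustration of how inclusive and exclusive filters interact, the Kotlin sketch below applies hypothetical include and exclude lists to label names on the client side. The real filtering is performed by the service, and the parameter names here are illustrative, not the SDK's field names.

```kotlin
// Client-side illustration of inclusive vs. exclusive label filtering. The service does
// the real filtering; these parameter names are illustrative, not the SDK's field names.
fun filterLabels(
    labels: List<String>,
    include: List<String> = emptyList(),  // when non-empty, keep only these labels
    exclude: List<String> = emptyList()   // always drop these labels
): List<String> =
    labels.filter { label -> (include.isEmpty() || label in include) && label !in exclude }
```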
Contains metadata about a content moderation request, including the SortBy and AggregateBy options.
Contains metadata about a label detection request, including the SortBy and AggregateBy options.
The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file.
Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.
Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.
The number of in-progress human reviews you have exceeds the number allowed.
A ClientRequestToken input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.
Identifies face image brightness and sharpness.
The input image size exceeds the allowed limit. If you are calling DetectProtectiveEquipment, the image size or resolution exceeds the allowed limit. For more information, see Guidelines and quotas in Amazon Rekognition in the Amazon Rekognition Developer Guide.
Amazon Rekognition experienced a service issue. Try your call again.
The provided image format is not supported.
Indicates that a provided manifest file is empty or larger than the allowed limit.
Pagination token in the request is not valid.
Input parameter violated a constraint. Validate your parameter before calling the API operation again.
The supplied revision id for the project policy is invalid.
Amazon Rekognition is unable to access the S3 object specified in the request.
The Kinesis data stream to which Amazon Rekognition streams the analysis results of a stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
The Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Specifies the starting point in a Kinesis stream to start processing. You can use the producer timestamp or the fragment number. One of either producer timestamp or fragment number is required. If you use the producer timestamp, you must put the time in milliseconds. For more information about fragment numbers, see Fragment.
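The either/or nature of the selector and the millisecond requirement can be sketched in Kotlin as below; the types are local stand-ins for illustration, not the SDK's KinesisVideoStreamStartSelector.

```kotlin
import java.time.Instant

// Local stand-ins illustrating the "one of producer timestamp or fragment number" choice;
// these are not the SDK's KinesisVideoStreamStartSelector types.
sealed interface StartSelectorSketch
data class FromProducerTimestamp(val epochMillis: Long) : StartSelectorSketch // time in milliseconds
data class FromFragmentNumber(val fragmentNumber: String) : StartSelectorSketch

// A producer timestamp must be supplied in milliseconds.
fun startAt(instant: Instant): StartSelectorSketch =
    FromProducerTimestamp(instant.toEpochMilli())
```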
The known gender identity for the celebrity that matches the provided ID. The known gender identity can be Male, Female, Nonbinary, or Unlisted.
A list of possible gender values, as enum strings, that Celebrity returns.
A potential alias for a given label.
The category that applies to a given label.
Information about a label detected in a video analysis request and the time the label was detected in the video.
Contains the specified filters that should be applied to a list of returned GENERAL_LABELS.
An Amazon Rekognition service limit was exceeded. For example, if you start too many jobs concurrently, subsequent calls to start operations (ex: StartLabelDetection) will raise a LimitExceededException (HTTP status code: 400) until the number of concurrently running jobs is below the Amazon Rekognition service limit.
Contains settings that specify the location of an Amazon S3 bucket used to store the output of a Face Liveness session. Note that the S3 bucket must be located in the caller's AWS account and in the same region as the Face Liveness end-point. Additionally, the Amazon S3 object keys are auto-generated by the Face Liveness system.
The format of the project policy document that you supplied to PutProjectPolicy is incorrect.
Contains metadata for a UserID matched with a given face.
Configuration for Moderation Labels Detection.
Contains input information for a media analysis job.
Description for a media analysis job.
Details about the error that resulted in failure of the job.
Summary that provides statistics on the input manifest and the errors identified in it.
Object containing information about the model versions of selected features in a given job.
Configuration options for a media analysis job. Configuration is operation-specific.
Output configuration provided in the job creation request.
Contains the results for a media analysis job created with StartMediaAnalysisJob.
Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video operations. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics.
The S3 bucket and folder location where training output is placed.
Details about a person detected in a video analysis request.
Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video.
Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.
The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
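A minimal Kotlin sketch of that ratio-to-pixel conversion, using a local data class rather than the SDK's Point type:

```kotlin
// Ratio-to-pixel conversion for the example above. RatioPoint is a local stand-in for
// illustration, not the SDK's Point type.
data class RatioPoint(val x: Float, val y: Float)

fun toPixel(point: RatioPoint, imageWidth: Int, imageHeight: Int): Pair<Int, Int> =
    Pair((point.x * imageWidth).toInt(), (point.y * imageHeight).toInt())

fun main() {
    // For a 700x200 image, X=0.5 and Y=0.25 land on the (350, 50) pixel coordinate.
    println(toPixel(RatioPoint(0.5f, 0.25f), 700, 200)) // (350, 50)
}
```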
A description of an Amazon Rekognition Custom Labels project. For more information, see DescribeProjects.
Describes a project policy in the response from ListProjectPolicies.
A description of a version of an Amazon Rekognition project.
Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.
A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.
Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.
Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required personal protective equipment (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).
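Since each of those summary fields is, in effect, a list of person identifiers, a compliance tally falls out of the list sizes. The Kotlin sketch below uses a local data class whose names mirror the fields described above; it is not the SDK type, and the field shapes are an assumption.

```kotlin
// Local stand-in for the PPE summary described above (not the SDK type); each field is
// assumed to hold the IDs of the persons in that group, so counts come from list sizes.
data class PpeSummarySketch(
    val personsWithRequiredEquipment: List<Int>,
    val personsWithoutRequiredEquipment: List<Int>,
    val personsIndeterminate: List<Int>
)

fun complianceReport(summary: PpeSummarySketch): String =
    "compliant=${summary.personsWithRequiredEquipment.size}, " +
        "nonCompliant=${summary.personsWithoutRequiredEquipment.size}, " +
        "indeterminate=${summary.personsIndeterminate.size}"
```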
The number of requests exceeded your throughput limit. If you want to increase this limit, contact Amazon Rekognition.
Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a BoundingBox or Polygon to set a region of the screen.
Base class for all service related exceptions thrown by the Rekognition client
A resource with the specified ID already exists.
The specified resource is already being used.
The resource specified in the request cannot be found.
The requested resource isn't ready. For example, this exception occurs when you call DetectCustomLabels with a model version that isn't deployed.
The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation. These results include the name of the stream processor resource, the session ID of the stream processing session, and labeled timestamps and bounding boxes for detected labels.
Provides face metadata such as the FaceId, BoundingBox, and Confidence of the input face used for search.
Contains data regarding the input face used for a search.
Contains metadata about a User searched for within a collection.
A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.
Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection.
The size of the collection exceeds the allowed limit. For more information, see Guidelines and quotas in Amazon Rekognition in the Amazon Rekognition Developer Guide.
Occurs when a given sessionId is not found.
Information about a shot detection segment detected in a video. For more information, see SegmentDetection.
Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.
Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
Filters for the technical segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
Set of optional parameters that let you set the criteria text must meet to be included in your response. WordFilter looks at a word's height, width and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.
This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.
Specifies when to stop processing the stream. You can specify a maximum amount of time to process the video.
An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.
Allows you to opt in or opt out to share data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level this setting is ignored on individual streams.
Information about the source streaming video.
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.
Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Input parameters used in a streaming video analyzed by an Amazon Rekognition stream processor. You can use FaceSearch to recognize faces in a streaming video, or you can use ConnectedHome to detect labels.
The stream processor settings that you want to update. ConnectedHome settings can be updated to detect different labels with a different minimum confidence.
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
Information about a technical cue segment. For more information, see SegmentDetection.
The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition uses the training dataset to create a test dataset with a temporary split of the training dataset.
SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.
Information about a word or line of text detected by DetectText.
Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
Amazon Rekognition is temporarily unable to process the request. Try your call again.
The dataset used for training.
The data validation manifest created for the training dataset during model training.
A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.
Face details inferred from the image but not used for search. The response attribute contains reasons why a face wasn't used for Search.
Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully associated.
Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully deleted.
Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully disassociated.
Contains the Amazon S3 bucket location of the validation data for a model training job.
Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition video operation.
The file size or duration of the supplied media is too large. The maximum file size is 10GB. The maximum duration is 6 hours.