Package-level declarations
Types
A list of possible alternative transcriptions for the input audio. Each alternative may contain one or more of `Items`, `Entities`, or `Transcript`.
A wrapper for your audio chunks. Your audio stream consists of one or more audio events, which consist of one or more audio chunks.
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
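The stream → events → chunks relationship above can be sketched in plain Kotlin. The data classes and chunk size below are simplified stand-ins for illustration, not the SDK's own types:

```kotlin
// Illustrative stand-ins only; the real SDK types differ.
data class AudioChunk(val bytes: ByteArray)
data class AudioEvent(val chunks: List<AudioChunk>)

// Split raw audio into fixed-size chunks and wrap them in a single event.
fun toAudioEvent(pcm: ByteArray, chunkSize: Int = 4096): AudioEvent {
    require(chunkSize > 0) { "chunkSize must be positive" }
    val chunks = pcm.toList()
        .chunked(chunkSize)
        .map { AudioChunk(it.toByteArray()) }
    return AudioEvent(chunks)
}
```

A stream would then carry one or more such events, each holding one or more chunks.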
One or more arguments to the `StartStreamTranscription`, `StartMedicalStreamTranscription`, or `StartCallAnalyticsStreamTranscription` operation were not valid. For example, `MediaEncoding` or `LanguageCode` used unsupported values. Check the specified parameters and try your request again.
Contains entities identified as personally identifiable information (PII) in your transcription output, along with various associated attributes. Examples include category, confidence score, content, type, and start and end times.
A word, phrase, or punctuation mark in your Call Analytics transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
Contains detailed information about your real-time Call Analytics session. These details are provided in the `UtteranceEvent` and `CategoryEvent` objects.
Provides information on any `TranscriptFilterType` categories that matched your transcription output. Matches are identified for each segment upon completion of that segment.
Makes it possible to specify which speaker is on which audio channel. For example, if your agent is the first participant to speak, you would set `ChannelId` to `0` (to indicate the first channel) and `ParticipantRole` to `AGENT` (to indicate that it's the agent speaking).
Provides the location, using character count, in your transcript where a match is identified. For example, the location of an issue or a category match within a segment.
The details for clinical note generation, including the status, the output locations for the clinical note and aggregated transcript if the analytics completed, or the failure reason if the analytics failed.
The output configuration for aggregated transcript and clinical note generation.
Allows you to set audio channel definitions and post-call analytics settings.
A new stream started with the same session ID. The current stream has been terminated.
A problem occurred while processing the audio. Amazon Transcribe terminated processing.
Lists the issues that were identified in your audio segment.
The language code that represents the language identified in your audio, including the associated confidence score. If you enabled channel identification in your request and each channel contained a different language, you will have more than one `LanguageWithScore` result.
Your client has exceeded one of the Amazon Transcribe limits. This is typically the audio length limit. Break your audio stream into smaller chunks and try your request again.
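One hedged recovery strategy for this limit, assuming the caller controls its own chunk size: halve the chunk size and retry. The exception class and sender callback below are illustrative placeholders, not SDK types:

```kotlin
// Illustrative placeholder for the service's limit-exceeded error.
class LimitExceededException(message: String) : RuntimeException(message)

// Retry a send with progressively smaller chunk sizes (sketch only).
// Returns the chunk size that eventually succeeded.
fun sendWithShrinkingChunks(
    initialChunkSize: Int,
    minChunkSize: Int = 512,
    send: (chunkSize: Int) -> Unit,
): Int {
    var size = initialChunkSize
    while (true) {
        try {
            send(size)
            return size
        } catch (e: LimitExceededException) {
            size /= 2
            if (size < minChunkSize) throw e // give up below the floor
        }
    }
}
```

In practice you would also apply backoff between retries; this sketch only shows the chunk-shrinking idea.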
A list of possible alternative transcriptions for the input audio. Each alternative may contain one or more of `Items`, `Entities`, or `Transcript`.
Contains entities identified as personal health information (PHI) in your transcription output, along with various associated attributes. Examples include category, confidence score, type, stability score, and start and end times.
A word, phrase, or punctuation mark in your transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
The `Result` associated with a ``.
A wrapper for your audio chunks.
Makes it possible to specify which speaker is on which channel. For example, if the clinician is the first participant to speak, you would set the `ChannelId` of the first `ChannelDefinition` in the list to `0` (to indicate the first channel) and `ParticipantRole` to `CLINICIAN` (to indicate that it's the clinician speaking). Then you would set the `ChannelId` of the second `ChannelDefinition` in the list to `1` (to indicate the second channel) and `ParticipantRole` to `PATIENT` (to indicate that it's the patient speaking).
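The channel-definition pattern described above can be sketched with simplified Kotlin types. These are illustrative mirrors for the sake of the example, not the SDK's own model classes:

```kotlin
// Simplified mirrors of the channel-definition pattern; not SDK types.
enum class ParticipantRole { CLINICIAN, PATIENT }

data class ChannelDefinition(
    val channelId: Int,
    val participantRole: ParticipantRole,
)

// Clinician on channel 0, patient on channel 1, as in the text above.
fun twoChannelDefinitions(): List<ChannelDefinition> = listOf(
    ChannelDefinition(channelId = 0, participantRole = ParticipantRole.CLINICIAN),
    ChannelDefinition(channelId = 1, participantRole = ParticipantRole.PATIENT),
)
```

The same shape applies to the Call Analytics case, with `AGENT` and `CUSTOMER` roles instead.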
Specify details to configure the streaming session, including channel definitions, encryption settings, post-stream analytics settings, resource access role ARN, and vocabulary settings.
Contains encryption-related settings to be used for data encryption with Key Management Service, including `KmsEncryptionContext` and `KmsKeyId`. The `KmsKeyId` is required, while `KmsEncryptionContext` is optional for an additional layer of security.
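The required/optional split above can be modeled as follows; this is an illustrative stand-in, not the SDK's settings type, and the key alias is a made-up example:

```kotlin
// Illustrative stand-in for the encryption settings described above:
// KmsKeyId is required, KmsEncryptionContext is optional.
data class EncryptionSettings(
    val kmsKeyId: String,
    val kmsEncryptionContext: Map<String, String>? = null,
) {
    init {
        require(kmsKeyId.isNotBlank()) { "KmsKeyId is required" }
    }
}
```

The encryption context, when supplied, is a set of non-secret key/value pairs bound to the encryption operation.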
An encoded stream of events. The stream is encoded as HTTP/2 data frames.
Contains details for the result of post-stream analytics.
The settings for post-stream analytics.
Result stream where you will receive the output events. The details are provided in the `MedicalScribeTranscriptEvent` object.
Specify the lifecycle of your streaming session.
Contains details about an Amazon Web Services HealthScribe streaming session.
The event associated with `MedicalScribeResultStream`.
A word, phrase, or punctuation mark in your transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
Contains a set of transcription results, along with additional information about the segment.
The `MedicalTranscript` associated with a ``.
The `MedicalTranscriptEvent` associated with a `MedicalTranscriptResultStream`.
Contains detailed information about your streaming session.
Contains the timestamps of matched categories.
Allows you to specify additional settings for your Call Analytics post-call request, including output locations for your redacted transcript, which IAM role to use, and which encryption key to use.
The request references a resource which doesn't exist.
The service is currently unavailable. Try your request later.
Contains the timestamp range (start time through end time) of a matched category.
Base class for all service-related exceptions thrown by the TranscribeStreaming client.
The `Transcript` associated with a ``.
The `TranscriptEvent` associated with a `TranscriptResultStream`.
Contains detailed information about your streaming session.
Contains a set of transcription results from one or more audio segments, along with additional information about the parameters included in your request. For example, channel definitions, partial result stabilization, sentiment, and issue detection.