Audio
Models
AudioModel = "whisper-1" or "gpt-4o-transcribe" or "gpt-4o-mini-transcribe" or 2 more
AudioResponseFormat = "json" or "text" or "srt" or 3 more The format of the output, in one of these options: json, text, srt, verbose_json, vtt, or diarized_json. For gpt-4o-transcribe and gpt-4o-mini-transcribe, the only supported format is json. For gpt-4o-transcribe-diarize, the supported formats are json, text, and diarized_json, with diarized_json required to receive speaker annotations.
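For example, a minimal sketch (assuming the official Python openai SDK; file paths are placeholders) of pairing each model family with a compatible response_format:

    from openai import OpenAI

    client = OpenAI()

    # gpt-4o-transcribe and gpt-4o-mini-transcribe: json is the only supported format.
    with open("meeting.wav", "rb") as audio_file:  # placeholder file
        basic = client.audio.transcriptions.create(
            model="gpt-4o-mini-transcribe",
            file=audio_file,
            response_format="json",
        )

    # gpt-4o-transcribe-diarize: diarized_json is required to receive speaker annotations.
    with open("meeting.wav", "rb") as audio_file:
        diarized = client.audio.transcriptions.create(
            model="gpt-4o-transcribe-diarize",
            file=audio_file,
            response_format="diarized_json",
        )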
AudioTranscriptions
Create transcription
Models
Transcription = object { text, logprobs, usage } Represents a transcription response returned by the model, based on the provided input.
The transcribed text.
logprobs: optional array of object { token, bytes, logprob } The log probabilities of the tokens in the transcription. Only returned with the models gpt-4o-transcribe and gpt-4o-mini-transcribe if logprobs is added to the include array.
The token in the transcription.
The bytes of the token.
The log probability of the token.
usage: optional object { input_tokens, output_tokens, total_tokens, 2 more } or object { seconds, type } Token or duration usage statistics for the request.
TokenUsage = object { input_tokens, output_tokens, total_tokens, 2 more } Usage statistics for models billed by token usage.
Number of input tokens billed for this request.
Number of output tokens generated.
Total number of tokens used (input + output).
The type of the usage object. Always tokens for this variant.
input_token_details: optional object { audio_tokens, text_tokens } Details about the input tokens billed for this request.
Number of audio tokens billed for this request.
Number of text tokens billed for this request.
DurationUsage = object { seconds, type } Usage statistics for models billed by audio input duration.
Duration of the input audio in seconds.
The type of the usage object. Always duration for this variant.
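A sketch of reading these fields (Python openai SDK assumed; the audio file is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    with open("speech.mp3", "rb") as audio_file:  # placeholder file
        transcription = client.audio.transcriptions.create(
            model="gpt-4o-transcribe",
            file=audio_file,
            response_format="json",
            include=["logprobs"],  # opt in to token log probabilities
        )

    print(transcription.text)

    # logprobs is optional and only returned for gpt-4o-transcribe / gpt-4o-mini-transcribe.
    for lp in transcription.logprobs or []:
        print(lp.token, lp.logprob)

    # usage is either token-based (type == "tokens") or duration-based (type == "duration").
    usage = transcription.usage
    if usage is not None:
        if usage.type == "tokens":
            print(usage.input_tokens, usage.output_tokens, usage.total_tokens)
        else:
            print(usage.seconds, "seconds of audio billed")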
TranscriptionDiarized = object { duration, segments, task, 2 more } Represents a diarized transcription response returned by the model, including the combined transcript and speaker-segment annotations.
Duration of the input audio in seconds.
Segments of the transcript annotated with timestamps and speaker labels.
Unique identifier for the segment.
End timestamp of the segment in seconds.
Speaker label for this segment. When known speakers are provided, the label matches known_speaker_names[]. Otherwise speakers are labeled sequentially using capital letters (A, B, ...).
Start timestamp of the segment in seconds.
Transcript text for this segment.
The type of the segment. Always transcript.text.segment.
The type of task that was run. Always transcribe.
The concatenated transcript text for the entire audio input.
usage: optional object { input_tokens, output_tokens, total_tokens, 2 more } or object { seconds, type } Token or duration usage statistics for the request.
Tokens = object { input_tokens, output_tokens, total_tokens, 2 more } Usage statistics for models billed by token usage.
Number of input tokens billed for this request.
Number of output tokens generated.
Total number of tokens used (input + output).
The type of the usage object. Always tokens for this variant.
input_token_details: optional object { audio_tokens, text_tokens } Details about the input tokens billed for this request.
Number of audio tokens billed for this request.
Number of text tokens billed for this request.
Duration = object { seconds, type } Usage statistics for models billed by audio input duration.
Duration of the input audio in seconds.
The type of the usage object. Always duration for this variant.
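A sketch of walking a diarized response (Python openai SDK assumed; the file path is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    with open("panel.wav", "rb") as audio_file:  # placeholder file
        result = client.audio.transcriptions.create(
            model="gpt-4o-transcribe-diarize",
            file=audio_file,
            response_format="diarized_json",  # required for speaker annotations
        )

    print(result.task)      # "transcribe"
    print(result.duration)  # input audio length in seconds
    print(result.text)      # concatenated transcript for the whole input

    # Each segment carries timestamps plus a speaker label (A, B, ... unless known
    # speakers were supplied with the request).
    for segment in result.segments:
        print(f"[{segment.start:.1f}-{segment.end:.1f}] {segment.speaker}: {segment.text}")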
TranscriptionDiarizedSegment = object { id, end, speaker, 3 more } A segment of diarized transcript text with speaker metadata.
Unique identifier for the segment.
End timestamp of the segment in seconds.
Speaker label for this segment. When known speakers are provided, the label matches known_speaker_names[]. Otherwise speakers are labeled sequentially using capital letters (A, B, ...).
Start timestamp of the segment in seconds.
Transcript text for this segment.
The type of the segment. Always transcript.text.segment.
TranscriptionSegment = object { id, avg_logprob, compression_ratio, 7 more }
Unique identifier of the segment.
Average logprob of the segment. If the value is lower than -1, consider the logprobs failed.
Compression ratio of the segment. If the value is greater than 2.4, consider the compression failed.
End time of the segment in seconds.
Probability of no speech in the segment. If the value is higher than 1.0 and the avg_logprob is below -1, consider this segment silent.
Seek offset of the segment.
Start time of the segment in seconds.
Temperature parameter used for generating the segment.
Text content of the segment.
Array of token IDs for the text content.
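The avg_logprob and compression_ratio thresholds above translate directly into a filtering helper; this is a sketch with a hypothetical function name, not an official utility:

    def segment_looks_reliable(segment) -> bool:
        """Apply the thresholds quoted above to a TranscriptionSegment."""
        if segment.avg_logprob < -1:         # logprobs considered failed
            return False
        if segment.compression_ratio > 2.4:  # compression considered failed
            return False
        return True

    # Usage: keep only segments that pass both checks.
    # reliable = [s for s in verbose.segments if segment_looks_reliable(s)]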
TranscriptionStreamEvent = TranscriptionTextSegmentEvent { id, end, speaker, 3 more } or TranscriptionTextDeltaEvent { delta, type, logprobs, segment_id } or TranscriptionTextDoneEvent { text, type, logprobs, usage } An event emitted while a transcription streams: a completed diarized segment, an additional text delta, or the final completed text. Only emitted when you create a transcription with stream set to true.
TranscriptionTextSegmentEvent = object { id, end, speaker, 3 more } Emitted when a diarized transcription returns a completed segment with speaker information. Only emitted when you create a transcription with stream set to true and response_format set to diarized_json.
Unique identifier for the segment.
End timestamp of the segment in seconds.
Speaker label for this segment.
Start timestamp of the segment in seconds.
Transcript text for this segment.
The type of the event. Always transcript.text.segment.
TranscriptionTextDeltaEvent = object { delta, type, logprobs, segment_id } Emitted when there is an additional text delta. This is also the first event emitted when the transcription starts. Only emitted when you create a transcription with the stream parameter set to true.
The text delta that was additionally transcribed.
The type of the event. Always transcript.text.delta.
logprobs: optional array of object { token, bytes, logprob } The log probabilities of the delta. Only included if you create a transcription with the include[] parameter set to logprobs.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
Identifier of the diarized segment that this delta belongs to. Only present when using gpt-4o-transcribe-diarize.
TranscriptionTextDoneEvent = object { text, type, logprobs, usage } Emitted when the transcription is complete. Contains the complete transcription text. Only emitted when you create a transcription with the stream parameter set to true.
The text that was transcribed.
The type of the event. Always transcript.text.done.
logprobs: optional array of object { token, bytes, logprob } The log probabilities of the individual tokens in the transcription. Only included if you create a transcription with the include[] parameter set to logprobs.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
usage: optional object { input_tokens, output_tokens, total_tokens, 2 more } Usage statistics for models billed by token usage.
Number of input tokens billed for this request.
Number of output tokens generated.
Total number of tokens used (input + output).
The type of the usage object. Always tokens for this variant.
input_token_details: optional object { audio_tokens, text_tokens } Details about the input tokens billed for this request.
Number of audio tokens billed for this request.
Number of text tokens billed for this request.
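A sketch of consuming these events (Python openai SDK assumed; the event type strings are the ones documented above and the file path is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    with open("interview.wav", "rb") as audio_file:  # placeholder file
        stream = client.audio.transcriptions.create(
            model="gpt-4o-transcribe-diarize",
            file=audio_file,
            response_format="diarized_json",  # needed for transcript.text.segment events
            stream=True,
        )

        for event in stream:
            if event.type == "transcript.text.delta":
                # Incremental text; segment_id is present for gpt-4o-transcribe-diarize.
                print(event.delta, end="", flush=True)
            elif event.type == "transcript.text.segment":
                # A completed segment with speaker label and timestamps.
                print(f"\n{event.speaker} [{event.start:.1f}-{event.end:.1f}]: {event.text}")
            elif event.type == "transcript.text.done":
                print("\n--- full transcript ---")
                print(event.text)
                if event.usage is not None:
                    print("total tokens:", event.usage.total_tokens)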
TranscriptionTextDeltaEvent = object { delta, type, logprobs, segment_id } Emitted when there is an additional text delta. This is also the first event emitted when the transcription starts. Only emitted when you create a transcription with the stream parameter set to true.
The text delta that was additionally transcribed.
The type of the event. Always transcript.text.delta.
logprobs: optional array of object { token, bytes, logprob } The log probabilities of the delta. Only included if you create a transcription with the include[] parameter set to logprobs.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
Identifier of the diarized segment that this delta belongs to. Only present when using gpt-4o-transcribe-diarize.
TranscriptionTextDoneEvent = object { text, type, logprobs, usage } Emitted when the transcription is complete. Contains the complete transcription text. Only emitted when you create a transcription with the stream parameter set to true.
The text that was transcribed.
The type of the event. Always transcript.text.done.
logprobs: optional array of object { token, bytes, logprob } The log probabilities of the individual tokens in the transcription. Only included if you create a transcription with the include[] parameter set to logprobs.
The token that was used to generate the log probability.
The bytes that were used to generate the log probability.
The log probability of the token.
usage: optional object { input_tokens, output_tokens, total_tokens, 2 more } Usage statistics for models billed by token usage.
Number of input tokens billed for this request.
Number of output tokens generated.
Total number of tokens used (input + output).
The type of the usage object. Always tokens for this variant.
input_token_details: optional object { audio_tokens, text_tokens } Details about the input tokens billed for this request.
Number of audio tokens billed for this request.
Number of text tokens billed for this request.
TranscriptionTextSegmentEvent = object { id, end, speaker, 3 more } Emitted when a diarized transcription returns a completed segment with speaker information. Only emitted when you create a transcription with stream set to true and response_format set to diarized_json.
Unique identifier for the segment.
End timestamp of the segment in seconds.
Speaker label for this segment.
Start timestamp of the segment in seconds.
Transcript text for this segment.
The type of the event. Always transcript.text.segment.
TranscriptionVerbose = object { duration, language, text, 3 more } Represents a verbose JSON transcription response returned by the model, based on the provided input.
The duration of the input audio in seconds.
The language of the input audio.
The transcribed text.
Segments of the transcribed text and their corresponding details.
Unique identifier of the segment.
Average logprob of the segment. If the value is lower than -1, consider the logprobs failed.
Compression ratio of the segment. If the value is greater than 2.4, consider the compression failed.
End time of the segment in seconds.
Probability of no speech in the segment. If the value is higher than 1.0 and the avg_logprob is below -1, consider this segment silent.
Seek offset of the segment.
Start time of the segment in seconds.
Temperature parameter used for generating the segment.
Text content of the segment.
Array of token IDs for the text content.
usage: optional object { seconds, type } Usage statistics for models billed by audio input duration.
Duration of the input audio in seconds.
The type of the usage object. Always duration for this variant.
Extracted words and their corresponding timestamps.
End time of the word in seconds.
Start time of the word in seconds.
The text content of the word.
TranscriptionWord = object { end, start, word }
End time of the word in seconds.
Start time of the word in seconds.
The text content of the word.
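A sketch of requesting and reading a verbose transcription (Python openai SDK assumed; the timestamp_granularities parameter is used on the assumption that word-level timestamps must be requested explicitly, and the file path is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    with open("lecture.mp3", "rb") as audio_file:  # placeholder file
        verbose = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
            response_format="verbose_json",
            timestamp_granularities=["word", "segment"],  # assumed opt-in for word timings
        )

    print(verbose.language, verbose.duration)
    print(verbose.text)

    for segment in verbose.segments or []:
        print(f"[{segment.start:.1f}-{segment.end:.1f}] {segment.text}")

    for word in verbose.words or []:
        print(f"{word.word}: {word.start:.2f}-{word.end:.2f}")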
AudioTranslations
Create translation
Models
Translation = object { text }
TranslationVerbose = object { duration, language, text, segments }
The duration of the input audio in seconds.
The language of the output translation (always english).
The translated text.
Segments of the translated text and their corresponding details.
Unique identifier of the segment.
Average logprob of the segment. If the value is lower than -1, consider the logprobs failed.
Compression ratio of the segment. If the value is greater than 2.4, consider the compression failed.
End time of the segment in seconds.
Probability of no speech in the segment. If the value is higher than 1.0 and the avg_logprob is below -1, consider this segment silent.
Seek offset of the segment.
Start time of the segment in seconds.
Temperature parameter used for generating the segment.
Text content of the segment.
Array of token IDs for the text content.
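A sketch of the translation call and the verbose fields above (Python openai SDK assumed; the audio file is a placeholder for non-English speech):

    from openai import OpenAI

    client = OpenAI()

    with open("discours.mp3", "rb") as audio_file:  # placeholder non-English audio
        translation = client.audio.translations.create(
            model="whisper-1",
            file=audio_file,
            response_format="verbose_json",
        )

    print(translation.language)  # always english
    print(translation.duration)
    print(translation.text)      # the translated (English) text

    for segment in translation.segments or []:
        print(f"[{segment.start:.1f}-{segment.end:.1f}] {segment.text}")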