Create transcription
Transcribes audio into the input language.
Parameters
file
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
include[]
Additional information to include in the transcription response. logprobs will return the log probabilities of the tokens in the response, to understand the model's confidence in the transcription. logprobs only works with response_format set to json, and only with the models gpt-4o-transcribe, gpt-4o-mini-transcribe, and gpt-4o-mini-transcribe-2025-12-15. This field is not supported when using gpt-4o-transcribe-diarize.
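As a minimal sketch of requesting log probabilities with the Ruby SDK used elsewhere on this page: the include: and response_format: keywords are assumed spellings of the documented include[] and response_format request parameters, and the file name is hypothetical.

require "openai"
require "pathname"

openai = OpenAI::Client.new(api_key: "My API Key")

# Ask for token log probabilities; logprobs requires response_format "json".
# `include:` and `response_format:` are assumed keyword spellings of the
# documented `include[]` and `response_format` parameters.
transcription = openai.audio.transcriptions.create(
  file: Pathname("meeting.mp3"),  # hypothetical audio file
  model: :"gpt-4o-transcribe",
  response_format: "json",
  include: ["logprobs"]
)
puts(transcription)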
known_speaker_names[]
Optional list of speaker names that correspond to the audio samples provided in known_speaker_references[]. Each entry should be a short identifier (for example, customer or agent). Up to 4 speakers are supported.
known_speaker_references[]
Optional list of audio samples (as data URLs) that contain known speaker references matching known_speaker_names[]. Each sample must be between 2 and 10 seconds long, and can use any of the same input audio formats supported by file.
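A hedged sketch of how a reference clip might be packaged: the snippet below base64-encodes a short sample into a data URL and passes it alongside a speaker name. The file names are hypothetical, and the known_speaker_names:/known_speaker_references: keywords are assumed spellings of the documented parameters.

require "openai"
require "base64"
require "pathname"

openai = OpenAI::Client.new(api_key: "My API Key")

# Encode a 2-10 second reference clip as a data URL
# (assumed shape: "data:<mime>;base64,<payload>").
agent_sample = "data:audio/wav;base64," +
  Base64.strict_encode64(File.binread("agent_sample.wav"))  # hypothetical clip

transcription = openai.audio.transcriptions.create(
  file: Pathname("support_call.wav"),  # hypothetical audio file
  model: :"gpt-4o-transcribe-diarize",
  known_speaker_names: ["agent"],
  known_speaker_references: [agent_sample]
)
puts(transcription)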
language
The language of the input audio. Supplying the input language in ISO-639-1 format (e.g. en) will improve accuracy and latency.
prompt
An optional text to guide the model's style or to continue a previous audio segment. The prompt should match the audio language. This field is not supported when using gpt-4o-transcribe-diarize.
stream
If set to true, the model response data will be streamed to the client as it is generated, using server-sent events. See the Streaming section of the Speech-to-Text guide for more information.
Note: streaming is not supported for the whisper-1 model; the parameter will be ignored for that model.
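Because SDK streaming surfaces vary, here is a sketch using only the Ruby standard library to POST the documented endpoint with stream set to true and print the raw server-sent events as they arrive; the audio file name is hypothetical.

require "net/http"
require "uri"

uri = URI("https://api.openai.com/v1/audio/transcriptions")

req = Net::HTTP::Post.new(uri)
req["Authorization"] = "Bearer #{ENV.fetch("OPENAI_API_KEY")}"
# Multipart form upload; "stream" => "true" requests server-sent events.
req.set_form(
  [
    ["file", File.open("meeting.mp3")],  # hypothetical audio file
    ["model", "gpt-4o-transcribe"],
    ["stream", "true"]
  ],
  "multipart/form-data"
)

Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(req) do |res|
    # Each SSE chunk arrives as one or more "data: {...}" lines.
    res.read_body { |chunk| print chunk }
  end
end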
temperature
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
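For completeness, a sketch combining language, prompt, and temperature in a single request, again assuming these keywords map directly to the documented parameters; the file name and prompt text are hypothetical.

require "openai"
require "pathname"

openai = OpenAI::Client.new(api_key: "My API Key")

transcription = openai.audio.transcriptions.create(
  file: Pathname("episode.m4a"),          # hypothetical audio file
  model: :"gpt-4o-transcribe",
  language: "en",                         # ISO-639-1 code of the audio
  prompt: "Acme Corp, Jira, Kubernetes",  # bias spelling of domain terms
  temperature: 0.2                        # keep output focused/deterministic
)
puts(transcription.text)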
Returns
The transcription object, or a stream of transcript events if stream is set to true.
Example request
require "openai"
openai = OpenAI::Client.new(api_key: "My API Key")
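# Pathname(__FILE__) passes this script's own path as a stand-in for a real
# audio file; substitute the path to an actual recording.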
transcription = openai.audio.transcriptions.create(file: Pathname(__FILE__), model: :"gpt-4o-transcribe")
puts(transcription)
Response
{
  "text": "text",
  "logprobs": [
    {
      "token": "token",
      "bytes": [
        0
      ],
      "logprob": 0
    }
  ],
  "usage": {
    "input_tokens": 0,
    "output_tokens": 0,
    "total_tokens": 0,
    "type": "tokens",
    "input_token_details": {
      "audio_tokens": 0,
      "text_tokens": 0
    }
  }
}
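The logprob values above are natural-log probabilities, so exp(logprob) recovers a 0-to-1 token probability. A small sketch of turning them into a rough confidence score, assuming the response body was saved to a hypothetical transcription.json:

require "json"

# `response` stands for the JSON body shown above, parsed into a Hash.
response = JSON.parse(File.read("transcription.json"))  # hypothetical saved body

logprobs = response.fetch("logprobs", [])
unless logprobs.empty?
  # exp(logprob) converts a log probability back to a 0..1 probability;
  # averaging gives a rough per-token confidence for the transcript.
  avg = logprobs.sum { |lp| Math.exp(lp["logprob"]) } / logprobs.size
  puts format("average token confidence: %.3f", avg)
end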