curl --request GET \
  --url https://api.gladia.io/v2/transcription/{id} \
  --header 'x-gladia-key: <api-key>'
{
"id": "45463597-20b7-4af7-b3b3-f5fb778203ab",
"request_id": "G-45463597",
"version": 2,
"kind": "pre-recorded",
"created_at": "2023-12-28T09:04:17.210Z",
"status": "queued",
"file": {
"id": "f0dcZE10-23d8-47f0-a25d-74a6eed88721",
"filename": "split_infinity.wav",
"source": "http://files.gladia.io/example/audio-transcription/split_infinity.wav",
"audio_duration": 20,
"number_of_channels": 1
},
"request_params": {
"audio_url": "http://files.gladia.io/example/audio-transcription/split_infinity.wav",
"subtitles": false,
"diarization": false,
"translation": false,
"summarization": false,
"sentences": false,
"moderation": false,
"named_entity_recognition": false,
"name_consistency": false,
"speaker_reidentification": false,
"custom_spelling": false,
"structured_data_extraction": false,
"chapterization": false,
"sentiment_analysis": false,
"display_mode": false,
"audio_enhancer": false,
"language_config": {
"code_switching": false,
"languages": [
"fr",
"en"
]
},
"accurate_words_timestamps": false,
"diarization_enhanced": false,
"punctuation_enhanced": false
},
"completed_at": null,
"custom_metadata": null,
"error_code": null,
"result": null
}
(Deprecated) Prefer the more specific pre-recorded endpoint.
Get the transcription's status, parameters, and result.
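Because status moves from "queued"/"processing" to "done" or "error", clients typically poll this endpoint until the job finishes. A minimal stdlib-only sketch in Python; the 2-second interval and the helper names are illustrative, not part of the API:

```python
import json
import time
import urllib.request

GLADIA_KEY = "<api-key>"  # your personal Gladia API key

def is_terminal(status: str) -> bool:
    # "queued" and "processing" mean keep polling; "done" and "error" are final.
    return status in {"done", "error"}

def get_transcription(job_id: str) -> dict:
    # GET https://api.gladia.io/v2/transcription/{id} with the x-gladia-key header.
    req = urllib.request.Request(
        f"https://api.gladia.io/v2/transcription/{job_id}",
        headers={"x-gladia-key": GLADIA_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def poll_transcription(job_id: str, interval_s: float = 2.0) -> dict:
    while True:
        job = get_transcription(job_id)
        if is_terminal(job["status"]):
            return job  # job["result"] is populated when status == "done"
        time.sleep(interval_s)
```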
Your personal Gladia API key
Id of the transcription job
"45463597-20b7-4af7-b3b3-f5fb778203ab"
The transcription job's metadata
Id of the job
"45463597-20b7-4af7-b3b3-f5fb778203ab"
Debug id
"G-45463597"
API version
2
"queued": the job has been queued. "processing": the job is being processed. "done": the job has been processed and the result is available. "error": an error occurred during the job's processing.
queued, processing, done, error
Creation date
"2023-12-28T09:04:17.210Z"
For debugging purposes, send data that could help to identify issues
pre-recorded
"pre-recorded"
Completion date when status is "done" or "error"
"2023-12-28T09:04:37.210Z"
Custom metadata given in the initial request
{ "user": "John Doe" }
HTTP status code of the error if status is "error"
400 <= x <= 599
500
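Given the status and error_code semantics above, a small status helper can be sketched (summarize_job is an illustrative name, not part of the API):

```python
def summarize_job(job: dict) -> str:
    # status: queued | processing | done | error
    # error_code: HTTP status in 400..599 when status == "error", otherwise null.
    if job["status"] == "error":
        return f"job {job['id']} failed with HTTP {job['error_code']}"
    if job["status"] == "done":
        return f"job {job['id']} completed at {job['completed_at']}"
    return f"job {job['id']} is {job['status']}"
```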
The file data you uploaded. Can be null if status is "error"
Show child attributes
The file id
The name of the uploaded file
The link used to download the file if audio_url was used
Duration of the audio file
3600
Number of channels in the audio file
x >= 1
1
Parameters used for this pre-recorded transcription. Can be null if status is "error"
Show child attributes
[Deprecated] Context to feed the transcription model for potentially better accuracy
[Beta] Either a boolean to enable custom_vocabulary for this audio, or an array with a specific vocabulary list to feed the transcription model
[Beta] Custom vocabulary configuration, if custom_vocabulary is enabled
Show child attributes
Specific vocabulary list to feed the transcription model with. Each item can be a string or an object with the following properties: value, intensity, pronunciations, language.
Show child attributes
The text used as the replacement in the transcription.
"Gladia"
The global intensity of the feature.
0 <= x <= 1
0.5
The pronunciations used in the transcription.
Specify the language in which it will be pronounced when sound comparison occurs. Default to transcription language.
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh
"en"
[
"Westeros",
{ "value": "Stark" },
{
"value": "Night's Watch",
"pronunciations": ["Nightz Watch"],
"intensity": 0.4,
"language": "en"
}
]
Default intensity for the custom vocabulary
0 <= x <= 1
0.5
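Vocabulary items (strings or objects, as in the example above) can be normalized and range-checked before sending. A sketch: normalize_vocab_item is an illustrative helper, and the default_intensity field name is an assumption to confirm against the POST endpoint reference:

```python
def normalize_vocab_item(item):
    # Items may be plain strings or objects with value/intensity/pronunciations/language.
    if isinstance(item, str):
        return {"value": item}
    intensity = item.get("intensity", 0.5)  # default 0.5
    if not 0 <= intensity <= 1:
        raise ValueError("intensity must be within [0, 1]")
    return item

custom_vocabulary_config = {
    "vocabulary": [
        normalize_vocab_item(v)
        for v in [
            "Westeros",
            {"value": "Night's Watch", "pronunciations": ["Nightz Watch"], "intensity": 0.4},
        ]
    ],
    "default_intensity": 0.5,  # assumed field name; global intensity in [0, 1]
}
```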
[Deprecated] Use language_config instead. Detect the language from the given audio
[Deprecated] Use language_config instead. Detect multiple languages in the given audio
[Deprecated] Use language_config instead. Specify the configuration for code switching
Show child attributes
Specify the languages you want to use when detecting multiple languages
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh
[Deprecated] Use language_config instead. Set the spoken language for the given audio (ISO 639 standard)
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh
"en"
[Deprecated] Use callback/callback_config instead. Callback URL we will do a POST request to with the result of the transcription
"http://callback.example"
Enable callback for this transcription. If true, the callback_config property will be used to customize the callback behaviour
Customize the callback behaviour (url and http method)
Show child attributes
The URL to be called with the result of the transcription
"http://callback.example"
The HTTP method to be used. Allowed values are POST or PUT (default: POST)
POST, PUT
"POST"
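A sketch of building the callback portion of a request; make_callback_config is an illustrative helper enforcing the documented POST/PUT restriction:

```python
def make_callback_config(url: str, method: str = "POST") -> dict:
    # Allowed HTTP methods per the docs: POST (default) or PUT.
    if method not in {"POST", "PUT"}:
        raise ValueError("callback method must be POST or PUT")
    return {"url": url, "method": method}

request_params = {
    "audio_url": "http://files.gladia.io/example/audio-transcription/split_infinity.wav",
    "callback": True,  # enables the callback; callback_config customizes it
    "callback_config": make_callback_config("http://callback.example"),
}
```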
Enable subtitles generation for this transcription
Configuration for subtitles generation if subtitles is enabled
Show child attributes
Subtitles formats you want your transcription to be formatted to (at least 1)
srt, vtt
["srt"]
Minimum duration of a subtitle in seconds
x >= 0
Maximum duration of a subtitle in seconds
1 <= x <= 30
Maximum number of characters per row in a subtitle
x >= 1
Maximum number of rows per caption
1 <= x <= 5
Style of the subtitles. Compliance mode refers to: https://loc.gov/preservation/digital/formats/fdd/fdd000569.shtml
default, compliance
Enable speaker recognition (diarization) for this audio
Speaker recognition configuration, if diarization is enabled
Show child attributes
Exact number of speakers in the audio
x >= 1
3
Minimum number of speakers in the audio
x >= 0
1
Maximum number of speakers in the audio
x >= 0
2
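The subtitles and diarization options above can be combined in one request body. A sketch with an illustrative validator; the field names follow the descriptions here but should be confirmed against the POST endpoint reference:

```python
def validate_subtitles_config(cfg: dict) -> bool:
    # Documented constraints: formats a non-empty subset of {srt, vtt},
    # maximum duration 1..30 s, rows per caption 1..5, style default|compliance.
    assert cfg["formats"] and set(cfg["formats"]) <= {"srt", "vtt"}
    assert 1 <= cfg.get("maximum_duration", 10) <= 30
    assert 1 <= cfg.get("maximum_rows_per_caption", 2) <= 5
    assert cfg.get("style", "default") in {"default", "compliance"}
    return True

subtitles_config = {
    "formats": ["srt"],
    "maximum_duration": 10,
    "maximum_characters_per_row": 42,
    "maximum_rows_per_caption": 2,
    "style": "default",
}
diarization_config = {
    "min_speakers": 1,  # >= 0
    "max_speakers": 2,  # >= 0
}
```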
[Beta] Enable translation for this audio
[Beta] Translation configuration, if translation is enabled
Show child attributes
Target languages in ISO 639-1 format you want the transcription translated to (at least 1)
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, wo, yi, yo, zh
["en"]
Model you want the translation model to use to translate
base, enhanced
Align translated utterances with the original ones
Whether to apply lipsync to the translated transcription.
Enables or disables context-aware translation features that allow the model to adapt translations based on provided context.
Context information to improve translation accuracy
Forces the translation to use informal language forms when available in the target language.
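Putting the translation options above together as a request sketch; the field names marked "assumed" are inferred from the descriptions here, not confirmed:

```python
translation_config = {
    "target_languages": ["en", "fr"],   # ISO 639-1 codes, at least one
    "model": "base",                    # "base" or "enhanced"
    "match_original_utterances": True,  # assumed name: align with original utterances
}
request_params = {
    "audio_url": "...",
    "translation": True,
    "translation_config": translation_config,
}
```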
[Beta] Enable summarization for this audio
[Alpha] Enable moderation for this audio
[Alpha] Enable named entity recognition for this audio
[Alpha] Enable chapterization for this audio
[Alpha] Enable names consistency for this audio
[Alpha] Enable custom spelling for this audio
[Alpha] Custom spelling configuration, if custom_spelling is enabled
[Alpha] Enable structured data extraction for this audio
[Alpha] Structured data extraction configuration, if structured_data_extraction is enabled
Show child attributes
The list of classes to extract from the audio transcription (at least 1)
["Persons", "Organizations"]
Enable sentiment analysis for this audio
[Alpha] Enable audio to llm processing for this audio
[Alpha] Audio to llm configuration, if audio_to_llm is enabled
Show child attributes
The list of prompts applied on the audio transcription (at least 1)
["Extract the key points from the transcription"]
Enable sentences for this audio
[Alpha] Allows changing the output display_mode for this audio. The output will be reordered, creating new utterances where speakers overlap
[Alpha] Use enhanced punctuation for this audio
Specify the language configuration
Show child attributes
If one language is set, it will be used for the transcription. Otherwise, language will be auto-detected by the model.
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh
If true, language will be auto-detected on each utterance. Otherwise, language will be auto-detected on the first utterance and then used for the rest of the transcription. If one language is set, this option will be ignored.
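The language_config semantics above, as a request sketch:

```python
# One language forces it; several (or none) let the model auto-detect.
# code_switching=True re-detects the language on every utterance; it is
# ignored when a single language is set.
language_config = {
    "languages": ["fr", "en"],
    "code_switching": False,
}
```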
Pre-recorded transcription's result when status is "done"
Show child attributes
Metadata for the given transcription & audio file
Show child attributes
Duration of the transcribed audio file
3600
Number of distinct channels in the transcribed audio file
x >= 1
1
Billed duration in seconds (audio_duration * number_of_distinct_channels)
3600
Duration of the transcription in seconds
20
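The billing formula stated above (billed duration = audio_duration * number_of_distinct_channels), in code:

```python
def billed_duration_s(audio_duration_s: float, number_of_distinct_channels: int) -> float:
    # billed_duration = audio_duration * number_of_distinct_channels (seconds)
    return audio_duration_s * number_of_distinct_channels
```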
Transcription of the audio speech
Show child attributes
The full transcription in plain text, without any other information
All the languages detected in the audio, sorted from most to least detected
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh
["en"]
Transcribed speech utterances present in the audio
Show child attributes
Start timestamp in seconds of this utterance
End timestamp in seconds of this utterance
Confidence on the transcribed utterance (1 = 100% confident)
Audio channel this utterance was transcribed from
x >= 0
List of words of the utterance, split by timestamp
Show child attributes
Spoken word
Start timestamp in seconds of the spoken word
End timestamp in seconds of the spoken word
Confidence on the transcribed word (1 = 100% confident)
Transcription for this utterance
Spoken language in this utterance
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh
"en"
If diarization is enabled, speaker identification number
x >= 0
If sentences has been enabled, sentences results
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
Show child attributes
If sentences has been enabled, transcription as sentences.
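The utterance/word structure described above can be consumed directly; for example, summing per-speaker talk time (speaker_talk_time is an illustrative helper; the speaker field is present only when diarization is enabled):

```python
def speaker_talk_time(utterances):
    # Sum (end - start) seconds per speaker across all utterances.
    totals = {}
    for u in utterances:
        spk = u.get("speaker", 0)  # jobs without diarization have no speaker field
        totals[spk] = totals.get(spk, 0.0) + (u["end"] - u["start"])
    return totals
```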
If translation has been enabled, translation of the audio speech transcription
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
List of translated transcriptions, one per target language
Show child attributes
Contains the error details of the failed addon
The full transcription in plain text, without any other information
All the languages detected in the audio, sorted from most to least detected
Target language in ISO 639-1 format you want the transcription translated to
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, wo, yi, yo, zh
["en"]
Transcribed speech utterances present in the audio
Show child attributes
Start timestamp in seconds of this utterance
End timestamp in seconds of this utterance
Confidence on the transcribed utterance (1 = 100% confident)
Audio channel this utterance was transcribed from
x >= 0
List of words of the utterance, split by timestamp
Show child attributes
Spoken word
Start timestamp in seconds of the spoken word
End timestamp in seconds of the spoken word
Confidence on the transcribed word (1 = 100% confident)
Transcription for this utterance
Spoken language in this utterance
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh
"en"
If diarization is enabled, speaker identification number
x >= 0
If sentences has been enabled, sentences results for this translation
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
Show child attributes
Status code of the addon error
500
Reason of the addon error
Detailed message of the addon error
If sentences has been enabled, transcription as sentences.
If subtitles has been enabled, subtitles results for this translation
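Every add-on result repeats the same envelope: a success flag, an empty flag, an execution time, and error details (status_code, reason, message) on failure. A defensive accessor sketch; the key names (success, is_empty, error, results) are inferred from the descriptions above and should be verified against a real response:

```python
def addon_payload(addon):
    # Returns the add-on payload, or None when the add-on failed or was empty.
    if addon is None or not addon.get("success"):
        # On failure, addon["error"] carries status_code, reason and message.
        return None
    if addon.get("is_empty"):
        return None
    return addon.get("results")
```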
If summarization has been enabled, summarization of the audio speech transcription
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If summarization has been enabled, summary of the transcription
If moderation has been enabled, moderation of the audio speech transcription
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If moderation has been enabled, moderated transcription
If named_entity_recognition has been enabled, the detected entities
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
Show child attributes
If named_entity_recognition has been enabled, the detected entities.
If name_consistency has been enabled, Gladia will improve the consistency of the names across the transcription
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If name_consistency has been enabled, Gladia will improve the consistency of the names across the transcription
If speaker_reidentification has been enabled, results of the AI speaker reidentification.
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
Show child attributes
If speaker_reidentification has been enabled, results of the AI speaker reidentification.
If structured_data_extraction has been enabled, structured data extraction results
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
Show child attributes
If structured_data_extraction has been enabled, results of the AI structured data extraction for the defined classes.
If sentiment_analysis has been enabled, sentiment analysis of the audio speech transcription
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If sentiment_analysis has been enabled, Gladia will analyze the sentiments and emotions of the audio
If audio_to_llm has been enabled, audio to llm results of the audio speech transcription
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If audio_to_llm has been enabled, results of the AI custom analysis
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If sentences has been enabled, sentences of the audio speech transcription. Deprecated: content will move to the transcription object.
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If sentences has been enabled, transcription as sentences.
If display_mode has been enabled, the output will be reordered, creating new utterances where speakers overlap
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If display_mode has been enabled, proposes an alternative display output.
If chapterization has been enabled, chapter names will be generated for different parts of the given audio.
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
If chapterization has been enabled, chapter names generated for different parts of the given audio.
If diarization has been requested and an error has occurred, the result will appear here
Show child attributes
The audio intelligence model succeeded in producing a valid output
The audio intelligence model returned an empty value
Time the audio intelligence model took to complete the task
null if success is true. Contains the error details of the failed model
[Deprecated] If diarization has been enabled, the diarization result will appear here
Show child attributes
Start timestamp in seconds of this utterance
End timestamp in seconds of this utterance
Confidence on the transcribed utterance (1 = 100% confident)
Audio channel this utterance was transcribed from
x >= 0
List of words of the utterance, split by timestamp
Show child attributes
Spoken word
Start timestamp in seconds of the spoken word
End timestamp in seconds of the spoken word
Confidence on the transcribed word (1 = 100% confident)
Transcription for this utterance
Spoken language in this utterance
af, am, ar, as, az, ba, be, bg, bn, bo, br, bs, ca, cs, cy, da, de, el, en, es, et, eu, fa, fi, fo, fr, gl, gu, ha, haw, he, hi, hr, ht, hu, hy, id, is, it, ja, jw, ka, kk, km, kn, ko, la, lb, ln, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, nn, no, oc, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, sn, so, sq, sr, su, sv, sw, ta, te, tg, th, tk, tl, tr, tt, uk, ur, uz, vi, yi, yo, zh
"en"
If diarization is enabled, speaker identification number
x >= 0