General flow changes

In the first version of the Gladia API, you had to send everything (audio file/URL, parameters, etc.) in a single HTTP call and keep the connection open until you got your result.

This was not ideal: in many scenarios it meant longer waits for your results, and in case of a connection error you could lose your results entirely even though the transcription itself succeeded.

In V2, we addressed this by splitting the process into multiple steps, and we merged the audio & video endpoints:

1. Upload your file

This step is optional if you are already working with audio URLs.

If you’re working with audio or video files, you’ll need to upload them first using our /upload endpoint with the multipart/form-data content type, since the Gladia /v2/pre-recorded endpoint now only accepts audio URLs.

More information about this step is available in the API Reference.

curl --request POST \
  --url https://api.gladia.io/v2/upload \
  --header 'Content-Type: multipart/form-data' \
  --header 'x-gladia-key: YOUR_GLADIA_API_TOKEN' \
  --form audio=@/path/to/your/audio/conversation.wav

Example response:

{
  "audio_url": "https://api.gladia.io/file/636c70f6-92c1-4026-a8b6-0dfe3ecf826f",
  "audio_metadata": {
    "id": "636c70f6-92c1-4026-a8b6-0dfe3ecf826f",
    "filename": "conversation.wav",
    "extension": "wav",
    "size": 99515383,
    "audio_duration": 4146.468542,
    "number_of_channels": 2
  }
}

We will now proceed to the next steps using the returned audio_url.

2. Transcribe

We’ll now make the transcription request to Gladia’s API.

Instead of /audio/text/audio-transcription, we now use /v2/pre-recorded.

Since /v2/pre-recorded does not accept audio files directly, the Content-Type is no longer multipart/form-data but application/json.

More information about this step is available in the API Reference.
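
As a sketch, the request could look like this, reusing the audio_url returned by the upload step (additional parameters are optional and documented in the API Reference):

# Sketch of a transcription request using the audio_url from the upload step
curl --request POST \
  --url https://api.gladia.io/v2/pre-recorded \
  --header 'Content-Type: application/json' \
  --header 'x-gladia-key: YOUR_GLADIA_API_TOKEN' \
  --data '{
    "audio_url": "https://api.gladia.io/file/636c70f6-92c1-4026-a8b6-0dfe3ecf826f"
  }'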

  • Old V1: The HTTP connection is kept open until you get your transcription result, and there’s no third step.

  • New V2: You get an instant response to the request with an id and a result_url.
    The id is your transcription ID, which you will use to get your transcription result once it’s done. You don’t have to keep any HTTP connection open on your side.
    The result_url is returned for convenience. It is a pre-built URL containing your transcription id that you can use to get your result in the next step.
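
For illustration, that instant response could look along these lines (the values are placeholders, and the exact shape of result_url is only an example):

{
  "id": "45463597-20b7-4af7-b3b3-f894fb4b8ba7",
  "result_url": "https://api.gladia.io/v2/pre-recorded/45463597-20b7-4af7-b3b3-f894fb4b8ba7"
}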

3. Get the transcription result

Since in V1 you already got the transcription result in the previous step, this step is only relevant for V2.

You can get your transcription results in 3 different ways:
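
One of them is polling the result_url you received in the previous step with a GET request; a minimal sketch (replace YOUR_RESULT_URL with the value returned by /v2/pre-recorded):

# Poll the pre-built result URL until the transcription is done
curl --request GET \
  --url YOUR_RESULT_URL \
  --header 'x-gladia-key: YOUR_GLADIA_API_TOKEN'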

Transcription Input & Output changes

In addition to the transcription flow changes, the input & output formats have also changed. For the exhaustive documentation of the V2 input/output, please refer to the API Reference part of the documentation.

Input changes

The most efficient way to see the new inputs is to check the API Reference, but here’s a quick recap of the most commonly used parameter changes:

V1                    V2
toggle_diarization    diarization
language_behaviour    detect_language, enable_code_switching, language
output_format         subtitles + subtitles_config
webhook_url           callback_url
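
As an illustration, a V1 request that used toggle_diarization, language_behaviour and webhook_url could translate into a V2 JSON body along these lines (parameter names come from the table above; the values are placeholders, see the API Reference for the accepted ones):

{
  "audio_url": "https://api.gladia.io/file/636c70f6-92c1-4026-a8b6-0dfe3ecf826f",
  "diarization": true,
  "detect_language": true,
  "callback_url": "https://your-server.example/gladia-callback"
}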

Output changes

Here is a general changelog for the output part of the transcription’s core features:

To dive deeper into the V2 version of the API, please take a look at the following: