Understand the workflow for a live transcription session
Provide the `encoding`, `sample_rate`, `bit_depth` and `channels` of your audio stream, as we need them to parse your audio chunks.
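As a sketch, the audio parameters could be declared like this (the values are assumptions; set them to match your actual stream). Together they also determine how many bytes one second of raw audio occupies, which is why they are required to parse your chunks:

```typescript
// Hypothetical audio configuration for a live session; the values are
// assumptions -- use the actual properties of your stream.
const audioConfig = {
  encoding: "wav/pcm", // assumed encoding label
  sample_rate: 16000,  // samples per second
  bit_depth: 16,       // bits per sample
  channels: 1,         // mono
};

// One second of raw audio = sample_rate * channels * (bit_depth / 8) bytes.
const bytesPerSecond =
  audioConfig.sample_rate * audioConfig.channels * (audioConfig.bit_depth / 8);
// 16000 * 1 * 2 = 32000 bytes per second for the values above
```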
Use `messages_config` to choose which messages you receive over the WebSocket, and `callback_config` for callback messages.
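A minimal sketch of how these two configuration blocks might sit side by side; the two top-level keys come from this guide, but the nested option names and the URL are hypothetical placeholders:

```typescript
// Sketch of message routing configuration. messages_config and
// callback_config are the documented keys; the nested options below
// are illustrative placeholders, not the exact option names.
const sessionConfig = {
  messages_config: {
    // which message types to receive over the WebSocket (illustrative)
    receive_partial_transcripts: true,
  },
  callback_config: {
    // where callback messages should be delivered (illustrative URL)
    url: "https://example.com/live-callback",
  },
};
```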
Here’s an example of how to read a transcript message received through a WebSocket:
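A minimal reader could look like this. The exact message shape (a JSON object with a `type` field and a text payload) is an assumed illustration, not the exact wire format:

```typescript
// Parse an incoming WebSocket message and extract the transcript text.
// The "type" and "text" field names are assumptions for illustration.
function readTranscript(raw: string): string | null {
  const message = JSON.parse(raw);
  if (message.type !== "transcript") {
    return null; // some other lifecycle or acknowledgement message
  }
  return message.text;
}

// In a real session you would wire this to the socket, e.g.:
//   socket.addEventListener("message", (ev) => readTranscript(ev.data));
```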
If each participant has a separate audio stream, interleave the streams into a single multi-channel stream (for example with an `interleaveAudio` function) before sending it. Each transcript message then contains a `channel` field that indicates which audio channel (and thus which participant) the transcription belongs to. For example, you can interleave the per-channel buffers of a `channelsData` array:
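One possible implementation of such an `interleaveAudio` helper, assuming 16-bit PCM samples and equally long per-channel buffers:

```typescript
// Interleave per-channel PCM buffers (channelsData) into one stream:
// output frame f holds the f-th sample of every channel, in channel
// order. Assumes 16-bit samples and equal-length input buffers.
function interleaveAudio(channelsData: Int16Array[]): Int16Array {
  const channels = channelsData.length;
  const frames = channelsData[0].length;
  const interleaved = new Int16Array(frames * channels);
  for (let frame = 0; frame < frames; frame++) {
    for (let channel = 0; channel < channels; channel++) {
      interleaved[frame * channels + channel] = channelsData[channel][frame];
    }
  }
  return interleaved;
}
```

With two participants, samples from channel 0 and channel 1 alternate in the output, and each transcript comes back tagged with the matching `channel` value.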
When you are done, send a `stop_recording` message. We will process the remaining audio chunks and start the post-processing phase, in which we put together the final audio file and results with the add-ons you requested. You’ll receive a message at every step of the process in the WebSocket, or in the callback if configured. Once post-processing is done, the WebSocket is closed with code 1000.
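The end-of-session message can be as simple as the following; the exact payload shape (a JSON object with a `type` field) is an assumption based on the message name:

```typescript
// Hypothetical end-of-session message; the exact payload shape is an
// assumption based on the stop_recording message name.
const stopMessage = JSON.stringify({ type: "stop_recording" });
// socket.send(stopMessage);
// ...then keep reading post-processing messages until the server
// closes the WebSocket with code 1000.
```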
Instead of sending a `stop_recording` message, you can also close the WebSocket yourself with the code 1000. We will still do the post-processing in the background and send you the messages through the callback you defined.

You can also retrieve the final results from the `GET /v2/live/:id` endpoint with the `id` you received from the initial request.
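A small sketch of building that result URL; the base URL is a placeholder to substitute with your actual API host:

```typescript
// Build the result URL for a finished session. baseUrl is a
// placeholder for your API host; ":id" in GET /v2/live/:id is
// replaced by the session id returned by the initial request.
function liveResultUrl(baseUrl: string, id: string): string {
  return `${baseUrl}/v2/live/${encodeURIComponent(id)}`;
}

// e.g. fetch(liveResultUrl("https://api.example.com", sessionId))
```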