Audio Intelligence
Content Moderation
Detect potentially inappropriate content in your audio
This feature is in Alpha.
We're looking for feedback to improve this feature; share yours here.
The Content Moderation model scans your transcription for words or expressions that may need to be moderated, and classifies their type and severity.
To use this feature, follow this example:
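The original code sample is not shown here. As a minimal sketch, assuming a REST transcription endpoint, an `x-api-key` auth header, and a `moderation` request flag (the URL, header name, and flag name are assumptions; check the API reference for the exact request shape), enabling the feature might look like:

```python
import json

# Placeholder endpoint -- not the real API URL.
API_URL = "https://api.example.com/v2/transcription"

def build_moderation_request(audio_url: str, api_key: str) -> dict:
    """Build a transcription request with content moderation enabled.

    Returns a dict describing the HTTP call (url, headers, body) that
    you would then send with your HTTP client of choice.
    """
    return {
        "url": API_URL,
        "headers": {
            "x-api-key": api_key,              # assumed auth header name
            "Content-Type": "application/json",
        },
        # `moderation: true` asks the service to run the Content
        # Moderation model on the resulting transcript (assumed flag name).
        "body": json.dumps({"audio_url": audio_url, "moderation": True}),
    }

request = build_moderation_request("https://example.com/audio.mp3", "YOUR_API_KEY")
print(request["body"])
```

From here you would POST `request["body"]` to the endpoint with your HTTP client (e.g. `requests.post`), and the moderation results would appear in the response alongside the transcription.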
With this code, your output will look like this when nothing needs to be moderated:
```json
{
  "success": true,
  "is_empty": false,
  "results": [],
  "exec_time": 1.5126123428344727,
  "error": null
}
```
On the other hand, if profanities were detected, your result would look like this:
```json
{
  "success": true,
  "is_empty": false,
  "results": [
    {
      "success": true,
      "is_empty": false,
      "results": [
        {
          "utterance_id": 0,
          "text": "****.",
          "type": "HATEFUL",
          "severity": "MEDIUM",
          "classifications": [
            "INSULT",
            "VULGARITY"
          ]
        },
        {
          "utterance_id": 3,
          "text": "****",
          "type": "HATEFUL",
          "severity": "MEDIUM",
          "classifications": [
            "INSULT"
          ]
        }
      ],
      "exec_time": 0.23801589012145996,
      "error": null
    }
  ],
  "exec_time": 0.27121686935424805,
  "error": null
}
```
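Because the response nests a `results` array inside each entry of the outer `results` array, flattening it is a common first step when consuming the output. A minimal sketch, using the sample payload above (the helper name `flagged_utterances` is ours, not part of the API):

```python
import json

# Sample response shaped like the documented output above.
RESPONSE = json.loads("""
{
  "success": true,
  "is_empty": false,
  "results": [
    {
      "success": true,
      "is_empty": false,
      "results": [
        {"utterance_id": 0, "text": "****.", "type": "HATEFUL",
         "severity": "MEDIUM", "classifications": ["INSULT", "VULGARITY"]},
        {"utterance_id": 3, "text": "****", "type": "HATEFUL",
         "severity": "MEDIUM", "classifications": ["INSULT"]}
      ],
      "exec_time": 0.238,
      "error": null
    }
  ],
  "exec_time": 0.271,
  "error": null
}
""")

def flagged_utterances(response: dict) -> list:
    """Flatten the nested results arrays into one list of flagged utterances."""
    flagged = []
    for batch in response.get("results", []):
        for item in batch.get("results", []):
            flagged.append(item)
    return flagged

# Print each flagged utterance with its type, severity, and classifications.
for item in flagged_utterances(RESPONSE):
    print(item["utterance_id"], item["type"], item["severity"],
          item["classifications"])
```

An empty inner `results` list, as in the first sample output, simply yields an empty flattened list, so the same helper covers both cases.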