The Auto Chapters model automatically segments an audio file into chapters. For each chapter, it provides:
- Summary
- One-line gist
- Headline
- Start and end timestamps
Auto Chapters and Summarization
You can only enable one of the Auto Chapters and Summarization models in the same transcription.
Quickstart
Enable Auto Chapters by setting auto_chapters to true in the transcription config. punctuate must be enabled to use Auto Chapters (punctuate is enabled by default).
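A minimal sketch of enabling Auto Chapters, assuming the AssemblyAI Python SDK (the assemblyai package); the API key and audio URL are placeholders:

```python
import assemblyai as aai

# Placeholder API key -- replace with your own.
aai.settings.api_key = "YOUR_API_KEY"

# punctuate is on by default, so only auto_chapters needs to be set.
config = aai.TranscriptionConfig(auto_chapters=True)

transcript = aai.Transcriber().transcribe(
    "https://example.com/audio.mp3",  # placeholder audio file
    config=config,
)

# Each chapter carries a gist, headline, summary, and start/end times in milliseconds.
for chapter in transcript.chapters:
    print(f"{chapter.start}-{chapter.end} ms: {chapter.headline}")
```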
Example output
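Based on the fields listed in the API reference below, the chapters portion of a completed transcript has roughly the following shape (values are illustrative placeholders, not real model output):

```json
{
  "chapters": [
    {
      "gist": "Opening remarks",
      "headline": "The host introduces the guest and the topic of the episode.",
      "summary": "A one paragraph summary of the content spoken during this chapter...",
      "start": 250,
      "end": 58140
    }
  ]
}
```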
API reference
Request
| Key | Type | Description |
|---|---|---|
| auto_chapters | boolean | Enable Auto Chapters. |
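If you call the REST API directly rather than through an SDK, auto_chapters goes in the JSON body of the transcript request alongside the audio URL; a minimal sketch with placeholder values:

```json
{
  "audio_url": "https://example.com/audio.mp3",
  "auto_chapters": true
}
```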
Response
| Key | Type | Description |
|---|---|---|
| chapters | array | An array of temporally sequential chapters for the audio file. |
| chapters[i].gist | string | A short summary, in a few words, of the content spoken in the i-th chapter. |
| chapters[i].headline | string | A single-sentence summary of the content spoken during the i-th chapter. |
| chapters[i].summary | string | A one-paragraph summary of the content spoken during the i-th chapter. |
| chapters[i].start | number | The starting time, in milliseconds, for the i-th chapter. |
| chapters[i].end | number | The ending time, in milliseconds, for the i-th chapter. |
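Since start and end are reported in milliseconds, a small helper like the following (a sketch for display purposes, not part of the API) can format them as readable timestamps:

```python
def ms_to_timestamp(ms: int) -> str:
    """Convert milliseconds to an H:MM:SS string."""
    seconds = ms // 1000
    hours, remainder = divmod(seconds, 3600)
    minutes, secs = divmod(remainder, 60)
    return f"{hours}:{minutes:02d}:{secs:02d}"

# Assuming `transcript` from the Quickstart sketch above.
for chapter in transcript.chapters:
    print(f"{ms_to_timestamp(chapter.start)} - {ms_to_timestamp(chapter.end)}  {chapter.gist}")
```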
Frequently asked questions
Can I specify the number of chapters to be generated by the Auto Chapters model?
No, the number of chapters generated by the Auto Chapters model isn’t configurable by the user. The model automatically segments the audio file into logical chapters as the topic of conversation changes.
Troubleshooting
Why am I not getting any chapter predictions for my audio file?
One possible reason is that the audio file doesn’t contain enough variety in topic or tone for the model to identify separate chapters. Another is background noise or low-quality audio interfering with the model’s analysis.