POST /lemur/v3/generate/summary
Summarize a transcript using LeMUR
curl --request POST \
  --url https://api.assemblyai.com/lemur/v3/generate/summary \
  --header 'Authorization: <api-key>' \
  --header 'Content-Type: application/json' \
  --data '{
  "transcript_ids": [
    "47b95ba5-8889-44d8-bc80-5de38306e582"
  ],
  "context": "This is an interview about wildfires.",
  "final_model": "anthropic/claude-3-5-sonnet",
  "temperature": 0,
  "max_output_size": 3000
}'
{
  "request_id": "5e1b27c2-691f-4414-8bc5-f14678442f9e",
  "usage": {
    "input_tokens": 27,
    "output_tokens": 3
  },
  "response": "- Wildfires in Canada are sending smoke and air pollution across parts of the US, triggering air quality alerts from Maine to Minnesota. Concentrations of particulate matter have exceeded safety levels.\n\n- Weather systems are channeling the smoke through Pennsylvania into the Mid-Atlantic and Northeast regions. New York City has canceled outdoor activities to keep children and vulnerable groups indoors.\n\n- Very small particulate matter can enter the lungs and impact respiratory, cardiovascular and neurological health. Young children, the elderly and those with preexisting conditions are most at risk.\n\n- The conditions causing the poor air quality could get worse or shift to different areas in coming days depending on weather patterns. More wildfires may also contribute to higher concentrations.\n\n- Climate change is leading to longer and more severe fire seasons. Events of smoke traveling long distances and affecting air quality over wide areas will likely become more common in the future.\"\n"
}
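The curl call above can also be sketched in Python using only the standard library. The endpoint, headers, and body fields come from the example request; the `ASSEMBLYAI_API_KEY` environment variable is an illustrative assumption, not part of the API.

```python
import json
import os
import urllib.request

# Same body as the curl example above.
payload = {
    "transcript_ids": ["47b95ba5-8889-44d8-bc80-5de38306e582"],
    "context": "This is an interview about wildfires.",
    "final_model": "anthropic/claude-3-5-sonnet",
    "temperature": 0,
    "max_output_size": 3000,
}

# ASSEMBLYAI_API_KEY is an assumed environment variable for illustration.
req = urllib.request.Request(
    "https://api.assemblyai.com/lemur/v3/generate/summary",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": os.environ.get("ASSEMBLYAI_API_KEY", "<api-key>"),
        "Content-Type": "application/json",
    },
    method="POST",
)

# Sending it would look like:
# with urllib.request.urlopen(req) as resp:
#     summary = json.load(resp)["response"]
```

The request object is built but not sent here, so the snippet can be inspected without an API key.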

Authorizations

Authorization
string
header
required

Body

application/json

Params to generate the summary

transcript_ids
string<uuid>[]

A list of completed transcripts with text. Up to a maximum of 100 files or 100 hours, whichever is lower. Use either transcript_ids or input_text as input into LeMUR.

input_text
string

Custom formatted transcript data. Maximum size is the context limit of the selected model, which defaults to 100000. Use either transcript_ids or input_text as input into LeMUR.
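For instance, a request body using `input_text` in place of `transcript_ids` (the transcript text shown is illustrative) might look like:

```json
{
  "input_text": "Speaker A: Thanks for joining us to talk about the wildfires...",
  "final_model": "anthropic/claude-3-5-sonnet",
  "max_output_size": 3000
}
```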

context

Context to provide the model. This can be a string or a free-form JSON value.
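Since `context` accepts free-form JSON as well as a plain string, a structured variant (the field names inside the object are illustrative, not a defined schema) could be:

```json
{
  "transcript_ids": ["47b95ba5-8889-44d8-bc80-5de38306e582"],
  "context": {
    "topic": "wildfires",
    "audience": "general news readers"
  }
}
```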

final_model
default:default

The model used for the final prompt after compression is performed.

Available options:
anthropic/claude-3-5-sonnet,
anthropic/claude-3-opus,
anthropic/claude-3-haiku,
anthropic/claude-3-sonnet,
anthropic/claude-2-1,
anthropic/claude-2,
default,
anthropic/claude-instant-1-2,
basic,
assemblyai/mistral-7b

max_output_size
integer
default:2000

Maximum output size in tokens, up to 4000.

temperature
number
default:0

The temperature to use for the model. Higher values result in answers that are more creative, lower values are more conservative. Can be any value between 0.0 and 1.0 inclusive.

Required range: 0 <= x <= 1

answer_format
string

How you want the summary to be returned. This can be any text. Examples: "TLDR", "bullet points"
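For example, a body requesting a bullet-point summary (otherwise mirroring the request example above):

```json
{
  "transcript_ids": ["47b95ba5-8889-44d8-bc80-5de38306e582"],
  "answer_format": "bullet points"
}
```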

Response

LeMUR summary response

response
string
required

The response generated by LeMUR.

request_id
string<uuid>
required

The ID of the LeMUR request.

usage
object
required

The usage numbers for the LeMUR request.