LLMReply

Summarizes LLM metrics.
- Indicates whether the request was successful.
- The number of tokens in the generated content.
- The time in milliseconds to produce the first token of the generated content.
- The time in milliseconds to produce the last token of the generated content.
- Available options: `LLMChunk`, `LLMReply`
- The user input.
- The number of words in the generated content.
- The generated content in English, provided by Sutra models when the input is not English.
- Indicates whether this is the final chunk.
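The fields above can be sketched as a single reply object. This is a minimal illustration only: the source lists field descriptions but not their identifiers, so every attribute name below (`success`, `token_count`, `time_to_first_token_ms`, and so on) is an assumed placeholder, not the API's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMReply:
    """Illustrative sketch of the reply metrics; field names are assumptions."""
    success: bool                  # whether the request was successful
    token_count: int               # tokens in the generated content
    time_to_first_token_ms: float  # ms to produce the first token
    time_to_last_token_ms: float   # ms to produce the last token
    reply_type: str                # available options: "LLMChunk" or "LLMReply"
    input: str                     # the user input
    word_count: int                # words in the generated content
    # English version of the content, present when the input is not English
    content_in_english: Optional[str] = None
    is_final: bool = False         # whether this is the final chunk

# Example: a final streamed chunk for a non-English input
reply = LLMReply(
    success=True,
    token_count=42,
    time_to_first_token_ms=120.0,
    time_to_last_token_ms=850.0,
    reply_type="LLMChunk",
    input="Hola, ¿cómo estás?",
    word_count=30,
    content_in_english="Hello, how are you?",
    is_final=True,
)
```

The timing fields make time-to-first-token and total generation latency directly comparable, which is the usual way streaming LLM responses are profiled.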