A legacy text completions response choice.

interface CompletionsChoice {
    finish_reason?: string;
    index?: number;
    logprobs?: CompletionsLogProbResult;
    text?: string;
}

Properties

finish_reason?: string

The reason the model stopped generating tokens. This will be one of the following (a small handling sketch follows the list):

  • "stop" if the model hit a natural stop point or a provided stop sequence
  • "length" if the maximum number of tokens specified in the request was reached
  • "content_filter" if content was omitted due to a flag from our content filters
index?: number

Index of the choice in the response.

logprobs?: CompletionsLogProbResult

Log probabilities associated with the generated tokens.

text?: string

Text generated by the model for the choice.
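
As a rough usage sketch, the snippet below assumes the surrounding completions response exposes its choices as an array (that response type is not documented on this page; CompletionsLike is a hypothetical stand-in) and collects each choice's text in index order.

// Hypothetical wrapper shape: only the choices array is assumed here.
interface CompletionsLike {
    choices: CompletionsChoice[];
}

function collectTexts(completions: CompletionsLike): string[] {
    return [...completions.choices]
        // Order by index in case the choices are not already sorted.
        .sort((a, b) => (a.index ?? 0) - (b.index ?? 0))
        // Substitute an empty string when text is absent.
        .map((choice) => choice.text ?? "");
}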