A legacy text completions response choice.

Optional finish_reason
The reason the model stopped generating tokens. This will be:
- "stop" if the model hit a natural stop point or a provided stop sequence
- "length" if the maximum number of tokens specified in the request was reached
- "content_filter" if content was omitted due to a flag from our content filters

Optional index
Index of the choice in the response.

Optional logprobs
Log probabilities associated with the generated tokens.

Optional text
Text generated by the model for the choice.
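As a rough sketch, the fields above can be modeled as a small dataclass and populated from a parsed JSON response. The `CompletionChoice` and `parse_choice` names are illustrative, not part of any official client library; the only assumption is that the response choice is a JSON object whose keys match the field names documented here.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class CompletionChoice:
    """A legacy text completions response choice (illustrative model)."""
    text: Optional[str] = None            # text generated by the model for the choice
    index: Optional[int] = None           # index of the choice in the response
    logprobs: Optional[Any] = None        # log probabilities for the generated tokens
    finish_reason: Optional[str] = None   # "stop", "length", or "content_filter"

def parse_choice(data: dict) -> CompletionChoice:
    # Every field is optional, so missing keys simply stay None.
    return CompletionChoice(
        text=data.get("text"),
        index=data.get("index"),
        logprobs=data.get("logprobs"),
        finish_reason=data.get("finish_reason"),
    )

choice = parse_choice({"text": "Hello", "index": 0, "finish_reason": "stop"})
```

Checking `finish_reason` before using `text` is the usual pattern: a value of "length" means the output was truncated, while "content_filter" means some content was omitted.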