Indicates whether the request was cached.
A list of chat completion choices. Can contain more than one entry if n is greater than 1 in the request.
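For illustration, a minimal sketch of reading multiple choices from a response that has already been parsed into a Python dict; the choices, index, message, and content field names are taken from the standard chat completion shape and the values are placeholders:

```python
# Sketch: iterate over every choice returned when n > 1 was requested.
# `response` stands in for the parsed chat completion response body.
response = {
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "First answer"}},
        {"index": 1, "message": {"role": "assistant", "content": "Second answer"}},
    ],
}

for choice in response["choices"]:
    # Each choice carries its own index and assistant message.
    print(choice["index"], choice["message"]["content"])
```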
The UNIX timestamp (in seconds) of when the chat completion was created.
The unique identifier for the chat completion.
The ID of the model used for the chat completion.
The type of the response object, which is always "chat.completion".
Contains content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders. (Azure OpenAI provider model requests only.)
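A minimal sketch of collecting streamed filtering results keyed by prompt, so that out-of-order arrival does not matter; the prompt_filter_results, prompt_index, and content_filter_results names are assumptions based on the Azure OpenAI content-filtering response shape:

```python
# Sketch: accumulate prompt filter results by prompt index as streaming
# chunks arrive, since entries for different prompts may arrive in any order.
# Field names are assumed, not confirmed by this reference.
filter_results_by_prompt = {}

def handle_chunk(chunk: dict) -> None:
    for entry in chunk.get("prompt_filter_results", []):
        filter_results_by_prompt[entry["prompt_index"]] = entry["content_filter_results"]

# Example: chunks arriving out of order still end up keyed correctly.
handle_chunk({"prompt_filter_results": [{"prompt_index": 1, "content_filter_results": {"hate": {"filtered": False}}}]})
handle_chunk({"prompt_filter_results": [{"prompt_index": 0, "content_filter_results": {"hate": {"filtered": False}}}]})
print(sorted(filter_results_by_prompt))  # [0, 1]
```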
The service tier used for processing the request. Optional.
The system fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
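A short sketch of how the fingerprint can be compared across two responses produced with the same seed; the responses here are placeholder dicts, not the output of any particular client call:

```python
# Sketch: if the system fingerprint differs between two responses made with
# the same seed, the backend configuration changed and outputs may differ
# even though the seed was identical.
def backend_changed(first_response: dict, second_response: dict) -> bool:
    return first_response.get("system_fingerprint") != second_response.get("system_fingerprint")

first = {"system_fingerprint": "fp_abc123", "choices": []}
second = {"system_fingerprint": "fp_def456", "choices": []}
if backend_changed(first, second):
    print("Backend configuration changed; identical output is not guaranteed despite the same seed.")
```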
Usage information for a model request.
A chat completion response generated by a model.
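Taken together, the fields above describe a response shaped roughly as follows. This is an illustrative sketch only: values are placeholders, optional and provider-specific fields (cached, prompt filter results, service_tier) may be absent, and the usage sub-fields are assumed from the common chat completion usage shape rather than confirmed by this reference:

```python
# Illustrative shape of a chat completion response, covering the fields
# described above. Values are placeholders.
example_response = {
    "id": "chatcmpl-example",           # unique identifier for the chat completion
    "object": "chat.completion",        # response object type
    "created": 1700000000,              # UNIX timestamp (in seconds) of creation
    "model": "model-id",                # ID of the model used
    "cached": False,                    # whether the request was cached
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "Hello!"}},
    ],
    "service_tier": "default",          # optional: service tier used for processing
    "system_fingerprint": "fp_abc123",  # backend configuration fingerprint
    "usage": {                          # usage information for the request (sub-fields assumed)
        "prompt_tokens": 10,
        "completion_tokens": 3,
        "total_tokens": 13,
    },
}
```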