A chat completion response generated by a model.

interface ChatsResponse {
    cached: boolean;
    choices: ChatsChoice[];
    created: number;
    id: string;
    model: string;
    object: string;
    prompt_filter_results: ChatsPromptFilterResult[];
    service_tier: string;
    system_fingerprint?: string;
    usage: Usage;
}

Properties

cached: boolean

Indicates whether the request was cached.

choices: ChatsChoice[]

A list of chat completion choices. There can be more than one choice if the n parameter in the request is greater than 1.
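As a sketch of how multiple choices might be consumed, the example below assumes a ChatsChoice shape with an index and a message carrying role/content fields; those field names are assumptions, not defined on this page.

```typescript
// Assumed minimal shape of ChatsChoice for illustration only;
// the real type is defined elsewhere.
interface ChatsChoice {
    index: number;
    message: { role: string; content: string };
}

// A response requested with n = 2 carries one choice per completion.
const choices: ChatsChoice[] = [
    { index: 0, message: { role: "assistant", content: "First answer" } },
    { index: 1, message: { role: "assistant", content: "Second answer" } },
];

// Select a choice by its index field rather than its array position,
// since streaming responses may deliver choices out of order.
const first = choices.find((c) => c.index === 0)!;
console.log(first.message.content); // "First answer"
```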

created: number

The UNIX timestamp (in seconds) of when the chat completion was created.
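Because created is expressed in seconds while JavaScript's Date constructor expects milliseconds, converting it requires a multiply by 1000; the timestamp value below is illustrative.

```typescript
// created is seconds since the Unix epoch; Date takes milliseconds.
const created = 1700000000; // example value, not from a real response
const when = new Date(created * 1000);
console.log(when.toISOString()); // "2023-11-14T22:13:20.000Z"
```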

id: string

The unique identifier for the chat completion.

model: string

The ID of the model used for the chat completion.

object: string

The type of the response object, which is always "chat.completion".

prompt_filter_results: ChatsPromptFilterResult[]

Contains content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders.

(Azure OpenAI provider model requests only.)

service_tier: string

The service tier used for processing a request.

system_fingerprint?: string

Backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
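One way to use this field, sketched below under the assumption that two responses were produced with the same seed: compare their fingerprints to detect a backend change that could affect determinism. The fingerprint values and the sameBackend helper are hypothetical.

```typescript
// Hypothetical helper: two runs with the same seed are only comparable
// for determinism if they report the same backend fingerprint.
function sameBackend(
    a: { system_fingerprint?: string },
    b: { system_fingerprint?: string },
): boolean {
    // A missing fingerprint means we cannot rule out a backend change.
    if (!a.system_fingerprint || !b.system_fingerprint) return false;
    return a.system_fingerprint === b.system_fingerprint;
}

const run1 = { system_fingerprint: "fp_example_1" }; // illustrative value
const run2 = { system_fingerprint: "fp_example_1" };
console.log(sameBackend(run1, run2)); // true
```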

usage: Usage

Usage information for a model request.
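The Usage type is not defined on this page; the sketch below assumes the common prompt/completion/total token fields as an illustration of how the counts relate.

```typescript
// Assumed field names for Usage; the real type is defined elsewhere.
interface Usage {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
}

const usage: Usage = {
    prompt_tokens: 12,
    completion_tokens: 30,
    total_tokens: 42,
};

// total_tokens is the sum of the prompt and completion counts.
const consistent =
    usage.total_tokens === usage.prompt_tokens + usage.completion_tokens;
console.log(consistent); // true
```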