TextGenResult

interface TextGenResult {
    generated_text: string;
    generated_token_count?: number;
    generated_tokens?: WatsonXAI.TextGenTokenInfo[];
    input_token_count?: number;
    input_tokens?: WatsonXAI.TextGenTokenInfo[];
    moderations?: WatsonXAI.ModerationResults;
    seed?: number;
    stop_reason: string;
}
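As a sketch of what a result might look like in practice, a minimal structural stand-in for the interface can be populated as below. The field values are illustrative only, not from a real API call, and the `WatsonXAI` token and moderation types are omitted for brevity:

```typescript
// Minimal local stand-in for TextGenResult; the optional WatsonXAI.* fields
// are left out so the example is self-contained.
interface TextGenResultLike {
    generated_text: string;
    generated_token_count?: number;
    input_token_count?: number;
    seed?: number;
    stop_reason: string;
}

// Illustrative values only.
const result: TextGenResultLike = {
    generated_text: "Hello, world!",
    generated_token_count: 4,
    input_token_count: 7,
    stop_reason: "eos_token",
};
```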

Properties

generated_text: string

The text that was generated by the model.

generated_token_count?: number

The number of generated tokens.

generated_tokens?: WatsonXAI.TextGenTokenInfo[]

The list of individual generated tokens. Extra token information is included based on the other flags in the return_options of the request.
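For illustration, assuming each `TextGenTokenInfo` entry carries the token text and an optional log-probability (present only when the corresponding `return_options` flag was set in the request — the field names here are assumptions, not confirmed by this page), the per-token information might be consumed like this:

```typescript
// Hypothetical token-info shape: the logprob field is assumed to be present
// only when it was requested via return_options.
interface TokenInfoLike {
    text?: string;
    logprob?: number;
}

// Sum the log-probabilities of the tokens that carry one, skipping the rest.
function totalLogprob(tokens: TokenInfoLike[]): number {
    return tokens.reduce((sum, t) => sum + (t.logprob ?? 0), 0);
}
```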

input_token_count?: number

The number of input tokens consumed.

input_tokens?: WatsonXAI.TextGenTokenInfo[]

The list of input tokens. Extra token information is included based on the other flags in the return_options of the request; this applies to decoder-only models only.

moderations?: WatsonXAI.ModerationResults

The result of any detected moderations.

seed?: number

The seed used, if one was set.

stop_reason: string

The reason the call stopped. Can be one of:

  • not_finished - Possibly more tokens to be streamed.
  • max_tokens - Maximum requested tokens reached.
  • eos_token - End of sequence token encountered.
  • cancelled - Request canceled by the client.
  • time_limit - Time limit reached.
  • stop_sequence - Stop sequence encountered.
  • token_limit - Token limit reached.
  • error - Error encountered.

Note that these values will be lower-cased, so test for them case-insensitively.
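Following the note above, comparisons should normalize case before matching. A minimal sketch (the `StopReason` union below simply mirrors the list above; it is not an exported SDK type):

```typescript
// The documented stop_reason values, mirrored as a local union type.
type StopReason =
    | "not_finished"
    | "max_tokens"
    | "eos_token"
    | "cancelled"
    | "time_limit"
    | "stop_sequence"
    | "token_limit"
    | "error";

// Generation is still in progress only while stop_reason is "not_finished";
// lower-case first so the comparison is case-insensitive.
function isComplete(stopReason: string): boolean {
    return stopReason.toLowerCase() !== "not_finished";
}
```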