generated_text: The text that was generated by the model.
Optional generated_token_count: The number of generated tokens.
Optional generated_tokens: The list of individual generated tokens. Extra token information is included based on the other flags in the return_options of the request.
Optional input_token_count: The number of input tokens consumed.
Optional input_tokens: The list of input tokens. Extra token information is included based on the other flags in the return_options of the request, but only for decoder-only models.
Optional moderations: The result of any detected moderations.
Optional seed: The seed used, if it exists.
stop_reason: The reason why the call stopped; it is one of a fixed set of values. Note that these values are lower-cased, so test for them case-insensitively.
Together, these fields make up a TextGenResult.
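To make the shape concrete, here is a minimal TypeScript sketch of a result carrying these fields, together with a case-insensitive check of stop_reason. The field names follow the list above; the property types, the TokenInfo shape, and the stoppedBecause helper are assumptions for illustration, not definitions taken from any SDK.

```typescript
// Minimal sketch of the result fields described above. Property types and the
// TokenInfo/moderations shapes are assumptions, not taken from an SDK's types.
interface TokenInfo {
  text: string;      // assumed: the token text
  logprob?: number;  // assumed: extra info controlled by return_options flags
}

interface TextGenResult {
  generated_text: string;          // the text generated by the model
  generated_token_count?: number;  // number of generated tokens
  generated_tokens?: TokenInfo[];  // individual generated tokens
  input_token_count?: number;      // number of input tokens consumed
  input_tokens?: TokenInfo[];      // input tokens (decoder-only models)
  moderations?: unknown;           // result of any detected moderations
  seed?: number;                   // seed used, if it exists
  stop_reason: string;             // why the call stopped (values are lower-cased)
}

// stop_reason values come back lower-cased, so normalize before comparing.
function stoppedBecause(result: TextGenResult, reason: string): boolean {
  return result.stop_reason.toLowerCase() === reason.toLowerCase();
}

// Example: "MAX_TOKENS" is used purely as an illustrative value here.
// if (stoppedBecause(result, "MAX_TOKENS")) { /* output was truncated */ }
```

Normalizing both sides with toLowerCase keeps the comparison robust no matter how the caller spells the reason string.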