genai.text.chat.chat_generation_service module#

pydantic model genai.text.chat.chat_generation_service.BaseServices[source]#

Bases: BaseServiceServices

Config:
  • extra: str = forbid

  • validate_assignment: bool = True

  • validate_default: bool = True

field RequestService: type[RequestService] = <class 'genai.request.request_service.RequestService'>#

class genai.text.chat.chat_generation_service.ChatService[source]#

Bases: BaseService[BaseServiceConfig, BaseServices]

Services#

alias of BaseServices

__init__(*, api_client, config=None, services=None)[source]#
Parameters:
  • api_client (ApiClient) –

  • config (BaseServiceConfig | dict | None) –

  • services (BaseServices | None) –

create(*, conversation_id=None, model_id=None, messages=None, moderations=None, parameters=None, parent_id=None, prompt_id=None, prompt_template_id=None, trim_method=None, use_conversation_parameters=None)[source]#

Example:

from genai import Client, Credentials
from genai.text.chat import HumanMessage, TextGenerationParameters

client = Client(credentials=Credentials.from_env())

# Create a new conversation
response = client.text.chat.create(
    model_id="meta-llama/llama-3-70b-instruct",
    messages=[HumanMessage(content="Describe the game of chess.")],
    parameters=TextGenerationParameters(max_new_tokens=100),
)
conversation_id = response.conversation_id
print(f"Response: {response.results[0].generated_text}")

# Continue in the conversation
response = client.text.chat.create(
    conversation_id=conversation_id,
    use_conversation_parameters=True,
    messages=[HumanMessage(content="Who is the best player of that game?")]
)
print(f"Response: {response.results[0].generated_text}")
Parameters:
  • conversation_id (str | None) –

  • model_id (str | None) –

  • messages (Sequence[BaseMessage] | None) –

  • moderations (dict | ModerationParameters | None) –

  • parameters (dict | TextGenerationParameters | None) –

  • parent_id (str | None) –

  • prompt_id (str | None) –

  • prompt_template_id (str | None) –

  • trim_method (str | TrimMethod | None) –

  • use_conversation_parameters (bool | None) –

Return type:

TextChatCreateResponse
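Because `parameters` is typed `dict | TextGenerationParameters | None`, a plain dict with the same field names can be passed in place of the model instance. The sketch below illustrates how such dual input is typically normalized; `GenerationParams` and `normalize` are hypothetical stand-ins for illustration, not part of the SDK:

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class GenerationParams:
    # Hypothetical stand-in mirroring a subset of TextGenerationParameters.
    max_new_tokens: Optional[int] = None
    temperature: Optional[float] = None

def normalize(params: Union[dict, GenerationParams, None]):
    # Accept either a dict or a model instance, in the spirit of the
    # `parameters: dict | TextGenerationParameters | None` annotation.
    if isinstance(params, dict):
        return GenerationParams(**params)
    return params
```

With this pattern, `create(parameters={"max_new_tokens": 100})` and `create(parameters=TextGenerationParameters(max_new_tokens=100))` are equivalent inputs.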

create_stream(*, model_id=None, conversation_id=None, messages=None, moderations=None, parameters=None, parent_id=None, prompt_id=None, prompt_template_id=None, trim_method=None, use_conversation_parameters=None)[source]#

Example:

from genai import Client, Credentials
from genai.text.chat import HumanMessage, TextGenerationParameters

client = Client(credentials=Credentials.from_env())

# Create a new conversation
for response in client.text.chat.create_stream(
    model_id="meta-llama/llama-3-70b-instruct",
    messages=[HumanMessage(content="Describe the game of chess.")],
    parameters=TextGenerationParameters(max_new_tokens=100),
):
</for_replace_placeholder>
    print(f"Chunk retrieved: {response.results[0].generated_text}")
Parameters:
  • model_id (str | None) –

  • conversation_id (str | None) –

  • messages (list[BaseMessage] | None) –

  • moderations (dict | ModerationParameters | None) –

  • parameters (dict | TextGenerationParameters | None) –

  • parent_id (str | None) –

  • prompt_id (str | None) –

  • prompt_template_id (str | None) –

  • trim_method (str | TrimMethod | None) –

  • use_conversation_parameters (bool | None) –

Return type:

Generator[TextChatStreamCreateResponse, None, None]
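Since `create_stream` returns a generator of chunks, the full reply is usually assembled by concatenating `generated_text` across them. A self-contained sketch of that pattern, using a stand-in generator in place of the real `client.text.chat.create_stream(...)` call (chunks without `results` are skipped, since some streamed chunks may carry only metadata):

```python
from types import SimpleNamespace
from typing import Iterator

def fake_stream() -> Iterator[SimpleNamespace]:
    # Stand-in for client.text.chat.create_stream(...): yields objects
    # whose shape mimics TextChatStreamCreateResponse.
    for text in ["Chess is ", "a two-player ", "strategy game."]:
        yield SimpleNamespace(results=[SimpleNamespace(generated_text=text)])

def collect_text(stream) -> str:
    # Concatenate generated_text from every chunk that carries results.
    parts = []
    for chunk in stream:
        if chunk.results:
            parts.append(chunk.results[0].generated_text)
    return "".join(parts)

print(collect_text(fake_stream()))  # -> Chess is a two-player strategy game.
```

The same `collect_text` loop works unchanged when `fake_stream()` is replaced by the real streaming call.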