genai.extensions.llama_index.llm module#

class genai.extensions.llama_index.llm.IBMGenAILlamaIndex[source]#

Bases: LLM

__init__(*, client, model_id, callback_manager=None, **kwargs)[source]#

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

Parameters:
  • client (Client) –

  • model_id (str) –

  • callback_manager (CallbackManager | None) –

  • kwargs (Any) –
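A minimal construction sketch, assuming API credentials are exposed through environment variables and that the `google/flan-ul2` model id (used here purely for illustration) is available to your account:

```python
from genai import Client, Credentials
from genai.extensions.llama_index.llm import IBMGenAILlamaIndex
from genai.schema import TextGenerationParameters

# Credentials.from_env() reads the API key and endpoint from the environment.
client = Client(credentials=Credentials.from_env())

llm = IBMGenAILlamaIndex(
    client=client,
    model_id="google/flan-ul2",  # illustrative model id
    parameters=TextGenerationParameters(max_new_tokens=100),
)
```

Additional keyword arguments (such as `parameters` above) map onto the optional attributes documented below, e.g. `parameters`, `moderations`, and `prompt_id`.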

async achat(messages, **kwargs)#

Async chat endpoint for LLM.

Parameters:
  • messages (Sequence[ChatMessage]) – Sequence of chat messages.

  • kwargs (Any) – Additional keyword arguments to pass to the LLM.

  • _self (Any) –

Returns:

Chat response from the LLM.

Return type:

ChatResponse

Examples

```python
from llama_index.core.llms import ChatMessage

response = await llm.achat([ChatMessage(role="user", content="Hello")])
print(response.message.content)
```

async acomplete(*args, **kwargs)#

Async completion endpoint for LLM.

If the LLM is a chat model, the prompt is transformed into a single user message.

Parameters:
  • prompt (str) – Prompt to send to the LLM.

  • formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.

  • kwargs (Any) – Additional keyword arguments to pass to the LLM.

  • _self (Any) –

  • args (Any) –

Returns:

Completion response from the LLM.

Return type:

CompletionResponse

Examples

```python
response = await llm.acomplete("your prompt")
print(response.text)
```

async astream_chat(messages, **kwargs)#

Async streaming chat endpoint for LLM.

Parameters:
  • messages (Sequence[ChatMessage]) – Sequence of chat messages.

  • kwargs (Any) – Additional keyword arguments to pass to the LLM.

  • _self (Any) –

Yields:

ChatResponse – An async generator of ChatResponse objects, each containing a new token of the response.

Return type:

Any

Examples

```python
from llama_index.core.llms import ChatMessage

gen = await llm.astream_chat([ChatMessage(role="user", content="Hello")])
async for response in gen:
    print(response.delta, end="", flush=True)
```

async astream_complete(*args, **kwargs)#

Async streaming completion endpoint for LLM.

If the LLM is a chat model, the prompt is transformed into a single user message.

Parameters:
  • prompt (str) – Prompt to send to the LLM.

  • formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.

  • kwargs (Any) – Additional keyword arguments to pass to the LLM.

  • _self (Any) –

  • args (Any) –

Yields:

CompletionResponse – An async generator of CompletionResponse objects, each containing a new token of the response.

Return type:

Any

Examples

```python
gen = await llm.astream_complete("your prompt")
async for response in gen:
    print(response.text, end="", flush=True)
```

callback_manager: CallbackManager#
chat(messages, **kwargs)#

Chat endpoint for LLM.

Parameters:
  • messages (Sequence[ChatMessage]) – Sequence of chat messages.

  • kwargs (Any) – Additional keyword arguments to pass to the LLM.

  • _self (Any) –

Returns:

Chat response from the LLM.

Return type:

ChatResponse

Examples

```python
from llama_index.core.llms import ChatMessage

response = llm.chat([ChatMessage(role="user", content="Hello")])
print(response.message.content)
```

classmethod class_name()[source]#

Get the class name, used as a unique ID in serialization.

This provides a key that makes serialization robust against actual class name changes.

Return type:

str

client: Client#
complete(*args, **kwargs)#

Completion endpoint for LLM.

If the LLM is a chat model, the prompt is transformed into a single user message.

Parameters:
  • prompt (str) – Prompt to send to the LLM.

  • formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.

  • kwargs (Any) – Additional keyword arguments to pass to the LLM.

  • _self (Any) –

  • args (Any) –

Returns:

Completion response from the LLM.

Return type:

CompletionResponse

Examples

```python
response = llm.complete("your prompt")
print(response.text)
```

completion_to_prompt: Callable#
conversation_id: str | None#
data: PromptTemplateData | None#
messages_to_prompt: Callable#
property metadata: LLMMetadata#

LLM metadata.

Returns:

LLM metadata containing various information about the LLM.

Return type:

LLMMetadata
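For example, a quick way to inspect the reported metadata (a hypothetical snippet; field names follow LlamaIndex's `LLMMetadata` model):

```python
meta = llm.metadata
# Context window, maximum output tokens, and the backing model id.
print(meta.context_window, meta.num_output, meta.model_name)
```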

model_id: str#
moderations: ModerationParameters | None#
output_parser: BaseOutputParser | None#
parameters: TextGenerationParameters | None#
parent_id: str | None#
prompt_id: str | None#
prompt_template_id: str | None#
pydantic_program_mode: PydanticProgramMode#
query_wrapper_prompt: BasePromptTemplate | None#
stream_chat(messages, **kwargs)#

Streaming chat endpoint for LLM.

Parameters:
  • messages (Sequence[ChatMessage]) – Sequence of chat messages.

  • kwargs (Any) – Additional keyword arguments to pass to the LLM.

  • _self (Any) –

Yields:

ChatResponse – A generator of ChatResponse objects, each containing a new token of the response.

Return type:

Any

Examples

```python
from llama_index.core.llms import ChatMessage

gen = llm.stream_chat([ChatMessage(role="user", content="Hello")])
for response in gen:
    print(response.delta, end="", flush=True)
```

stream_complete(*args, **kwargs)#

Streaming completion endpoint for LLM.

If the LLM is a chat model, the prompt is transformed into a single user message.

Parameters:
  • prompt (str) – Prompt to send to the LLM.

  • formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.

  • kwargs (Any) – Additional keyword arguments to pass to the LLM.

  • _self (Any) –

  • args (Any) –

Yields:

CompletionResponse – A generator of CompletionResponse objects, each containing a new token of the response.

Return type:

Any

Examples

```python
gen = llm.stream_complete("your prompt")
for response in gen:
    print(response.text, end="", flush=True)
```

system_prompt: str | None#
trim_method: str | TrimMethod | None#
use_conversation_parameters: bool | None#
genai.extensions.llama_index.llm.to_genai_message(message)[source]#
Parameters:

message (ChatMessage) –

Return type:

BaseMessage

genai.extensions.llama_index.llm.to_genai_messages(messages)[source]#
Parameters:

messages (Sequence[ChatMessage]) –

Return type:

list[BaseMessage]
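A short usage sketch for both helpers, with hypothetical messages; the concrete `BaseMessage` subclass returned for each entry depends on its role:

```python
from llama_index.core.llms import ChatMessage, MessageRole

from genai.extensions.llama_index.llm import to_genai_message, to_genai_messages

# Convert a single LlamaIndex chat message to its genai counterpart.
single = to_genai_message(ChatMessage(role=MessageRole.USER, content="Hello"))

# Convert a whole conversation history in one call.
history = to_genai_messages(
    [
        ChatMessage(role=MessageRole.SYSTEM, content="You are a helpful assistant."),
        ChatMessage(role=MessageRole.USER, content="Hello"),
    ]
)
```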