genai.extensions.llama_index package¶
Extension for the LlamaIndex library
- class genai.extensions.llama_index.IBMGenAILlamaIndex[source]¶
Bases: LLM

- __init__(*, client, model_id, callback_manager=None, **kwargs)[source]¶
 Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Parameters:
 client (Client)
model_id (str)
callback_manager (CallbackManager | None)
kwargs (Any)
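Examples

A minimal construction sketch. Credentials.from_env() and the model id below are assumptions about your setup; substitute your own credentials handling and model.

```python
from genai import Client, Credentials
from genai.extensions.llama_index import IBMGenAILlamaIndex

# Assumption: credentials are supplied via environment variables
# (e.g. GENAI_KEY); pass explicit values otherwise.
client = Client(credentials=Credentials.from_env())

llm = IBMGenAILlamaIndex(
    client=client,
    model_id="google/flan-ul2",  # placeholder model id
)
```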
- async achat(messages, **kwargs)[source]¶
 Async chat endpoint for LLM.
- Parameters:
 messages (Sequence[ChatMessage]) – Sequence of chat messages.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Returns:
 Chat response from the LLM.
- Return type:
 ChatResponse
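Examples

A usage sketch, assuming llm is an IBMGenAILlamaIndex instance as constructed in __init__ above; the ChatMessage import path depends on your llama-index version.

```python
import asyncio

from llama_index.llms import ChatMessage  # llama_index.core.llms on newer versions

async def main() -> None:
    messages = [
        ChatMessage(role="system", content="You are a helpful assistant."),
        ChatMessage(role="user", content="Describe retrieval-augmented generation in one sentence."),
    ]
    response = await llm.achat(messages)
    print(response.message.content)

asyncio.run(main())
```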
- async acomplete(*args, **kwargs)[source]¶
 Async completion endpoint for LLM.
If the LLM is a chat model, the prompt is transformed into a single user message.
- Parameters:
 prompt (str) – Prompt to send to the LLM.
formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Returns:
 Completion response from the LLM.
- Return type:
 CompletionResponse
Examples

```python
response = await llm.acomplete("your prompt")
print(response.text)
```
- async astream_chat(messages, **kwargs)[source]¶
 Async streaming chat endpoint for LLM.
- Parameters:
 messages (Sequence[ChatMessage]) – Sequence of chat messages.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Yields:
 ChatResponse – An async generator of ChatResponse objects, each containing a new token of the response.
- Return type:
 AsyncGenerator[ChatResponse, None]
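Examples

A sketch of consuming the async stream, assuming llm and messages as in the achat example above.

```python
import asyncio

async def stream_chat_demo() -> None:
    gen = await llm.astream_chat(messages)
    async for chunk in gen:
        # chunk.delta holds the newly generated text;
        # chunk.message.content holds the response accumulated so far.
        print(chunk.delta, end="", flush=True)
    print()

asyncio.run(stream_chat_demo())
```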
- async astream_complete(*args, **kwargs)[source]¶
 Async streaming completion endpoint for LLM.
If the LLM is a chat model, the prompt is transformed into a single user message.
- Parameters:
 prompt (str) – Prompt to send to the LLM.
formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Yields:
 CompletionResponse – An async generator of CompletionResponse objects, each containing a new token of the response.
- Return type:
 AsyncGenerator[CompletionResponse, None]
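Examples

A sketch of consuming the async completion stream, assuming llm as constructed above.

```python
import asyncio

async def stream_complete_demo() -> None:
    gen = await llm.astream_complete("your prompt")
    async for chunk in gen:
        # chunk.delta is the new text; chunk.text is the full response so far.
        print(chunk.delta, end="", flush=True)
    print()

asyncio.run(stream_complete_demo())
```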
- chat(messages, **kwargs)[source]¶
 Chat endpoint for LLM.
- Parameters:
 messages (Sequence[ChatMessage]) – Sequence of chat messages.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Returns:
 Chat response from the LLM.
- Return type:
 ChatResponse
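Examples

A synchronous usage sketch, assuming llm as constructed above; the ChatMessage import path depends on your llama-index version.

```python
from llama_index.llms import ChatMessage  # llama_index.core.llms on newer versions

response = llm.chat([ChatMessage(role="user", content="Hello!")])
print(response.message.content)
```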
- classmethod class_name()[source]¶
 Get the class name, used as a unique ID in serialization.
This provides a key that makes serialization robust against actual class name changes.
- Return type:
 str
- complete(*args, **kwargs)[source]¶
 Completion endpoint for LLM.
If the LLM is a chat model, the prompt is transformed into a single user message.
- Parameters:
 prompt (str) – Prompt to send to the LLM.
formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Returns:
 Completion response from the LLM.
- Return type:
 CompletionResponse
Examples

```python
response = llm.complete("your prompt")
print(response.text)
```
- conversation_id: str | None¶
 
- data: PromptTemplateData | None¶
 
- property metadata: LLMMetadata¶
 LLM metadata.
- Returns:
 LLM metadata containing various information about the LLM.
- Return type:
 LLMMetadata
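Examples

A small inspection sketch; the field names follow llama-index's LLMMetadata, and the values depend on the configured model.

```python
meta = llm.metadata
print(meta.model_name, meta.context_window, meta.is_chat_model)
```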
- model_id: str¶
 
- moderations: ModerationParameters | None¶
 
- parameters: TextGenerationParameters | None¶
 
- parent_id: str | None¶
 
- prompt_id: str | None¶
 
- prompt_template_id: str | None¶
 
- stream_chat(messages, **kwargs)[source]¶
 Streaming chat endpoint for LLM.
- Parameters:
 messages (Sequence[ChatMessage]) – Sequence of chat messages.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Yields:
 ChatResponse – A generator of ChatResponse objects, each containing a new token of the response.
- Return type:
 Generator[ChatResponse, None, None]
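Examples

A sketch assuming llm and messages as above; each chunk's delta attribute carries the newly generated text.

```python
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)
print()
```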
- stream_complete(*args, **kwargs)[source]¶
 Streaming completion endpoint for LLM.
If the LLM is a chat model, the prompt is transformed into a single user message.
- Parameters:
 prompt (str) – Prompt to send to the LLM.
formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Yields:
 CompletionResponse – A generator of CompletionResponse objects, each containing a new token of the response.
- Return type:
 Generator[CompletionResponse, None, None]
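Examples

A sketch assuming llm as constructed above; as with stream_chat(), each chunk's delta attribute carries the newly generated text.

```python
for chunk in llm.stream_complete("your prompt"):
    print(chunk.delta, end="", flush=True)
print()
```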
- trim_method: str | TrimMethod | None¶
 
- use_conversation_parameters: bool | None¶
 
- class genai.extensions.llama_index.IBMGenAILlamaIndexEmbedding[source]¶
Bases: BaseEmbedding

- classmethod class_name()[source]¶
 Get the class name, used as a unique ID in serialization.
This provides a key that makes serialization robust against actual class name changes.
- Return type:
 str
- embed_batch_size: int¶
 
- execution_options: dict | CreateExecutionOptions | None¶
 
- model_id: str¶
 
- parameters: dict | TextEmbeddingParameters | None¶
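Examples

A construction-and-usage sketch; the model id is a placeholder, Credentials.from_env() is an assumption about your setup, and get_text_embedding() comes from the llama-index BaseEmbedding base class.

```python
from genai import Client, Credentials
from genai.extensions.llama_index import IBMGenAILlamaIndexEmbedding

client = Client(credentials=Credentials.from_env())

embedding = IBMGenAILlamaIndexEmbedding(
    client=client,
    model_id="sentence-transformers/all-minilm-l6-v2",  # placeholder model id
)

vector = embedding.get_text_embedding("hello world")
print(len(vector))  # embedding dimensionality
```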
 
Submodules¶
- genai.extensions.llama_index.embeddings module
  - IBMGenAILlamaIndexEmbedding: callback_manager, class_name(), client, embed_batch_size, execution_options, model_id, model_name, num_workers, parameters
- genai.extensions.llama_index.llm module
  - IBMGenAILlamaIndex: __init__(), achat(), acomplete(), astream_chat(), astream_complete(), callback_manager, chat(), class_name(), client, complete(), completion_to_prompt, conversation_id, data, messages_to_prompt, metadata, model_id, moderations, output_parser, parameters, parent_id, prompt_id, prompt_template_id, pydantic_program_mode, query_wrapper_prompt, stream_chat(), stream_complete(), system_prompt, trim_method, use_conversation_parameters
  - to_genai_message(), to_genai_messages()