genai.extensions.llama_index package#
Extension for the LlamaIndex library
- class genai.extensions.llama_index.IBMGenAILlamaIndex[source]#
Bases: LLM
- __init__(*, client, model_id, callback_manager=None, **kwargs)[source]#
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- Parameters:
client (Client) –
model_id (str) –
callback_manager (CallbackManager | None) –
kwargs (Any) –
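A minimal construction sketch is shown below. It assumes the SDK's `Client` and `Credentials` classes from the `genai` package and that `GENAI_KEY` (and optionally `GENAI_API`) are set in the environment; the model id is only an example and may not be available in your account.

```python
from genai import Client, Credentials
from genai.extensions.llama_index import IBMGenAILlamaIndex

# Credentials are read from the environment (GENAI_KEY / GENAI_API).
client = Client(credentials=Credentials.from_env())

# The model id here is illustrative; substitute one enabled for your account.
llm = IBMGenAILlamaIndex(client=client, model_id="google/flan-t5-xl")
```

The instance can then be passed anywhere LlamaIndex expects an `LLM`, e.g. a `ServiceContext` or a query engine.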
- async achat(messages, **kwargs)#
Async chat endpoint for LLM.
- Parameters:
messages (Sequence[ChatMessage]) – Sequence of chat messages.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Returns:
Chat response from the LLM.
- Return type:
ChatResponse
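A short usage sketch, assuming an `IBMGenAILlamaIndex` instance `llm` (see `__init__`) and LlamaIndex's `ChatMessage`; the import path for `ChatMessage` varies between LlamaIndex versions (`llama_index.core.llms` in newer releases, `llama_index.llms` in older ones).

```python
import asyncio
from llama_index.core.llms import ChatMessage

async def main() -> None:
    messages = [
        ChatMessage(role="system", content="You are a helpful assistant."),
        ChatMessage(role="user", content="What is a vector index?"),
    ]
    # Await the async chat call and read the assistant's reply.
    response = await llm.achat(messages)
    print(response.message.content)

asyncio.run(main())
```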
- async acomplete(*args, **kwargs)#
Async completion endpoint for LLM.
If the LLM is a chat model, the prompt is transformed into a single user message.
- Parameters:
prompt (str) – Prompt to send to the LLM.
formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
args (Any) –
- Returns:
Completion response from the LLM.
- Return type:
CompletionResponse
Examples
```python
response = await llm.acomplete("your prompt")
print(response.text)
```
- async astream_chat(messages, **kwargs)#
Async streaming chat endpoint for LLM.
- Parameters:
messages (Sequence[ChatMessage]) – Sequence of chat messages.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Yields:
ChatResponse – An async generator of ChatResponse objects, each containing a new token of the response.
- Return type:
Any
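A consumption sketch for the async stream, assuming an instance `llm` and a `messages` list of `ChatMessage` objects as in `achat`. Each yielded `ChatResponse` carries the newly generated token in its `delta` attribute, which is the usual LlamaIndex streaming convention.

```python
import asyncio
from llama_index.core.llms import ChatMessage

async def main() -> None:
    messages = [ChatMessage(role="user", content="Tell me a short story.")]
    # astream_chat returns an async generator; iterate it to print
    # tokens as they arrive.
    gen = await llm.astream_chat(messages)
    async for chunk in gen:
        print(chunk.delta, end="", flush=True)

asyncio.run(main())
```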
- async astream_complete(*args, **kwargs)#
Async streaming completion endpoint for LLM.
If the LLM is a chat model, the prompt is transformed into a single user message.
- Parameters:
prompt (str) – Prompt to send to the LLM.
formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
args (Any) –
- Yields:
CompletionResponse – An async generator of CompletionResponse objects, each containing a new token of the response.
- Return type:
Any
- chat(messages, **kwargs)#
Chat endpoint for LLM.
- Parameters:
messages (Sequence[ChatMessage]) – Sequence of chat messages.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Returns:
Chat response from the LLM.
- Return type:
ChatResponse
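A synchronous counterpart to `achat`, assuming an instance `llm` and LlamaIndex's `ChatMessage` (import path depends on your LlamaIndex version).

```python
from llama_index.core.llms import ChatMessage

# Blocking chat call; the reply text is on response.message.content.
response = llm.chat([ChatMessage(role="user", content="Hello!")])
print(response.message.content)
```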
- classmethod class_name()[source]#
Get the class name, used as a unique ID in serialization.
This provides a key that makes serialization robust against actual class name changes.
- Return type:
str
- complete(*args, **kwargs)#
Completion endpoint for LLM.
If the LLM is a chat model, the prompt is transformed into a single user message.
- Parameters:
prompt (str) – Prompt to send to the LLM.
formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
args (Any) –
- Returns:
Completion response from the LLM.
- Return type:
CompletionResponse
Examples
```python
response = llm.complete("your prompt")
print(response.text)
```
- conversation_id: str | None#
- data: PromptTemplateData | None#
- property metadata: LLMMetadata#
LLM metadata.
- Returns:
LLM metadata containing various information about the LLM.
- Return type:
LLMMetadata
- model_id: str#
- moderations: ModerationParameters | None#
- parameters: TextGenerationParameters | None#
- parent_id: str | None#
- prompt_id: str | None#
- prompt_template_id: str | None#
- stream_chat(messages, **kwargs)#
Streaming chat endpoint for LLM.
- Parameters:
messages (Sequence[ChatMessage]) – Sequence of chat messages.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
- Yields:
ChatResponse – A generator of ChatResponse objects, each containing a new token of the response.
- Return type:
Any
- stream_complete(*args, **kwargs)#
Streaming completion endpoint for LLM.
If the LLM is a chat model, the prompt is transformed into a single user message.
- Parameters:
prompt (str) – Prompt to send to the LLM.
formatted (bool, optional) – Whether the prompt is already formatted for the LLM, by default False.
kwargs (Any) – Additional keyword arguments to pass to the LLM.
args (Any) –
- Yields:
CompletionResponse – A generator of CompletionResponse objects, each containing a new token of the response.
- Return type:
Any
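A sketch of consuming the synchronous completion stream, assuming an instance `llm`; each yielded `CompletionResponse` exposes the new token on `delta` and the accumulated text on `text`, per the usual LlamaIndex convention.

```python
# Print tokens as they arrive; the loop ends when generation finishes.
for chunk in llm.stream_complete("Write a haiku about the sea"):
    print(chunk.delta, end="", flush=True)
```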
- trim_method: str | TrimMethod | None#
- use_conversation_parameters: bool | None#
- class genai.extensions.llama_index.IBMGenAILlamaIndexEmbedding[source]#
Bases: BaseEmbedding
- classmethod class_name()[source]#
Get the class name, used as a unique ID in serialization.
This provides a key that makes serialization robust against actual class name changes.
- Return type:
str
- embed_batch_size: int#
- execution_options: dict | CreateExecutionOptions | None#
- model_id: str#
- parameters: dict | TextEmbeddingParameters | None#
Submodules#
- genai.extensions.llama_index.llm module
IBMGenAILlamaIndex
IBMGenAILlamaIndex.__init__()
IBMGenAILlamaIndex.achat()
IBMGenAILlamaIndex.acomplete()
IBMGenAILlamaIndex.astream_chat()
IBMGenAILlamaIndex.astream_complete()
IBMGenAILlamaIndex.callback_manager
IBMGenAILlamaIndex.chat()
IBMGenAILlamaIndex.class_name()
IBMGenAILlamaIndex.client
IBMGenAILlamaIndex.complete()
IBMGenAILlamaIndex.completion_to_prompt
IBMGenAILlamaIndex.conversation_id
IBMGenAILlamaIndex.data
IBMGenAILlamaIndex.messages_to_prompt
IBMGenAILlamaIndex.metadata
IBMGenAILlamaIndex.model_id
IBMGenAILlamaIndex.moderations
IBMGenAILlamaIndex.output_parser
IBMGenAILlamaIndex.parameters
IBMGenAILlamaIndex.parent_id
IBMGenAILlamaIndex.prompt_id
IBMGenAILlamaIndex.prompt_template_id
IBMGenAILlamaIndex.pydantic_program_mode
IBMGenAILlamaIndex.query_wrapper_prompt
IBMGenAILlamaIndex.stream_chat()
IBMGenAILlamaIndex.stream_complete()
IBMGenAILlamaIndex.system_prompt
IBMGenAILlamaIndex.trim_method
IBMGenAILlamaIndex.use_conversation_parameters
to_genai_message()
to_genai_messages()