genai.extensions.langchain.chat_llm module#
Wrapper around IBM GenAI APIs for use in LangChain.
- class genai.extensions.langchain.chat_llm.LangChainChatInterface[source]#
Bases: BaseChatModel
Class implementing the LangChain chat model interface (BaseChatModel) for interacting with the IBM GenAI chat API.
Example:
    from genai import Client, Credentials
    from genai.extensions.langchain import LangChainChatInterface
    from langchain_core.messages import HumanMessage, SystemMessage
    from genai.schema import TextGenerationParameters

    client = Client(credentials=Credentials.from_env())
    llm = LangChainChatInterface(
        client=client,
        model_id="meta-llama/llama-3-70b-instruct",
        parameters=TextGenerationParameters(
            max_new_tokens=250,
        ),
    )
    response = llm.generate(messages=[[HumanMessage(content="Hello world!")]])
    print(response)
- cache: BaseCache | bool | None#
Whether to cache the response.
If true, will use the global cache.
If false, will not use a cache.
If None, will use the global cache if it’s set, otherwise no cache.
If an instance of BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
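For example, a per-process cache can be supplied directly. A minimal sketch, assuming the client and model_id from the example above:

    from langchain_core.caches import InMemoryCache

    # Cache responses in memory for this process; repeated identical
    # requests are served from the cache instead of the API.
    llm = LangChainChatInterface(
        client=client,
        model_id="meta-llama/llama-3-70b-instruct",
        cache=InMemoryCache(),
    )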
- callback_manager: BaseCallbackManager | None#
[DEPRECATED] Callback manager to add to the run trace.
- callbacks: Callbacks#
Callbacks to add to the run trace.
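Any standard LangChain callback handler can be attached here. A sketch using the stock StdOutCallbackHandler from langchain_core, with the client from the example above:

    from langchain_core.callbacks import StdOutCallbackHandler

    # Print run events (start, end, errors) to stdout as the chain runs.
    llm = LangChainChatInterface(
        client=client,
        model_id="meta-llama/llama-3-70b-instruct",
        callbacks=[StdOutCallbackHandler()],
    )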
- conversation_id: str | None#
- get_num_tokens(text)[source]#
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
- Parameters:
text (str) – The string input to tokenize.
- Returns:
The integer number of tokens in the text.
- Return type:
int
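A quick way to check prompt size before sending a request, reusing the llm instance from the example above (the printed count is illustrative; the exact value depends on the model’s tokenizer):

    prompt = "How far is Mars from Earth?"
    n_tokens = llm.get_num_tokens(prompt)
    print(n_tokens)  # e.g. 8 -- model-dependent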
- get_num_tokens_from_messages(messages)[source]#
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
- Parameters:
messages (list[BaseMessage]) – The message inputs to tokenize.
- Returns:
The sum of the number of tokens across the messages.
- Return type:
int
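A sketch counting tokens across a short conversation, again reusing the llm instance from above:

    from langchain_core.messages import HumanMessage, SystemMessage

    messages = [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="Summarize the plot of Hamlet."),
    ]
    # Sum of token counts over both messages.
    total = llm.get_num_tokens_from_messages(messages)
    print(total)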
- get_token_ids(text)[source]#
Return the ordered ids of the tokens in a text.
- Parameters:
text (str) – The string input to tokenize.
- Returns:
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
- Return type:
list[int]
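A minimal sketch, reusing the llm instance from above (the id values shown are placeholders; they depend on the model’s tokenizer):

    ids = llm.get_token_ids("Hello world!")
    print(ids)  # token ids in the order they occur, e.g. [9906, 1917, 0]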
- property lc_secrets: dict[str, str]#
A map of constructor argument names to secret ids.
- For example:
{"openai_api_key": "OPENAI_API_KEY"}
- classmethod load_from_file(file, *, client)[source]#
Load a saved model configuration from a file.
- Parameters:
file (str | Path) – Path to the serialized model file.
client (Client) – Client instance to bind to the loaded model.
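A minimal usage sketch; the file path is a hypothetical placeholder for a previously serialized model:

    # "chat_llm.yaml" is a hypothetical path to a saved model file.
    llm = LangChainChatInterface.load_from_file("chat_llm.yaml", client=client)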
- metadata: Dict[str, Any] | None#
Metadata to add to the run trace.
- model_id: str#
- moderations: ModerationParameters | None#
- parameters: TextGenerationParameters | None#
- parent_id: str | None#
- prompt_id: str | None#
- prompt_template_id: str | None#
- streaming: bool | None#
- tags: List[str] | None#
Tags to add to the run trace.
- trim_method: str | TrimMethod | None#
- use_conversation_parameters: bool | None#
- verbose: bool#
Whether to print out response text.
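Most of these attributes are plain constructor arguments. A sketch combining a few of them (streaming, tags, metadata, verbose), reusing the client and imports from the first example:

    llm = LangChainChatInterface(
        client=client,
        model_id="meta-llama/llama-3-70b-instruct",
        streaming=True,
        tags=["docs-example"],
        metadata={"source": "api-reference"},
        verbose=True,
    )

    # With streaming enabled, chunks arrive as they are generated.
    for chunk in llm.stream([HumanMessage(content="Tell me a joke.")]):
        print(chunk.content, end="", flush=True)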