ModelInference for Deployments#

This section shows how to use the ModelInference module with a created deployment.

There are two ways to infer text: using the deployments module, or using the ModelInference module.

Infer text with deployments#

You can directly query generate_text using the deployments module.

client.deployments.generate_text(
    prompt="Example prompt",
    deployment_id=deployment_id)

Creating ModelInference instance#

Start by defining the generation parameters (used later by the module).

from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

generate_params = {
    GenParams.MAX_NEW_TOKENS: 25,
    GenParams.STOP_SEQUENCES: ["\n"]
}

Create the ModelInference itself, using either the credentials together with project_id / space_id, or the previously initialized APIClient (see APIClient initialization).
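As a hedged sketch, credentials for IBM Cloud are typically a plain dict holding the service URL and an API key (the values below are placeholders, not working credentials):

```python
# Placeholder credentials dict (values are illustrative only).
credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "***"
}
print(sorted(credentials))
```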

from ibm_watson_machine_learning.foundation_models import ModelInference

deployed_model = ModelInference(
    deployment_id=deployment_id,
    params=generate_params,
    credentials=credentials,
    project_id=project_id
)

# OR

deployed_model = ModelInference(
    deployment_id=deployment_id,
    params=generate_params,
    api_client=client
)

You can directly query generate_text using the ModelInference object.

deployed_model.generate_text(prompt="Example prompt")

Generate methods#

A detailed explanation of the available generate methods, with their exact parameters, can be found in the ModelInference class documentation.

With the previously created deployed_model object, you can generate a text stream (a generator) using the defined inference parameters and the generate_text_stream() method.

for token in deployed_model.generate_text_stream(prompt=input_prompt):
    print(token, end="")
'$10 Powerchill Leggings'
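The streaming pattern above can be sketched offline with a stand-in generator (a hypothetical substitute for a live deployment's generate_text_stream()): print each token as it arrives and accumulate the tokens into the full completion.

```python
# Stand-in for deployed_model.generate_text_stream(prompt=...):
# a generator yielding tokens one at a time (illustrative output only).
def fake_text_stream():
    for token in ["$10", " Powerchill", " Leggings"]:
        yield token

# Print each token as it arrives and collect it, so the complete
# generated text is available after the stream is exhausted.
chunks = []
for token in fake_text_stream():
    print(token, end="")
    chunks.append(token)

full_text = "".join(chunks)
print()
print(full_text)  # → $10 Powerchill Leggings
```

The same loop body works against a real stream; only the generator source changes.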

You can also receive a more detailed result with generate().

details = deployed_model.generate(prompt=input_prompt, params=generate_params)
print(details)
{
    'model_id': 'google/flan-t5-xl',
    'created_at': '2023-11-17T15:32:57.401Z',
    'results': [
        {
        'generated_text': '$10 Powerchill Leggings',
        'generated_token_count': 8,
        'input_token_count': 73,
        'stop_reason': 'eos_token'
        }
    ],
    'system': {'warnings': []}
}
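The response above is a plain Python dict; a minimal sketch of pulling out the generated text and token counts (using the example response shown, with one entry in 'results' per prompt):

```python
# Example response returned by generate() (copied from above).
details = {
    'model_id': 'google/flan-t5-xl',
    'created_at': '2023-11-17T15:32:57.401Z',
    'results': [
        {
            'generated_text': '$10 Powerchill Leggings',
            'generated_token_count': 8,
            'input_token_count': 73,
            'stop_reason': 'eos_token'
        }
    ],
    'system': {'warnings': []}
}

# Extract the text and the token accounting, e.g. for logging.
result = details['results'][0]
generated_text = result['generated_text']
total_tokens = result['input_token_count'] + result['generated_token_count']
print(generated_text)  # $10 Powerchill Leggings
print(total_tokens)    # 81
```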