Tune Experiment¶
TuneExperiment¶
- class ibm_watsonx_ai.experiment.fm_tune.TuneExperiment(credentials, project_id=None, space_id=None, verify=None)[source]¶
Bases:
BaseExperiment
The TuneExperiment class for tuning models with prompts.
- Parameters:
credentials (Credentials or dict) – credentials for the Watson Machine Learning instance
project_id (str, optional) – ID of the Watson Studio project
space_id (str, optional) – ID of the Watson Studio space
verify (bool or str, optional) –
You can pass one of the following as verify:
the path to a CA_BUNDLE file
the path to a directory with certificates of trusted CAs
True - default path to truststore will be taken
False - no verification will be made
Example:
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(
    credentials=Credentials(...),
    project_id="...",
    space_id="...")
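A minimal sketch of passing the verify options listed above (the CA bundle path is a hypothetical placeholder):

from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.experiment import TuneExperiment

# disable certificate verification (not recommended for production)
experiment = TuneExperiment(
    credentials=Credentials(...),
    project_id="...",
    verify=False)

# or point verification at a CA bundle file (hypothetical path)
experiment = TuneExperiment(
    credentials=Credentials(...),
    project_id="...",
    verify="/path/to/ca_bundle.pem")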
- fine_tuner(name, task_id, base_model=None, description=None, num_epochs=None, learning_rate=None, batch_size=None, max_seq_length=None, accumulate_steps=None, verbalizer=None, response_template=None, gpu=None, auto_update_model=True, group_by_name=False)[source]¶
Initialize a FineTuner module.
- Parameters:
name (str) – name for the FineTuner
base_model (str) – model id of the base model for this fine-tuning.
task_id (str) – task that is targeted for this model.
description (str, optional) – description
num_epochs (int, optional) – number of epochs to run during fine-tuning; this affects the quality of the trained model. Possible values: 1 ≤ value ≤ 50, default value: 20
learning_rate (float, optional) – learning rate to be used during tuning. Possible values: 0.01 ≤ value ≤ 0.5, default value: 0.3
batch_size (int, optional) – the number of samples processed before the model is updated. Possible values: 1 ≤ value ≤ 16, default value: 16
max_seq_length (int, optional) – maximum sequence length, in tokens, considered during tuning
accumulate_steps (int, optional) – number of steps to be used for gradient accumulation. Gradient accumulation collects gradients over the configured number of steps instead of updating the model variables at every step, and then applies the accumulated update. This can be used to overcome a small batch size limitation and is often discussed in terms of the “effective batch size” (see the note after this parameter list). Possible values: 1 ≤ value ≤ 128, default value: 16
verbalizer (str, optional) – Verbalizer template to be used for formatting data at train and inference time.
response_template (str, optional) – Separator for the prediction/response in the single sequence to train on completions only.
gpu (dict, optional) – The name and number of GPUs used for the FineTuning job.
auto_update_model (bool, optional) – defines whether the model should be updated automatically, default value: True
group_by_name (bool, optional) – defines whether tunings should be grouped by name, default value: False
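Gradient accumulation effectively multiplies the per-step batch size. The snippet below is only an illustrative sketch of that relationship; the values are arbitrary, not defaults:

# effective batch size with gradient accumulation:
# the optimizer applies an update only after `accumulate_steps` forward/backward passes
batch_size = 16       # samples per forward/backward pass
accumulate_steps = 8  # steps accumulated before each update

effective_batch_size = batch_size * accumulate_steps  # 128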
Examples
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(...)
fine_tuner = experiment.fine_tuner(
    name="fine-tuning name",
    base_model='bigscience/bloom-560m',
    task_id="generation",
    num_epochs=3,
    learning_rate=0.2,
    batch_size=5,
    max_seq_length=1024,
    accumulate_steps=5,
    verbalizer='### Input: {{input}} \n\n### Response: {{output}}',
    response_template='\n### Response:',
    auto_update_model=False)
- prompt_tuner(name, task_id, description=None, base_model=None, accumulate_steps=None, batch_size=None, init_method=None, init_text=None, learning_rate=None, max_input_tokens=None, max_output_tokens=None, num_epochs=None, verbalizer=None, tuning_type=None, auto_update_model=True, group_by_name=False)[source]¶
Initialize a PromptTuner module.
- Parameters:
name (str) – name for the PromptTuner
task_id (str) –
task that is targeted for this model. Example: experiment.Tasks.CLASSIFICATION
Possible values:
experiment.Tasks.CLASSIFICATION: ‘classification’ (default)
experiment.Tasks.QUESTION_ANSWERING: ‘question_answering’
experiment.Tasks.SUMMARIZATION: ‘summarization’
experiment.Tasks.RETRIEVAL_AUGMENTED_GENERATION: ‘retrieval_augmented_generation’
experiment.Tasks.GENERATION: ‘generation’
experiment.Tasks.CODE_GENERATION_AND_CONVERSION: ‘code’
experiment.Tasks.EXTRACTION: ‘extraction’
description (str, optional) – description
base_model (str, optional) – model ID of the base model for this prompt tuning. Example: google/flan-t5-xl
accumulate_steps (int, optional) – number of steps to be used for gradient accumulation. Gradient accumulation collects gradients over the configured number of steps instead of updating the model variables at every step, and then applies the accumulated update. This can be used to overcome a small batch size limitation and is often discussed in terms of the “effective batch size”. Possible values: 1 ≤ value ≤ 128, default value: 16
batch_size (int, optional) – The batch size is the number of samples processed before the model is updated. Possible values: 1 ≤ value ≤ 16, default value: 16
init_method (str, optional) – initialization method for the prompt vectors; the text method requires init_text to be set. Allowable values: [random, text], default value: random
init_text (str, optional) – initialization text to be used if init_method is set to text, otherwise this will be ignored.
learning_rate (float, optional) – learning rate to be used while tuning prompt vectors. Possible values: 0.01 ≤ value ≤ 0.5, default value: 0.3
max_input_tokens (int, optional) – maximum length of input tokens being considered. Possible values: 1 ≤ value ≤ 256, default value: 256
max_output_tokens (int, optional) – maximum length of output tokens being predicted. Possible values: 1 ≤ value ≤ 128, default value: 128
num_epochs (int, optional) – number of epochs to tune the prompt vectors, this affects the quality of the trained model. Possible values: 1 ≤ value ≤ 50, default value: 20
verbalizer (str, optional) – verbalizer template to be used for formatting data at train and inference time. The template may use brackets to indicate where fields from the data model should be rendered, for example “{{input}}” for the raw input text. Default value: Input: {{input}} Output:
tuning_type (str, optional) – type of Peft (Parameter-Efficient Fine-Tuning) config to build. Allowable values: [experiment.PromptTuningTypes.PT], default value: experiment.PromptTuningTypes.PT
auto_update_model (bool, optional) – defines whether the model should be updated automatically, default value: True
group_by_name (bool, optional) – defines whether tunings should be grouped by name, default value: False
Examples
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(...)
prompt_tuner = experiment.prompt_tuner(
    name="prompt tuning name",
    task_id=experiment.Tasks.CLASSIFICATION,
    base_model='google/flan-t5-xl',
    accumulate_steps=32,
    batch_size=16,
    learning_rate=0.2,
    max_input_tokens=256,
    max_output_tokens=2,
    num_epochs=6,
    tuning_type=experiment.PromptTuningTypes.PT,
    verbalizer="Extract the satisfaction from the comment. Return simple '1' for satisfied customer or '0' for unsatisfied. Input: {{input}} Output: ",
    auto_update_model=True)
- runs(*, filter)[source]¶
Get historical tuning runs with the name filter.
- Parameters:
filter (str) – tuning name used to filter which runs are fetched
Examples
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(...)
experiment.runs(filter='prompt tuning name').list()
Tune Runs¶
- class ibm_watsonx_ai.experiment.fm_tune.TuneRuns(client, filter=None, limit=50)[source]¶
Bases:
object
The TuneRuns class is used to work with historical PromptTuner and FineTuner runs.
- Parameters:
client (APIClient) – APIClient to handle service operations
filter (str, optional) – tuning name used to filter which runs are fetched
limit (int) – maximum number of records to be returned, default value: 50
- get_run_details(run_id=None, include_metrics=False)[source]¶
Get run details. If run_id is not supplied, the last run will be taken.
- Parameters:
run_id (str, optional) – ID of the run
include_metrics (bool, optional) – indicates whether to include metrics in the training details output
- Returns:
configuration parameters of the run
- Return type:
dict
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
experiment.runs.get_run_details(run_id='02bab973-ae83-4283-9d73-87b9fd462d35')
experiment.runs.get_run_details()
- get_tuner(run_id)[source]¶
Create an instance of PromptTuner or FineTuner based on a tuning run with a specific run_id.
- Parameters:
run_id (str) – ID of the run
- Returns:
prompt tuner | fine tuner object
- Return type:
PromptTuner | FineTuner class instance
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
historical_tuner = experiment.runs.get_tuner(run_id='02bab973-ae83-4283-9d73-87b9fd462d35')
- list()[source]¶
Lists historical runs with their status. If many runs are stored in the service, fetching all the information might take a long time. If no limit is set, the last 50 records are returned.
- Returns:
Pandas DataFrame with run IDs and status
- Return type:
pandas.DataFrame
Examples
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(...)
df = experiment.runs.list()
Prompt Tuner¶
- class ibm_watsonx_ai.foundation_models.PromptTuner(name, task_id, *, description=None, base_model=None, accumulate_steps=None, batch_size=None, init_method=None, init_text=None, learning_rate=None, max_input_tokens=None, max_output_tokens=None, num_epochs=None, verbalizer=None, tuning_type=None, auto_update_model=True, group_by_name=None)[source]¶
Bases:
object
- cancel_run(hard_delete=False)[source]¶
Cancel or delete a Prompt Tuning run.
- Parameters:
hard_delete (bool, optional) – if True, a completed or canceled prompt tuning run is deleted; if False, the current run is canceled. Default: False
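Example (a minimal sketch following the pattern of the other examples in this reference, assuming a run was started in background mode):

from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)
prompt_tuner.run(..., background_mode=True)

# cancel the in-progress run
prompt_tuner.cancel_run()

# or delete a completed/canceled run entirely
prompt_tuner.cancel_run(hard_delete=True)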
- get_data_connections()[source]¶
Create DataConnection objects for further use (for example, to handle data storage connections).
- Returns:
list of DataConnections
- Return type:
list[‘DataConnection’]
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)
prompt_tuner.run(...)

data_connections = prompt_tuner.get_data_connections()
- get_model_id()[source]¶
Get the model ID.
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)
prompt_tuner.run(...)

prompt_tuner.get_model_id()
- get_params()[source]¶
Get configuration parameters of PromptTuner.
- Returns:
PromptTuner parameters
- Return type:
dict
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)
prompt_tuner.get_params()

# Result:
#
# {'base_model': {'name': 'google/flan-t5-xl'},
#  'task_id': 'summarization',
#  'name': 'Prompt Tuning of Flan T5 model',
#  'auto_update_model': False,
#  'group_by_name': False}
- get_run_details(include_metrics=False)[source]¶
Get details of a prompt tuning run.
- Parameters:
include_metrics (bool, optional) – indicates whether to include metrics in the training details output
- Returns:
details of the prompt tuning
- Return type:
dict
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)
prompt_tuner.run(...)

prompt_tuner.get_run_details()
- get_run_status()[source]¶
Check the status/state of an initialized prompt tuning run if it was run in background mode.
- Returns:
status of the Prompt Tuning run
- Return type:
str
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)
prompt_tuner.run(...)

prompt_tuner.get_run_status()

# Result:
# 'completed'
- plot_learning_curve()[source]¶
Plot learning curves.
Note
Available only for Jupyter notebooks.
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)
prompt_tuner.run(...)

prompt_tuner.plot_learning_curve()
- run(training_data_references, training_results_reference=None, background_mode=False)[source]¶
Run a prompt tuning process of a foundation model on top of the training data referenced by DataConnection.
- Parameters:
training_data_references (list[DataConnection]) – data storage connection details to inform where the training data is stored
training_results_reference (DataConnection, optional) – data storage connection details to store pipeline training results
background_mode (bool, optional) – indicates whether the tuning run executes in the background (asynchronously) or blocks until it finishes (synchronously)
- Returns:
run details
- Return type:
dict
Example:
from ibm_watsonx_ai.experiment import TuneExperiment
from ibm_watsonx_ai.helpers import DataConnection, S3Location

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)

prompt_tuner.run(
    training_data_references=[DataConnection(
        connection_asset_id=connection_id,
        location=S3Location(
            bucket='prompt_tuning_data',
            path='pt_train_data.json')
    )],
    background_mode=False)
- summary(scoring='loss')[source]¶
Print the details of PromptTuner models (prompt-tuned models).
- Parameters:
scoring (string, optional) – scoring metric used to sort the models; defaults to loss when not provided
- Returns:
computed models and metrics
- Return type:
pandas.DataFrame
Example:
from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(...)
prompt_tuner.run(...)

prompt_tuner.summary()

# Result:
#                      Enhancements          Base model  ...      loss
# Model Name
# Prompt_tuned_M_1  [prompt_tuning]   google/flan-t5-xl  ...  0.449197
Enums¶
- class ibm_watsonx_ai.foundation_models.utils.enums.PromptTuningTypes[source]¶
Bases:
object
- PT = 'prompt_tuning'¶
- class ibm_watsonx_ai.foundation_models.utils.enums.PromptTuningInitMethods[source]¶
Bases:
object
Supported methods for prompt initialization in prompt tuning.
- RANDOM = 'random'¶
- TEXT = 'text'¶
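These constants map to the init_method, init_text, and tuning_type parameters of prompt_tuner. A minimal sketch of passing them explicitly, following the patterns shown above (the init_text string is illustrative):

from ibm_watsonx_ai.experiment import TuneExperiment
from ibm_watsonx_ai.foundation_models.utils.enums import (
    PromptTuningInitMethods,
    PromptTuningTypes,
)

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(
    name="prompt tuning name",
    task_id=experiment.Tasks.CLASSIFICATION,
    base_model='google/flan-t5-xl',
    init_method=PromptTuningInitMethods.TEXT,  # == 'text'
    init_text="Classify the customer comment as satisfied or unsatisfied.",
    tuning_type=PromptTuningTypes.PT)  # == 'prompt_tuning'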
- class ibm_watsonx_ai.foundation_models.utils.enums.TuneExperimentTasks(value)[source]¶
Bases:
Enum
An enumeration.
- CLASSIFICATION = 'classification'¶
- CODE_GENERATION_AND_CONVERSION = 'code'¶
- EXTRACTION = 'extraction'¶
- GENERATION = 'generation'¶
- QUESTION_ANSWERING = 'question_answering'¶
- RETRIEVAL_AUGMENTED_GENERATION = 'retrieval_augmented_generation'¶
- SUMMARIZATION = 'summarization'¶
- class PromptTunableModels¶
Bases:
StrEnum
This represents a dynamically generated Enum for Prompt Tunable Models.
Example of getting PromptTunableModels:
# GET PromptTunableModels ENUM
client.foundation_models.PromptTunableModels

# PRINT dict of Enums
client.foundation_models.PromptTunableModels.show()
Example Output:
{'FLAN_T5_XL': 'google/flan-t5-xl',
 'GRANITE_13B_INSTRUCT_V2': 'ibm/granite-13b-instruct-v2',
 'LLAMA_2_13B_CHAT': 'meta-llama/llama-2-13b-chat'}
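Because PromptTunableModels is a StrEnum, its members are themselves strings and can be passed wherever a base model ID is expected. A minimal sketch, assuming an initialized APIClient named client and the members listed in the example output above:

from ibm_watsonx_ai.experiment import TuneExperiment

experiment = TuneExperiment(credentials, ...)
prompt_tuner = experiment.prompt_tuner(
    name="prompt tuning name",
    task_id=experiment.Tasks.CLASSIFICATION,
    # equivalent to base_model='google/flan-t5-xl'
    base_model=client.foundation_models.PromptTunableModels.FLAN_T5_XL)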