Properties that control prompt tuning.

interface PromptTuning {
    accumulate_steps?: number;
    base_model?: WatsonXAI.BaseModel;
    batch_size?: number;
    init_method?: string;
    init_text?: string;
    learning_rate?: number;
    max_input_tokens?: number;
    max_output_tokens?: number;
    num_epochs?: number;
    task_id: string;
    tuning_type?: string;
    verbalizer?: string;
}
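
A minimal sketch of a filled-in PromptTuning object follows. All values are illustrative assumptions rather than defaults, the task_id and tuning_type values are placeholders, and base_model is assumed to be an object carrying the base model id described below.

const promptTuning: PromptTuning = {
    task_id: 'classification',                      // required; illustrative task id
    base_model: { model_id: 'google/flan-t5-xl' },  // assumed shape: object holding the base model id
    tuning_type: 'prompt_tuning',                   // assumed PEFT config type
    init_method: 'text',
    init_text: 'Classify the sentiment of the text.',
    num_epochs: 20,
    learning_rate: 0.3,
    batch_size: 16,
    accumulate_steps: 4,                            // effective batch size: 16 * 4 = 64
    max_input_tokens: 256,
    max_output_tokens: 20,
    verbalizer: 'Input: {{input}} Output:',         // assumed field name and placeholder syntax
};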

Properties

accumulate_steps?: number

Number of steps to be used for gradient accumulation. Gradient accumulation collects gradients over the configured number of steps and applies the accumulated update to the model variables once, instead of updating them at every step. It can be used to work around the limitations of a small batch size; the number of samples contributing to each update is often referred to as the "effective batch size".
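
For example, with the illustrative values batch_size: 8 and accumulate_steps: 4, gradients from 8 * 4 = 32 samples contribute to each model update, so the effective batch size is 32.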

base_model?: WatsonXAI.BaseModel

The model id of the base model for this job.

batch_size?: number

The number of samples processed before the model is updated.

init_method?: string

Initialization method for the prompt vectors. The text method requires init_text to be set.

init_text?: string

Initialization text to be used if init_method is set to text; otherwise this value is ignored.
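
For example, setting init_method: 'text' together with init_text: 'Classify the sentiment of the following review.' (an illustrative value) initializes the prompt vectors from that text; with any other init_method the init_text value is ignored.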

learning_rate?: number

Learning rate to be used while tuning prompt vectors.

max_input_tokens?: number

Maximum number of input tokens to be considered.

max_output_tokens?: number

Maximum number of output tokens to be predicted.

num_epochs?: number

Number of epochs used to tune the prompt vectors; this affects the quality of the trained model.

task_id: string

The ID of the task that is targeted by this model.

tuning_type?: string

Type of PEFT (Parameter-Efficient Fine-Tuning) configuration to build.

verbalizer?: string

Verbalizer template to be used for formatting data at training and inference time. This template may use brackets to indicate where fields from the data model must be rendered.
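
For example, a verbalizer such as 'Classify the sentiment of the text: {{input}}' would render each record's input field into the prompt at that position; the field name and the double-brace placeholder syntax are illustrative assumptions that depend on the training data.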