The parameters for the job. Note that if verbalizer is provided then response_template must also be provided (and vice versa). This field must not be set when creating a fine-tuning job with InstructLab.

interface FineTuningParameters {
    accumulate_steps?: number;
    base_model: WatsonXAI.BaseModel;
    batch_size?: number;
    gpu?: GPU;
    learning_rate?: number;
    max_seq_length?: number;
    num_epochs?: number;
    response_template?: string;
    task_id?: string;
    verbalizer?: string;
}
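
For illustration, a minimal sketch of a parameters object that satisfies this interface, assuming the interface is exported on the WatsonXAI namespace (as BaseModel is). The model ID, task ID, template strings, and the shapes of BaseModel and GPU used here are assumptions for the example, not values documented on this page; check the SDK's type definitions before use.

    import { WatsonXAI } from '@ibm-cloud/watsonx-ai';

    const parameters: WatsonXAI.FineTuningParameters = {
        // Assumed shape: BaseModel carries the base model's id.
        base_model: { model_id: 'example/base-model' },
        task_id: 'generation',       // hypothetical task id
        num_epochs: 3,
        learning_rate: 2e-5,
        batch_size: 8,
        accumulate_steps: 4,
        max_seq_length: 1024,
        // Assumed shape for GPU: a name and a count.
        gpu: { name: 'NVIDIA-A100', num: 1 },
        // verbalizer and response_template must be provided together:
        verbalizer: 'Input: {{input}}\n### Response:',
        response_template: '### Response:',
    };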

Properties

accumulate_steps?: number

The number of update steps to accumulate gradients for before performing a backward/update pass.
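
As a quick arithmetic illustration (general gradient-accumulation behavior, not specific to this SDK): weights are updated once every accumulate_steps batches, so the effective batch size per device is the product of the two.

    const batchSize = 8;        // per-device batch size
    const accumulateSteps = 4;  // batches accumulated before each update
    // One optimizer update happens per accumulateSteps batches, so:
    const effectiveBatchSize = batchSize * accumulateSteps; // 32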

base_model: WatsonXAI.BaseModel

The model id of the base model for this job.

batch_size?: number

The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for training.

gpu?: GPU

The name and number of GPUs used for the fine-tuning job.

learning_rate?: number

The initial learning rate for the AdamW optimizer.

max_seq_length?: number

The maximum sequence length, in tokens. Any sequence longer than this maximum is truncated.

num_epochs?: number

Total number of training epochs to perform.

response_template?: string

The separator that marks the prediction/response within a single training sequence, so that training is performed on completions only.
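
For example (the tag itself is illustrative): given a formatted training sequence, the trainer locates this template and computes loss only on the tokens that follow it. A sketch of the idea:

    // The response_template marks where the completion begins; only
    // tokens after it contribute to the training loss.
    const responseTemplate = '### Response:';
    const sequence = 'Summarize the docs.\n### Response: A short summary.';
    const completion = sequence.split(responseTemplate)[1]; // ' A short summary.'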

task_id?: string

The ID of the task that is targeted for this model.

verbalizer?: string

The verbalizer template used to format data at training and inference time.

This template may use brackets to indicate where fields from the data model must be rendered.
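
A sketch of such a template; the {{...}} bracket syntax and the field name input are assumptions for this example, not documented here. When training on completions only, the rendered output should presumably contain the response_template string so the trainer can find the completion boundary.

    // Hypothetical verbalizer: {{input}} marks where a field from each
    // training record is rendered. The trailing '### Response:' matches
    // the response_template in the sketch above.
    const verbalizer = 'Input: {{input}}\n### Response:';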