accumulate_steps (Optional)
Number of update steps to accumulate the gradients for before performing a backward/update pass.
base_model
The model id of the base model for this job.
batch_size (Optional)
The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for training.
gpu (Optional)
The name and number of GPUs used for the fine-tuning job.
learning_rate (Optional)
The initial learning rate for the AdamW optimizer.
max_seq_length (Optional)
Maximum sequence length in number of tokens. Any sequence beyond this maximum length is truncated.
num_epochs (Optional)
Total number of training epochs to perform.
response_template (Optional)
Separator for the prediction/response within the single training sequence, used to train on completions only.
task_id (Optional)
The task that is targeted for this model.
verbalizer (Optional)
Verbalizer template used to format data at train and inference time. The template may use brackets to indicate where fields from the data model must be rendered. This field must not be set when creating a fine-tuning job with InstructLab. An illustrative pairing with response_template is sketched below.
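For illustration only, a verbalizer and the response_template it pairs with might look like the following sketch. The {{input}} and {{output}} field names and the "### Response:" marker are assumptions chosen for this example, not values fixed by the API:

# Hypothetical verbalizer: {{input}} and {{output}} are assumed field names
# from the training data model; brackets mark where data fields are rendered.
verbalizer = "### Input: {{input}}\n\n### Response: {{output}}"

# Separator marking where the response begins in the rendered sequence,
# so that training computes the loss on the completion only.
response_template = "### Response:"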
These are the parameters for the job. Note that if verbalizer is provided then response_template must also be provided (and vice versa).
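Putting the fields together, a minimal parameters payload might look like the sketch below. All values are illustrative, the base model id is an assumption, and the exact shapes of the base_model and gpu fields may differ in your API version:

# Hypothetical fine-tuning parameters payload (Python dict, illustrative values).
parameters = {
    "base_model": "ibm/granite-13b-base",  # assumed model id
    "accumulate_steps": 4,                 # accumulate gradients over 4 steps
    "batch_size": 8,                       # per-device training batch size
    "gpu": {"num": 1},                     # number of GPUs (shape assumed)
    "learning_rate": 2e-5,                 # initial AdamW learning rate
    "max_seq_length": 1024,                # longer sequences are truncated
    "num_epochs": 3,                       # total training epochs
    "task_id": "generation",               # assumed task identifier
    # verbalizer and response_template must be provided together:
    "verbalizer": "### Input: {{input}}\n\n### Response: {{output}}",
    "response_template": "### Response:",
}

# With these values, each optimizer update sees an effective batch of
# batch_size * accumulate_steps = 8 * 4 = 32 samples per device.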