Parameters to set when running a fine-tuning job with LoRA/QLoRA.

interface FineTuningPeftParameters {
    lora_alpha?: number;
    lora_dropout?: number;
    rank?: number;
    target_modules?: string[];
    type?: string;
}

Properties

lora_alpha?: number

This field must not be set when creating a fine-tuning job with InstructLab.

The alpha parameter for LoRA scaling.

lora_dropout?: number

This field must not be set when creating a fine-tuning job with InstructLab.

The dropout probability for LoRA layers.

rank?: number

This field must not be set when creating a fine-tuning job with InstructLab.

The LoRA attention dimension (the "rank").

target_modules?: string[]

This field must not be set when creating a fine-tuning job with InstructLab.

The names of the modules to apply the adapter to. If specified, only the modules with these names are replaced; specify module names according to the model architecture. If the value is ["all-linear"], LoRA selects all linear and Conv1D modules of the model architecture, except for the output layer.

type?: string

This field must not be set when creating a fine-tuning job with InstructLab.

The type specification for a LoRA or QLoRA fine-tuning job. If type is set to "none", no other parameters in this object need to be specified.
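As a sketch of how these properties fit together, the following constructs a parameter object for a LoRA fine-tuning job. The interface is restated locally for self-containment; the specific values (rank 8, alpha 32, etc.) are illustrative assumptions, not recommended defaults.

```typescript
// Mirrors the documented FineTuningPeftParameters interface.
interface FineTuningPeftParameters {
    lora_alpha?: number;
    lora_dropout?: number;
    rank?: number;
    target_modules?: string[];
    type?: string;
}

// Example LoRA configuration (illustrative values).
// lora_alpha is commonly chosen as a multiple of the rank, and
// ["all-linear"] targets all linear and Conv1D modules of the
// model architecture except the output layer.
const peftParameters: FineTuningPeftParameters = {
    type: "lora",
    rank: 8,
    lora_alpha: 32,
    lora_dropout: 0.05,
    target_modules: ["all-linear"],
};

console.log(JSON.stringify(peftParameters));
```

Note that when type is "none", the object can be left otherwise empty, since all other properties are optional.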