accumulate_steps (optional): Number of steps to be used for gradient accumulation. Gradient accumulation is a method of collecting gradients for a configured number of steps, instead of updating the model variables at every step, and then applying the accumulated update to the model variables. It can be used as a tool to overcome small batch size limitations and is often discussed in conjunction with the 'effective batch size'.
base_model (optional): The model id of the base model for this prompt tuning.
batch_size (optional): The number of samples processed before the model is updated.
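To illustrate the 'effective batch size' mentioned under accumulate_steps, here is a quick sketch (the values are arbitrary examples, not defaults):

    # Gradient accumulation: collect gradients for accumulate_steps steps,
    # then apply one update, so each update reflects more samples.
    batch_size = 16        # samples processed per step
    accumulate_steps = 4   # steps of gradient collection per model update
    effective_batch_size = batch_size * accumulate_steps  # 64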
Optional
init_The text
method requires init_text
to be set.
init_text (optional): Initialization text to be used if init_method is set to text; otherwise it is ignored.
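Since these two initialization properties work together, a minimal sketch (the initialization text is an arbitrary placeholder):

    # init_text is only honored when init_method is set to "text".
    prompt_tuning = {
        "init_method": "text",
        "init_text": "Classify the sentiment of the following review.",
    }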
learning_rate (optional): Learning rate to be used while tuning prompt vectors.
max_input_tokens (optional): Maximum length of input tokens being considered.
max_output_tokens (optional): Maximum length of output tokens being predicted.
num_epochs (optional): Number of epochs to tune the prompt vectors; this affects the quality of the trained model.
task_id: The task that is targeted for this model.
tuning_type (optional): Type of PEFT (Parameter-Efficient Fine-Tuning) config to build.
verbalizer (optional): Verbalizer template to be used for formatting data at train and inference time. This template may use brackets to indicate where fields from the data model must be rendered.
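For instance, a verbalizer might look like the following sketch; the double-brace syntax and the field name input are assumptions for illustration, not confirmed by this reference:

    # A verbalizer that renders the data model's "input" field into the prompt.
    # The {{input}} placeholder syntax is an illustrative assumption.
    verbalizer = "Input: {{input}} Output:"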
Together, these properties control the prompt tuning.
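As a combined illustration, one possible set of these properties expressed as a Python dict (a hedged sketch, not a definitive configuration; the model id, task name, tuning type value, and all numeric values are arbitrary assumptions):

    # Sketch of a prompt_tuning configuration using the properties above.
    # All values are illustrative; consult the product documentation for
    # allowable values and defaults.
    prompt_tuning = {
        "base_model": "google/flan-t5-xl",   # assumed base model id
        "task_id": "classification",         # assumed task name
        "tuning_type": "prompt_tuning",      # assumed tuning type value
        "num_epochs": 20,
        "learning_rate": 0.3,
        "batch_size": 16,
        "accumulate_steps": 4,               # effective batch size of 64
        "max_input_tokens": 256,
        "max_output_tokens": 128,
        "init_method": "text",
        "init_text": "Classify the sentiment of the following review.",
        "verbalizer": "Input: {{input}} Output:",
    }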