Trainer
supervised_trainer
class seq2seq.trainer.supervised_trainer.SupervisedTrainer(expt_dir='experiment', loss=NLLLoss(), batch_size=64, random_seed=None, checkpoint_every=100, print_every=100)

The SupervisedTrainer class helps set up a training framework in a supervised setting; see the construction sketch after the parameter list below.
Parameters: - expt_dir (str, optional) – directory in which to store details of the experiment; by default a folder named experiment is created in the current directory (default: experiment)
- loss (seq2seq.loss.loss.Loss, optional) – loss function for training (default: seq2seq.loss.NLLLoss)
- batch_size (int, optional) – batch size for the experiment (default: 64)
- random_seed (int, optional) – seed for the random number generator (default: None)
- checkpoint_every (int, optional) – number of batches between checkpoints (default: 100)
- print_every (int, optional) – number of batches between loss log messages (default: 100)
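A minimal construction sketch. It assumes the package re-exports SupervisedTrainer and NLLLoss from seq2seq.trainer and seq2seq.loss, as in the library's example scripts; the directory path and interval values are illustrative, not defaults:

    from seq2seq.loss import NLLLoss
    from seq2seq.trainer import SupervisedTrainer

    # Checkpoints are written to expt_dir every `checkpoint_every` batches;
    # the average loss is logged every `print_every` batches.
    trainer = SupervisedTrainer(expt_dir='./checkpoints',  # illustrative path
                                loss=NLLLoss(),
                                batch_size=32,
                                checkpoint_every=50,
                                print_every=10)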
train(model, data, num_epochs=5, resume=False, dev_data=None, optimizer=None, teacher_forcing_ratio=0)

Run training for a given model; see the usage sketch after the parameter list below.
Parameters: - model (seq2seq.models) – model to run training on, if resume=True, it would be overwritten by the model loaded from the latest checkpoint.
- data (seq2seq.dataset.dataset.Dataset) – dataset object to train on
- num_epochs (int, optional) – number of epochs to run (default 5)
- resume (bool, optional) – resume training with the latest checkpoint, (default False)
- dev_data (seq2seq.dataset.dataset.Dataset, optional) – dev Dataset (default None)
- optimizer (seq2seq.optim.Optimizer, optional) – optimizer for training (default: Optimizer(pytorch.optim.Adam, max_grad_norm=5))
- teacher_forcing_ratio (float, optional) – teacher forcing ratio, i.e. the probability of using teacher forcing during decoding (default: 0)
Returns: trained model.
Return type: seq2seq.models
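A usage sketch for train, assuming a trainer built as above. The names seq2seq_model, train_set, and dev_set are hypothetical placeholders for a model and Dataset objects prepared elsewhere; the Optimizer call mirrors the default configuration shown in the parameter list, but treat the exact wrapper signature as an assumption:

    import torch
    from seq2seq.optim import Optimizer

    # `seq2seq_model`, `train_set`, and `dev_set` are placeholders for a
    # seq2seq.models model and seq2seq.dataset Dataset objects built elsewhere.
    optimizer = Optimizer(torch.optim.Adam(seq2seq_model.parameters()),
                          max_grad_norm=5)

    model = trainer.train(seq2seq_model,
                          train_set,
                          num_epochs=6,
                          dev_data=dev_set,           # held-out data for evaluation
                          optimizer=optimizer,
                          teacher_forcing_ratio=0.5,  # teacher forcing ~50% of the time
                          resume=False)

With resume=True, the trainer instead reloads the latest checkpoint from expt_dir and continues training from it, replacing the model argument as noted above.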