Plotter
pyjet.callbacks.Plotter(monitor, scale='linear', plot_during_train=True, save_to_file=None, block_on_end=True)
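Only the constructor signature is documented above; a minimal usage sketch follows. The assumption that Plotter draws the monitored metric each epoch, the `'loss'` metric name, and the filename are illustrative, not confirmed API details.

```python
# A minimal usage sketch (assumptions noted above).
from pyjet.callbacks import Plotter

plotter = Plotter(monitor='loss',           # metric to plot
                  scale='linear',           # y-axis scale
                  plot_during_train=True,   # update the figure while training
                  save_to_file='loss.png',  # also write the figure to disk
                  block_on_end=False)       # don't block the process when training ends

model.fit(X_train, Y_train, callbacks=[plotter])
```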
MetricLogger
pyjet.callbacks.MetricLogger(log_fname)
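Only the constructor signature is documented above; a minimal usage sketch follows, assuming the callback appends per-epoch metric values to the given file (that behavior and the filename are assumptions).

```python
# A minimal usage sketch (log file contents are an assumption).
from pyjet.callbacks import MetricLogger

logger = MetricLogger(log_fname='metrics.log')
model.fit(X_train, Y_train, callbacks=[logger])
```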
ReduceLROnPlateau
pyjet.callbacks.ReduceLROnPlateau(optimizer, monitor, monitor_val=True, factor=0.1, patience=10, verbose=0, mode='auto', epsilon=0.0001, cooldown=0, min_lr=0)
Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity, and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.
Example
```python
optimizer = optim.SGD(model.parameters(), lr=0.1)
# monitor_val defaults to True, so monitor='loss' tracks the validation loss.
reduce_lr = ReduceLROnPlateau(optimizer, monitor='loss', factor=0.2,
                              patience=5, min_lr=0.001)
model.fit(X_train, Y_train, callbacks=[reduce_lr])
```
Arguments
- optimizer: the pytorch optimizer to modify
- monitor: quantity to be monitored.
- monitor_val: whether or not to monitor the validation quantity.
- factor: factor by which the learning rate will be reduced. new_lr = lr * factor
- patience: number of epochs with no improvement after which learning rate will be reduced.
- verbose: int. 0: quiet, 1: update messages.
- mode: one of {auto, min, max}. In `min` mode, the lr will be reduced when the quantity monitored has stopped decreasing; in `max` mode it will be reduced when the quantity monitored has stopped increasing; in `auto` mode, the direction is automatically inferred from the name of the monitored quantity.
- epsilon: threshold for measuring the new optimum, to only focus on significant changes.
- cooldown: number of epochs to wait before resuming normal operation after lr has been reduced.
- min_lr: lower bound on the learning rate.
LRScheduler
pyjet.callbacks.LRScheduler(optimizer, schedule, verbose=0)
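Only the constructor signature is documented above. A minimal sketch follows, assuming `schedule` follows the Keras LearningRateScheduler convention of a function mapping the epoch index to a learning rate; that convention is an assumption, not confirmed by the signature alone.

```python
# A minimal sketch (the schedule convention is an assumption).
import torch.optim as optim
from pyjet.callbacks import LRScheduler

optimizer = optim.SGD(model.parameters(), lr=0.1)

def step_decay(epoch):
    # Halve the base learning rate every 10 epochs.
    return 0.1 * (0.5 ** (epoch // 10))

scheduler = LRScheduler(optimizer, schedule=step_decay, verbose=1)
model.fit(X_train, Y_train, callbacks=[scheduler])
```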
Callback
pyjet.callbacks.Callback()
Abstract base class used to build new callbacks.
Properties
- params: dict. Training parameters (eg. verbosity, batch size, number of epochs...).
- model: instance of the model being trained.

The `logs` dictionary that callback methods take as argument will contain keys for quantities relevant to the current batch or epoch. Currently, the `fit()` method of the model will include the following quantities in the `logs` that it passes to its callbacks:
- on_epoch_end: logs include `acc` and `loss`, and optionally include `val_loss` (if validation is enabled in `fit`) and `val_acc` (if validation and accuracy monitoring are enabled).
- on_batch_begin: logs include `size`, the number of samples in the current batch.
- on_batch_end: logs include `loss`, and optionally `acc` (if accuracy monitoring is enabled).
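A minimal custom-callback sketch, assuming pyjet mirrors the Keras callback interface described above (`on_epoch_end` receiving the epoch index and the `logs` dict); the class name is illustrative.

```python
# A minimal sketch of a custom callback (interface assumptions noted above).
from pyjet.callbacks import Callback

class EpochPrinter(Callback):
    """Prints the tracked loss (and validation loss, when present) each epoch."""

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        msg = "epoch %d: loss=%s" % (epoch, logs.get('loss'))
        if 'val_loss' in logs:
            msg += " val_loss=%s" % logs['val_loss']
        print(msg)

model.fit(X_train, Y_train, callbacks=[EpochPrinter()])
```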
ModelCheckpoint
pyjet.callbacks.ModelCheckpoint(filepath, monitor, monitor_val=True, verbose=0, save_best_only=False, mode='auto', period=1)
Save the model after every epoch.
`filepath` can contain named formatting options, which will be filled with the value of `epoch` and keys in `logs` (passed in `on_epoch_end`).
For example: if `filepath` is `weights.{epoch:02d}-{val_loss:.2f}.hdf5`, then the model checkpoints will be saved with the epoch number and the validation loss in the filename.
Arguments
- filepath: string, path to save the model file.
- monitor: quantity to monitor.
- monitor_val: whether or not to monitor the validation quantity.
- verbose: verbosity mode, 0 or 1.
- save_best_only: if `save_best_only=True`, the latest best model according to the quantity monitored will not be overwritten.
- mode: one of {auto, min, max}. If `save_best_only=True`, the decision to overwrite the current save file is made based on either the maximization or the minimization of the monitored quantity. For `val_acc`, this should be `max`, for `val_loss` this should be `min`, etc. In `auto` mode, the direction is automatically inferred from the name of the monitored quantity.
- period: Interval (number of epochs) between checkpoints.
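A minimal usage sketch: keep only the best checkpoint as measured by the monitored quantity on the validation set. The `best_model.state` filename and the `'loss'` metric name are illustrative assumptions; pyjet's on-disk format is not specified here.

```python
# A minimal usage sketch (filename and metric name are assumptions).
from pyjet.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('best_model.state',
                             monitor='loss',       # with monitor_val=True, tracks the validation loss
                             monitor_val=True,
                             save_best_only=True,  # only overwrite when the monitored value improves
                             mode='min',
                             verbose=1)
model.fit(X_train, Y_train, callbacks=[checkpoint])
```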