Mila
Deep Neural Network Library

Mila::Dnn::TrainingConfig Struct Reference

Configuration for training a model.
Public Attributes

| Type        | Name                | Default | Description                                             |
|-------------|---------------------|---------|---------------------------------------------------------|
| size_t      | batch_size          | 16      | Batch size for training.                                |
| float       | beta1               | 0.9f    | Beta1 for the Adam optimizer.                           |
| float       | beta2               | 0.999f  | Beta2 for the Adam optimizer.                           |
| std::string | checkpoint_dir      | ""      | Directory to save checkpoints.                          |
| size_t      | early_stopping      | 0       | Stop after N epochs with no improvement (0 = disabled). |
| size_t      | epochs              | 10      | Number of epochs to train.                              |
| float       | epsilon             | 1e-8f   | Epsilon for the Adam optimizer.                         |
| float       | learning_rate       | 1e-3f   | Learning rate for optimization.                         |
| bool        | save_best_only      | true    | Save only the best model.                               |
| size_t      | validation_interval | 1       | Validate every N epochs.                                |
| bool        | verbose             | true    | Print training progress.                                |
| float       | weight_decay        | 0.0f    | Weight decay (L2 regularization).                       |
Detailed Description

Configuration for training a model.

Member Data Documentation

size_t Mila::Dnn::TrainingConfig::batch_size = 16
    Batch size for training.

float Mila::Dnn::TrainingConfig::beta1 = 0.9f
    Beta1 for the Adam optimizer.

float Mila::Dnn::TrainingConfig::beta2 = 0.999f
    Beta2 for the Adam optimizer.

std::string Mila::Dnn::TrainingConfig::checkpoint_dir = ""
    Directory to save checkpoints.

size_t Mila::Dnn::TrainingConfig::early_stopping = 0
    Stop after N epochs with no improvement (0 = disabled).

size_t Mila::Dnn::TrainingConfig::epochs = 10
    Number of epochs to train.

float Mila::Dnn::TrainingConfig::epsilon = 1e-8f
    Epsilon for the Adam optimizer.
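For reference, beta1, beta2, and epsilon enter the standard Adam update (with eta the learning_rate, g_t the gradient, and theta the parameters) as:

```latex
m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2
```
```latex
\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad
\theta_t = \theta_{t-1} - \eta\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```

beta1 and beta2 control the decay of the first- and second-moment estimates, and epsilon guards the division against a near-zero second moment.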
float Mila::Dnn::TrainingConfig::learning_rate = 1e-3f
    Learning rate for optimization.

bool Mila::Dnn::TrainingConfig::save_best_only = true
    Save only the best model.

size_t Mila::Dnn::TrainingConfig::validation_interval = 1
    Validate every N epochs.
bool Mila::Dnn::TrainingConfig::verbose = true
    Print training progress.

float Mila::Dnn::TrainingConfig::weight_decay = 0.0f
    Weight decay (L2 regularization).
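In the common coupled L2 formulation, a nonzero weight_decay adds the penalty term to the gradient before the Adam moment updates (whether Mila couples it this way or applies decoupled, AdamW-style decay is not stated here):

```latex
g_t \leftarrow g_t + \lambda\, \theta_{t-1}, \qquad \lambda = \texttt{weight\_decay}
```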