Mila
Deep Neural Network Library
Mila::Dnn::DropoutConfig Class Reference

Configuration class for Dropout module. More...

Inheritance diagram for Mila::Dnn::DropoutConfig:
Collaboration diagram for Mila::Dnn::DropoutConfig:

Public Member Functions

 DropoutConfig ()=default
 Default constructor.
 
 DropoutConfig (float probability)
 Constructor with dropout probability.
 
float getProbability () const
 Get the configured dropout probability.
 
bool scalesDuringInference () const
 Check if scaling during inference is enabled.
 
bool usesSameMaskPerBatch () const
 Check if the same mask is used for all elements in a batch.
 
void validate () const
 Validate configuration parameters.
 
DropoutConfig & withProbability (float probability)
 Configure the dropout probability.
 
DropoutConfig & withSameMaskPerBatch (bool use_same_mask_per_batch)
 Configure whether to use the same dropout mask for all elements in a batch.
 
DropoutConfig & withScalingDuringInference (bool scale_during_inference)
 Configure whether to apply scaling during inference.
 
- Public Member Functions inherited from Mila::Dnn::ComponentConfig
virtual ~ComponentConfig ()=default
 Virtual destructor to support proper polymorphic destruction.
 
const std::string & getName () const
 Gets the configured component name.
 
ComputePrecision::Policy getPrecision () const
 Gets the configured precision policy.
 
bool isTraining () const
 Gets the configured training mode.
 
template<typename Self >
auto & withName (this Self &&self, std::string name)
 Sets the name of the component with fluent interface.
 
template<typename Self >
auto & withPrecision (this Self &&self, ComputePrecision::Policy policy)
 Sets the compute precision policy with fluent interface.
 
template<typename Self >
auto & withTraining (this Self &&self, bool is_training)
 Sets the training mode with fluent interface.
 

Private Attributes

float probability_ { 0.5f }
 The probability of zeroing elements.
 
bool scale_during_inference_ { false }
 Whether to apply scaling during inference.
 
bool use_same_mask_per_batch_ { false }
 Whether to use the same mask for the entire batch.
 

Additional Inherited Members

- Protected Attributes inherited from Mila::Dnn::ComponentConfig
bool is_training_ = false
 Training mode flag, defaults to false (inference mode).
 
std::string name_ = "unnamed"
 Component name, defaults to "unnamed" if not explicitly set.
 
ComputePrecision::Policy precision_ = ComputePrecision::Policy::Auto
 Precision policy for computation, defaults to Auto.
 

Detailed Description

Configuration class for Dropout module.

Provides a type-safe fluent interface for configuring Dropout modules.

Constructor & Destructor Documentation

◆ DropoutConfig() [1/2]

Mila::Dnn::DropoutConfig::DropoutConfig ( )
default

Default constructor.

◆ DropoutConfig() [2/2]

Mila::Dnn::DropoutConfig::DropoutConfig ( float  probability)
inline explicit

Constructor with dropout probability.

Parameters
probability: The dropout probability (0.0 to 1.0)

Member Function Documentation

◆ getProbability()

float Mila::Dnn::DropoutConfig::getProbability ( ) const
inline

Get the configured dropout probability.

Returns
float The dropout probability

◆ scalesDuringInference()

bool Mila::Dnn::DropoutConfig::scalesDuringInference ( ) const
inline

Check if scaling during inference is enabled.

Returns
bool Whether scaling during inference is enabled

◆ usesSameMaskPerBatch()

bool Mila::Dnn::DropoutConfig::usesSameMaskPerBatch ( ) const
inline

Check if the same mask is used for all elements in a batch.

Returns
bool Whether the same mask is used per batch

◆ validate()

void Mila::Dnn::DropoutConfig::validate ( ) const
inline virtual

Validate configuration parameters.

Exceptions
std::invalid_argument: If validation fails

Reimplemented from Mila::Dnn::ComponentConfig.


◆ withProbability()

DropoutConfig & Mila::Dnn::DropoutConfig::withProbability ( float  probability)
inline

Configure the dropout probability.

Parameters
probability: The probability of zeroing elements (0.0 to 1.0)
Returns
DropoutConfig& Reference to this for method chaining

◆ withSameMaskPerBatch()

DropoutConfig & Mila::Dnn::DropoutConfig::withSameMaskPerBatch ( bool  use_same_mask_per_batch)
inline

Configure whether to use the same dropout mask for all elements in a batch.

Parameters
use_same_mask_per_batch: Whether to use the same mask for the entire batch
Returns
DropoutConfig& Reference to this for method chaining

◆ withScalingDuringInference()

DropoutConfig & Mila::Dnn::DropoutConfig::withScalingDuringInference ( bool  scale_during_inference)
inline

Configure whether to apply scaling during inference.

When true, outputs during inference will be scaled by 1/(1-p) to maintain the same expected value between training and inference. When false, dropout is completely disabled during inference.

Parameters
scale_during_inference: Whether to apply scaling during inference
Returns
DropoutConfig& Reference to this for method chaining

Member Data Documentation

◆ probability_

float Mila::Dnn::DropoutConfig::probability_ { 0.5f }
private

The probability of zeroing elements.

◆ scale_during_inference_

bool Mila::Dnn::DropoutConfig::scale_during_inference_ { false }
private

Whether to apply scaling during inference.

◆ use_same_mask_per_batch_

bool Mila::Dnn::DropoutConfig::use_same_mask_per_batch_ { false }
private

Whether to use the same mask for the entire batch.

