Configuration class for Layer Normalization module.
Public Member Functions

LayerNormConfig() = default
    Default constructor.

LayerNormConfig( size_t normalized_dim )
    Constructor with normalized dimension size.

int64_t getAxis() const
    Get the configured normalization axis.

float getEpsilon() const
    Get the configured epsilon value.

const std::vector<size_t> & getInputShape() const
    Get the configured input shape.

bool hasBias() const
    Check if bias is enabled.

void validate() const
    Validate configuration parameters.

LayerNormConfig & withAxis( int64_t axis )
    Set the normalization axis.

LayerNormConfig & withBias( bool has_bias )
    Set whether the layer should use bias.

LayerNormConfig & withEpsilon( float epsilon )
    Set the epsilon value for numerical stability.

LayerNormConfig & withInputShape( const std::vector<size_t> &input_shape )
    Set the input shape for the layer normalization.

Members inherited from Mila::Dnn::ComponentConfig

virtual ~ComponentConfig() = default
    Virtual destructor to support proper polymorphic destruction.

const std::string & getName() const
    Gets the configured component name.

ComputePrecision::Policy getPrecision() const
    Gets the configured precision policy.

bool isTraining() const
    Gets the configured training mode.

template<typename Self>
auto & withName( this Self &&self, std::string name )
    Sets the name of the component with fluent interface.

template<typename Self>
auto & withPrecision( this Self &&self, ComputePrecision::Policy policy )
    Sets the compute precision policy with fluent interface.

template<typename Self>
auto & withTraining( this Self &&self, bool is_training )
    Sets the training mode with fluent interface.

Private Attributes

int64_t axis_ { -1 }
    The axis along which to normalize (default: -1 for the last dimension).

float epsilon_ { 1e-5f }
    Small constant added to variance for numerical stability.

bool has_bias_ { true }
    Whether to include a learnable bias term.

std::vector<size_t> input_shape_ {}
    Shape of the input tensor [batch_size, sequence_length, channels].
|
Configuration class for Layer Normalization module.
Provides a type-safe fluent interface for configuring LayerNorm modules.
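A minimal usage sketch of the fluent interface described on this page (the include path for LayerNormConfig is not shown here and is therefore omitted; the shape and name values are arbitrary examples):

    #include <vector>

    using namespace Mila::Dnn;

    LayerNormConfig makeLayerNormConfig()
    {
        // Each with*() setter returns a reference to the config, so calls chain.
        LayerNormConfig config( 768 );                 // normalized dimension size
        config.withInputShape( { 32, 128, 768 } )      // [batch_size, sequence_length, channels]
              .withAxis( -1 )                          // normalize over the last dimension
              .withEpsilon( 1e-5f )
              .withBias( true )
              .withName( "ln_1" );                     // inherited from ComponentConfig
        config.validate();                             // throws std::invalid_argument on failure
        return config;
    }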
◆ LayerNormConfig() [1/2]

Mila::Dnn::LayerNormConfig::LayerNormConfig() = default

Default constructor.
◆ LayerNormConfig() [2/2]

Mila::Dnn::LayerNormConfig::LayerNormConfig( size_t normalized_dim ) [inline, explicit]

Constructor with normalized dimension size.
- Parameters
  - normalized_dim: The dimension size to normalize
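For illustration, a short sketch of direct construction (768 is an arbitrary example value):

    // The constructor is explicit, so the dimension must be passed directly.
    Mila::Dnn::LayerNormConfig config( 768 );
    // Mila::Dnn::LayerNormConfig bad = 768;   // ill-formed: implicit conversion is not allowed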
◆ getAxis()

int64_t Mila::Dnn::LayerNormConfig::getAxis() const [inline]

Get the configured normalization axis.
- Returns
  - int64_t The axis along which to normalize

◆ getEpsilon()

float Mila::Dnn::LayerNormConfig::getEpsilon() const [inline]

Get the configured epsilon value.
- Returns
  - float The epsilon value for numerical stability

◆ getInputShape()

const std::vector<size_t> & Mila::Dnn::LayerNormConfig::getInputShape() const [inline]

Get the configured input shape.
- Returns
  - const std::vector<size_t>& The input tensor shape

◆ hasBias()

bool Mila::Dnn::LayerNormConfig::hasBias() const [inline]

Check if bias is enabled.
- Returns
  - bool Whether the layer has bias enabled
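A brief sketch of reading back configured values through the const getters, assuming a default-constructed config (the commented values reflect the documented member defaults):

    Mila::Dnn::LayerNormConfig config;
    int64_t axis = config.getAxis();                            // -1 unless changed via withAxis()
    float epsilon = config.getEpsilon();                        // 1e-5f unless changed via withEpsilon()
    bool bias = config.hasBias();                               // true unless changed via withBias()
    const std::vector<size_t>& shape = config.getInputShape();  // empty until withInputShape() is called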
◆ validate()

void Mila::Dnn::LayerNormConfig::validate() const [inline, virtual]

Validate configuration parameters.
- Exceptions
  - std::invalid_argument: If validation fails

Reimplemented from Mila::Dnn::ComponentConfig.
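A hedged sketch of handling a validation failure (the specific conditions that cause validate() to throw are not listed on this page):

    #include <iostream>
    #include <stdexcept>

    void checkConfig( const Mila::Dnn::LayerNormConfig& config )
    {
        try {
            config.validate();
        }
        catch ( const std::invalid_argument& e ) {
            std::cerr << "Invalid LayerNorm configuration: " << e.what() << '\n';
            throw;   // rethrow so callers can decide how to recover
        }
    }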
◆ withAxis()

LayerNormConfig & Mila::Dnn::LayerNormConfig::withAxis( int64_t axis )

Set the normalization axis.
- Parameters
  - axis: The axis along which to normalize (default is -1, the last dimension)
- Returns
  - LayerNormConfig& Reference to this for method chaining

◆ withBias()

LayerNormConfig & Mila::Dnn::LayerNormConfig::withBias( bool has_bias )

Set whether the layer should use bias.
- Parameters
  - has_bias: Whether to include a learnable bias term
- Returns
  - LayerNormConfig& Reference to this for method chaining
◆ withEpsilon()

LayerNormConfig & Mila::Dnn::LayerNormConfig::withEpsilon( float epsilon )

Set the epsilon value for numerical stability.
- Parameters
  - epsilon: Small constant added to variance for numerical stability
- Returns
  - LayerNormConfig& Reference to this for method chaining
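For reference, in standard layer normalization (not specific to this class) the epsilon value appears under the square root in the denominator:

    y = (x - mean(x)) / sqrt(var(x) + epsilon)

A small positive value such as the default 1e-5f keeps the division well defined when the variance is close to zero while changing the result only negligibly.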
◆ withInputShape()

LayerNormConfig & Mila::Dnn::LayerNormConfig::withInputShape( const std::vector<size_t> & input_shape ) [inline]

Set the input shape for the layer normalization.
- Parameters
  - input_shape: The input tensor shape [batch_size, sequence_length, channels]
- Returns
  - LayerNormConfig& Reference to this for method chaining
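A short sketch of setting the input shape (the values are arbitrary examples in [batch_size, sequence_length, channels] order):

    #include <vector>

    std::vector<size_t> shape{ 32, 1024, 768 };

    Mila::Dnn::LayerNormConfig config;
    config.withInputShape( shape )
          .withAxis( -1 );   // normalize over the channels dimension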
◆ axis_

int64_t Mila::Dnn::LayerNormConfig::axis_ { -1 } [private]

The axis along which to normalize (default: -1 for the last dimension).

◆ epsilon_

float Mila::Dnn::LayerNormConfig::epsilon_ { 1e-5f } [private]

Small constant added to variance for numerical stability.

◆ has_bias_

bool Mila::Dnn::LayerNormConfig::has_bias_ { true } [private]

Whether to include a learnable bias term.

◆ input_shape_

std::vector<size_t> Mila::Dnn::LayerNormConfig::input_shape_ {} [private]

Shape of the input tensor [batch_size, sequence_length, channels].
The documentation for this class was generated from the following file: