Mila
Deep Neural Network Library
Mila::Dnn::Softmax< TDeviceType, TInput, TOutput > Class Template Reference

Softmax module for neural networks. More...


Public Types

using ModuleBase = Module< TDeviceType, TInput, TOutput >
 Alias for base module type.
 
using MR = std::conditional_t< TDeviceType==DeviceType::Cuda, CudaMemoryResource, CpuMemoryResource >
 Memory resource type used for tensors, selected based on device type.
 
- Public Types inherited from Mila::Dnn::Module< TDeviceType, TInput, TOutput >
using MR = std::conditional_t< TDeviceType==DeviceType::Cuda, CudaMemoryResource, CpuMemoryResource >
 

Public Member Functions

 Softmax (const std::string &device_name, const SoftmaxConfig &config)
 Constructs a new Softmax module with a device name.
 
 Softmax (std::shared_ptr< DeviceContext > device_context, const SoftmaxConfig &config)
 Constructs a new Softmax module with a provided device context.
 
void backward (const Tensor< TInput, MR > &input, const Tensor< TOutput, MR > &output_grad, Tensor< TInput, MR > &input_grad)
 Performs the backward pass of the Softmax operation.
 
void forward (const Tensor< TInput, MR > &input, Tensor< TOutput, MR > &output)
 Performs the forward pass of the softmax operation.
 
int64_t getAxis () const
 Gets the axis used for softmax computation.
 
void load (ModelArchive &archive) override
 Deserializes the module state from a ZIP archive.
 
size_t parameterCount () const override
 Gets the number of trainable parameters in this module.
 
void save (ModelArchive &zip) const override
 Serializes the module state to a ZIP archive.
 
std::string toString () const override
 Generates a string representation of this module's configuration.
 
- Public Member Functions inherited from Mila::Dnn::Module< TDeviceType, TInput, TOutput >
 Module (const std::string &device_name, const ComponentConfig &config)
 Constructor with device name.
 
 Module (std::shared_ptr< DeviceContext > context, const ComponentConfig &config)
 Constructor with a specific device context.
 
virtual ~Module ()=default
 Virtual destructor for proper cleanup in derived classes.
 
std::shared_ptr< Compute::DeviceContext > getDeviceContext () const
 Get the device context for this module.
 
Compute::DeviceType getDeviceType () const
 Get the device type of the current device context.
 
std::string getName () const
 Get the name of the module.
 
const auto & getParameterTensors () const
 Get the parameter tensors of this module.
 
const ComputePrecision::Policy & getPrecision () const
 
const auto & getStateTensors () const
 Get the state tensors of this module.
 
bool isTraining () const
 Check if the module is in training mode.
 
virtual void setTraining (bool is_training)
 Set the training mode of this module.
 

Private Member Functions

void createOperation ()
 Creates the appropriate softmax operation for the current device.
 

Private Attributes

OperationAttributes attributes_
 Operation attributes and configuration.
 
SoftmaxConfig config_
 Configuration for the Softmax module.
 
std::shared_ptr< UnaryOperation< TDeviceType, TInput, TOutput > > operation_ { nullptr }
 The operation that implements the softmax calculation.
 
std::vector< std::shared_ptr< Tensor< TOutput, MR > > > output_state_
 Collection of output state tensors for caching.
 
std::vector< std::shared_ptr< Tensor< TInput, MR > > > parameters_
 Collection of parameters for this module (empty for Softmax).
 

Additional Inherited Members

- Protected Member Functions inherited from Mila::Dnn::Module< TDeviceType, TInput, TOutput >
const std::string parametersToString () const
 Helper method to convert parameters to string representation.
 
const std::string stateToString () const
 Helper method to convert state tensors to string representation.
 
- Protected Attributes inherited from Mila::Dnn::Module< TDeviceType, TInput, TOutput >
std::unordered_map< std::string, std::shared_ptr< Tensor< TOutput, MR > > > parameter_map_ = {}
 Map of parameter names to parameter tensors.
 
std::unordered_map< std::string, std::shared_ptr< Tensor< TOutput, MR > > > state_map_ = {}
 Map of state names to state tensors.
 

Detailed Description

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
requires ValidFloatTensorTypes<TInput, TOutput>
class Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >

Softmax module for neural networks.

This class implements the softmax function, which is often used in the final layer of a neural network to convert raw scores into probabilities. The softmax operation normalizes the input values by applying:

softmax(x_i) = exp(x_i) / sum(exp(x_j)) for all j

where the sum is computed over the specified axis. This normalization ensures all values sum to 1, allowing them to be interpreted as probabilities for classification tasks.

Template Parameters
TDeviceType: The device type (CPU or CUDA) on which to perform computations.
TInput: The data type of the input tensor elements.
TOutput: The data type of the output tensor elements; defaults to TInput.

Member Typedef Documentation

◆ ModuleBase

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
using Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::ModuleBase = Module<TDeviceType, TInput, TOutput>
export

Alias for base module type.

◆ MR

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
using Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::MR = std::conditional_t<TDeviceType == DeviceType::Cuda, CudaMemoryResource, CpuMemoryResource>
export

Memory resource type used for tensors, selected based on device type.

Constructor & Destructor Documentation

◆ Softmax() [1/2]

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::Softmax ( const std::string & device_name, const SoftmaxConfig & config )
inline explicit export

Constructs a new Softmax module with a device name.

Creates a new DeviceContext internally using the provided device name. This constructor is useful for creating standalone modules without pre-existing device contexts.

Parameters
device_name: The name of the device to use (e.g., "CPU", "CUDA:0").
config: Configuration parameters for the Softmax module.
Exceptions
std::invalid_argument: If the device name or the configuration is invalid.
std::runtime_error: If the device type doesn't match the template parameter TDeviceType.

◆ Softmax() [2/2]

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::Softmax ( std::shared_ptr< DeviceContext device_context,
const SoftmaxConfig config 
)
inlineexplicitexport

Constructs a new Softmax module with a provided device context.

Uses a pre-existing DeviceContext instance. This constructor is useful when integrating the module into a larger network that shares device contexts across modules.

Parameters
device_context: The device context to use for this module.
config: Configuration parameters for the Softmax module.
Exceptions
std::invalid_argument: If device_context is null or the configuration is invalid.
std::runtime_error: If the device context type doesn't match the template parameter TDeviceType.

Member Function Documentation

◆ backward()

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
void Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::backward ( const Tensor< TInput, MR > & input, const Tensor< TOutput, MR > & output_grad, Tensor< TInput, MR > & input_grad )
inline export

Performs the backward pass of the Softmax operation.

Computes the gradient of the softmax function with respect to its inputs. The gradient of softmax is more complex than most activations because each output depends on all inputs in the same dimension.

Parameters
input: The input tensor from the forward pass.
output_grad: The gradient of the loss with respect to the output.
input_grad: The tensor to store gradients with respect to the input.

◆ createOperation()

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
void Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::createOperation ( )
inline export private

Creates the appropriate softmax operation for the current device.

Instantiates either a CPU or CUDA softmax operation based on the device type. Sets the axis attribute needed by the operation to properly apply softmax along the specified dimension.


◆ forward()

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
void Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::forward ( const Tensor< TInput, MR > & input, Tensor< TOutput, MR > & output )
inline export

Performs the forward pass of the softmax operation.

Computes the softmax of the input tensor along the specified axis and writes the result to the output tensor. The operation exponentiates each element and then normalizes by the sum of all exponentiated values along the specified axis.

Parameters
input: The input tensor to apply softmax to.
output: The tensor where the softmax results will be stored.

◆ getAxis()

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
int64_t Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::getAxis ( ) const
inline export

Gets the axis used for softmax computation.

Returns
int64_t The axis along which softmax is applied.

◆ load()

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
void Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::load ( ModelArchive & archive )
inline override export virtual

Deserializes the module state from a ZIP archive.

Implementation of the Module interface for deserialization. Since Softmax has no learnable parameters, this is a no-op implementation.

Parameters
archive: ZIP archive for deserialization.

Implements Mila::Dnn::Module< TDeviceType, TInput, TOutput >.

◆ parameterCount()

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
size_t Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::parameterCount ( ) const
inline override export virtual

Gets the number of trainable parameters in this module.

The Softmax module has no trainable parameters as it's a fixed mathematical operation.

Returns
size_t Always returns 0 as Softmax has no parameters.

Implements Mila::Dnn::Module< TDeviceType, TInput, TOutput >.

◆ save()

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
void Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::save ( ModelArchive & zip ) const
inline override export virtual

Serializes the module state to a ZIP archive.

Implementation of the Module interface for serialization. Since Softmax has no learnable parameters, this is a no-op implementation.

Parameters
zip: ZIP archive for serialization.

Implements Mila::Dnn::Module< TDeviceType, TInput, TOutput >.

◆ toString()

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
std::string Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::toString ( ) const
inline override export virtual

Generates a string representation of this module's configuration.

Returns
std::string A formatted string with the module name, axis, device, and precision info.

Implements Mila::Dnn::Module< TDeviceType, TInput, TOutput >.


Member Data Documentation

◆ attributes_

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
OperationAttributes Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::attributes_
export private

Operation attributes and configuration.

◆ config_

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
SoftmaxConfig Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::config_
export private

Configuration for the Softmax module.

◆ operation_

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
std::shared_ptr<UnaryOperation<TDeviceType, TInput, TOutput> > Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::operation_ { nullptr }
export private

The operation that implements the softmax calculation.

◆ output_state_

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
std::vector<std::shared_ptr<Tensor<TOutput, MR> > > Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::output_state_
export private

Collection of output state tensors for caching.

◆ parameters_

template<DeviceType TDeviceType = DeviceType::Cuda, typename TInput = float, typename TOutput = TInput>
std::vector<std::shared_ptr<Tensor<TInput, MR> > > Mila::Dnn::Softmax< TDeviceType, TInput, TOutput >::parameters_
export private

Collection of parameters for this module (empty for Softmax).


The documentation for this class was generated from the following file: