Mila
Deep Neural Network Library
Mila::Dnn::Compute::CpuLinearOp Class Reference

CPU implementation of the Fully Connected operation for neural networks. More...

Inheritance diagram for Mila::Dnn::Compute::CpuLinearOp:
Collaboration diagram for Mila::Dnn::Compute::CpuLinearOp:

Public Types

using MR = typename CpuDevice::MR
 
using OperationBase = UnaryOperation< DeviceType::Cpu, float >
 
- Public Types inherited from Mila::Dnn::Compute::UnaryOperation< DeviceType::Cpu, float >
using MR = std::conditional_t< TDeviceType==DeviceType::Cuda, CudaMemoryResource, HostMemoryResource >
 Memory resource type based on device type.
 

Public Member Functions

 CpuLinearOp (const LinearConfig &config)
 Constructs a new CPU Fully Connected operation with the default device context.
 
 CpuLinearOp (std::shared_ptr< DeviceContext > context, const LinearConfig &config)
 Constructs a new CPU Fully Connected operation with a specific device context.
 
void backward (Tensor< float, MR > &input_grad, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameter_grads, const Tensor< float, MR > &output_grad, const Tensor< float, MR > input, const Tensor< float, MR > weight, int B, int T, int C, int OC)
 Performs the backward pass of the Fully Connected operation.
 
void forward (const Tensor< float, MR > &input, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, const OperationAttributes &properties, Tensor< float, MR > &output, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const override
 Performs the forward pass of the Linear operation.
 
std::string getName () const override
 Gets the name of this operation.
 
- Public Member Functions inherited from Mila::Dnn::Compute::UnaryOperation< DeviceType::Cpu, float >
 UnaryOperation (OperationType operation_type)
 Constructs a UnaryOperation with the specified operation type.
 
 UnaryOperation (OperationType operation_type, std::shared_ptr< DeviceContext > context)
 Constructs a UnaryOperation with the specified operation type and device context.
 
virtual ~UnaryOperation ()=default
 Virtual destructor for proper cleanup of derived classes.
 
virtual void backward (const Tensor< float, MR > &grad, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_grads) const
 Executes the backward pass of a unary operation.
 
virtual void backward (const Tensor< float, MR > &input, const Tensor< float, MR > &output_grad, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, std::vector< std::shared_ptr< Tensor< float, MR > > > &parameter_grads, Tensor< float, MR > &input_grad, const OperationAttributes &properties, const std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const
 Executes the comprehensive backward pass of a unary operation.
 
virtual void forward (const Tensor< float, MR > &input, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, const OperationAttributes &properties, Tensor< float, MR > &output, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const=0
 Executes the forward pass of a unary operation.
 
- Public Member Functions inherited from Mila::Dnn::Compute::OperationBase< TDeviceType, TInput1, TInput2, TOutput >
 OperationBase (OperationType operation_type, std::shared_ptr< DeviceContext > context)
 Constructs an OperationBase object with a specific device context and compute precision.
 
virtual ~OperationBase ()=default
 Virtual destructor for the OperationBase class.
 
std::shared_ptr< DeviceContext > getDeviceContext () const
 Gets the device context associated with this operation.
 
DeviceType getDeviceType () const
 Gets the device type for this operation.
 
OperationType getOperationType () const
 Gets the operation type enumeration value.
 

Private Member Functions

void forward_naive (const Tensor< float, MR > &input, const std::shared_ptr< Tensor< float, MR > > &weight, const std::shared_ptr< Tensor< float, MR > > &bias, Tensor< float, MR > &output, int outer_size, int C, int OC) const
 Naive implementation of the forward pass for the Fully Connected operation.
 

Private Attributes

LinearConfig config_
 Configuration for the linear operation.
 

Detailed Description

CPU implementation of the Fully Connected operation for neural networks.

This class provides a CPU-based implementation of the Fully Connected operation, which performs a matrix multiplication between the input and a weight matrix, optionally adds a bias, and produces an output. This operation implements the standard linear layer commonly used in neural networks.

The implementation includes both a performance-optimized version with loop unrolling and a naive fallback implementation for special cases.

Template Parameters
float	The data type of the input tensor elements.
TDataType	The data type used for computation and output (defaults to the input type).

Member Typedef Documentation

◆ MR

◆ OperationBase

Constructor & Destructor Documentation

◆ CpuLinearOp() [1/2]

Mila::Dnn::Compute::CpuLinearOp::CpuLinearOp ( const LinearConfig &  config)
inline

Constructs a new CPU Fully Connected operation with the default device context.

CPU operations always use full precision regardless of policy settings.

Parameters
config	Configuration for the linear operation.

◆ CpuLinearOp() [2/2]

Mila::Dnn::Compute::CpuLinearOp::CpuLinearOp ( std::shared_ptr< DeviceContext >  context,
const LinearConfig &  config 
)
inline

Constructs a new CPU Fully Connected operation with a specific device context.

CPU operations always use full precision regardless of policy settings.

Parameters
context	The device context to use for this operation.
config	Configuration for the linear operation.
Exceptions
std::runtime_error	If the context is not for a CPU device.

Member Function Documentation

◆ backward()

void Mila::Dnn::Compute::CpuLinearOp::backward ( Tensor< float, MR > &  input_grad,
const std::vector< std::shared_ptr< Tensor< float, MR > > > &  parameter_grads,
const Tensor< float, MR > &  output_grad,
const Tensor< float, MR >  input,
const Tensor< float, MR >  weight,
int  B,
int  T,
int  C,
int  OC 
)
inline

Performs the backward pass of the Fully Connected operation.

Computes gradients with respect to inputs, weights, and biases based on the output gradient.

Parameters
input_grad	Gradient tensor for the input.
parameter_grads	Gradient tensors for the weight and bias parameters (the bias gradient may be absent if no bias is used).
output_grad	Gradient tensor propagated from the output.
input	The original input tensor values.
weight	The weight parameter tensor.
B	Batch size.
T	Sequence length.
C	Input feature dimension.
OC	Output feature dimension.

◆ forward()

void Mila::Dnn::Compute::CpuLinearOp::forward ( const Tensor< float, MR > &  input,
const std::vector< std::shared_ptr< Tensor< float, MR > > > &  parameters,
const OperationAttributes &  properties,
Tensor< float, MR > &  output,
std::vector< std::shared_ptr< Tensor< float, MR > > > &  output_state 
) const
inline override

Performs the forward pass of the Linear operation.

Computes the matrix multiplication between the input and the weights, adds the bias if provided, and stores the result in the output tensor. Uses loop unrolling for performance optimization when possible; otherwise it falls back to a naive implementation.

Parameters
input	Input tensor of shape [B, T, C], where B is batch size, T is sequence length, and C is input feature dimension.
parameters	Vector of parameter tensors [weight, bias], where weight has shape [OC, C] and the optional bias has shape [OC].
properties	Additional attributes for the operation.
output	Output tensor of shape [B, T, OC], where OC is output feature dimension.
output_state	Cache for intermediate results (not used by this operation).

◆ forward_naive()

void Mila::Dnn::Compute::CpuLinearOp::forward_naive ( const Tensor< float, MR > &  input,
const std::shared_ptr< Tensor< float, MR > > &  weight,
const std::shared_ptr< Tensor< float, MR > > &  bias,
Tensor< float, MR > &  output,
int  outer_size,
int  C,
int  OC 
) const
inline private

Naive implementation of the forward pass for the Fully Connected operation.

This is a simple implementation without optimizations that serves as a fallback for cases where the optimized implementation cannot be used.

Parameters
input	Input tensor.
weight	Weight tensor.
bias	Bias tensor (optional).
output	Output tensor.
outer_size	Combined outer dimension (batch size times sequence length).
C	Input feature dimension.
OC	Output feature dimension.

◆ getName()

std::string Mila::Dnn::Compute::CpuLinearOp::getName ( ) const
inline override virtual

Gets the name of this operation.

Returns
std::string The name of the operation ("Cpu::LinearOp").

Implements Mila::Dnn::Compute::OperationBase< TDeviceType, TInput1, TInput2, TOutput >.

Member Data Documentation

◆ config_

LinearConfig Mila::Dnn::Compute::CpuLinearOp::config_
private

Configuration for the linear operation.

