Mila Deep Neural Network Library
CPU implementation of the Fully Connected operation for neural networks. More...


Public Types | |
| using | MR = typename CpuDevice::MR |
| using | OperationBase = UnaryOperation< DeviceType::Cpu, float > |
Public Types inherited from Mila::Dnn::Compute::UnaryOperation< DeviceType::Cpu, float > | |
| using | MR = std::conditional_t< TDeviceType==DeviceType::Cuda, CudaMemoryResource, HostMemoryResource > |
| Memory resource type based on device type. | |
Public Member Functions | |
| CpuLinearOp (const LinearConfig &config) | |
| Constructs a new CPU Fully Connected operation with the default device context. | |
| CpuLinearOp (std::shared_ptr< DeviceContext > context, const LinearConfig &config) | |
| Constructs a new CPU Fully Connected operation with a specific device context. | |
| void | backward (Tensor< float, MR > &input_grad, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameter_grads, const Tensor< float, MR > &output_grad, const Tensor< float, MR > input, const Tensor< float, MR > weight, int B, int T, int C, int OC)
| Performs the backward pass of the Fully Connected operation. | |
| void | forward (const Tensor< float, MR > &input, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, const OperationAttributes &properties, Tensor< float, MR > &output, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const override
| Performs the forward pass of the Linear operation. | |
| std::string | getName () const override |
| Gets the name of this operation. | |
Public Member Functions inherited from Mila::Dnn::Compute::UnaryOperation< DeviceType::Cpu, float > | |
| UnaryOperation (OperationType operation_type) | |
| Constructs a UnaryOperation with the specified operation type. | |
| UnaryOperation (OperationType operation_type, std::shared_ptr< DeviceContext > context) | |
| Constructs a UnaryOperation with the specified operation type and device context. | |
| virtual | ~UnaryOperation ()=default |
| Virtual destructor for proper cleanup of derived classes. | |
| virtual void | backward (const Tensor< float, MR > &grad, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_grads) const
| Executes the backward pass of a unary operation. | |
| virtual void | backward (const Tensor< float, MR > &input, const Tensor< float, MR > &output_grad, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, std::vector< std::shared_ptr< Tensor< float, MR > > > &parameter_grads, Tensor< float, MR > &input_grad, const OperationAttributes &properties, const std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const
| Executes the comprehensive backward pass of a unary operation. | |
| virtual void | forward (const Tensor< float, MR > &input, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, const OperationAttributes &properties, Tensor< float, MR > &output, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const=0
| Executes the forward pass of a unary operation. | |
Public Member Functions inherited from Mila::Dnn::Compute::OperationBase< TDeviceType, TInput1, TInput2, TOutput > | |
| OperationBase (OperationType operation_type, std::shared_ptr< DeviceContext > context) | |
| Constructs an OperationBase object with a specific device context and compute precision. | |
| virtual | ~OperationBase ()=default |
| Virtual destructor for the OperationBase class. | |
| std::shared_ptr< DeviceContext > | getDeviceContext () const |
| Gets the device context associated with this operation. | |
| DeviceType | getDeviceType () const |
| Gets the device type for this operation. | |
| OperationType | getOperationType () const |
| Gets the operation type enumeration value. | |
Private Member Functions | |
| void | forward_naive (const Tensor< float, MR > &input, const std::shared_ptr< Tensor< float, MR > > &weight, const std::shared_ptr< Tensor< float, MR > > &bias, Tensor< float, MR > &output, int outer_size, int C, int OC) const |
| Naive implementation of the forward pass for the Fully Connected operation. | |
Private Attributes | |
| LinearConfig | config_ |
| Configuration for the linear operation. | |
CPU implementation of the Fully Connected operation for neural networks.
This class provides a CPU-based implementation of the Fully Connected operation, which performs a matrix multiplication between the input and a weight matrix, optionally adds a bias, and produces an output. This operation implements the standard linear layer commonly used in neural networks.
The implementation includes both a performance-optimized version with loop unrolling and a naive fallback implementation for special cases.
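The optimized path described above can be illustrated with a standalone kernel. The sketch below is a hypothetical stand-in, not the Mila implementation: it assumes row-major layouts (input [outer, C] with outer = B * T, weight [OC, C], optional bias [OC]) and unrolls the inner reduction four elements at a time, with a remainder loop for dimensions that are not multiples of four.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of an unrolled linear forward kernel:
// output[o][oc] = dot(input[o], weight[oc]) + bias[oc].
// Layouts assumed: input [outer, C], weight [OC, C], bias [OC] (may be
// empty when no bias is used), output [outer, OC].
void linear_forward_unrolled(const std::vector<float>& input,
                             const std::vector<float>& weight,
                             const std::vector<float>& bias,
                             std::vector<float>& output,
                             std::size_t outer, std::size_t C, std::size_t OC) {
    for (std::size_t o = 0; o < outer; ++o) {
        const float* x = &input[o * C];
        for (std::size_t oc = 0; oc < OC; ++oc) {
            const float* w = &weight[oc * C];
            // Four independent accumulators reduce the dependency chain.
            float acc0 = 0.f, acc1 = 0.f, acc2 = 0.f, acc3 = 0.f;
            std::size_t c = 0;
            for (; c + 4 <= C; c += 4) {  // 4-way unrolled reduction
                acc0 += x[c] * w[c];
                acc1 += x[c + 1] * w[c + 1];
                acc2 += x[c + 2] * w[c + 2];
                acc3 += x[c + 3] * w[c + 3];
            }
            float acc = acc0 + acc1 + acc2 + acc3;
            for (; c < C; ++c) acc += x[c] * w[c];  // remainder loop
            if (!bias.empty()) acc += bias[oc];
            output[o * OC + oc] = acc;
        }
    }
}
```

Separate accumulators let the compiler keep four partial sums in registers, which is the usual payoff of this kind of unrolling on CPU.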
Template Parameters | |
| TInput | The data type of the input tensor elements. |
| TDataType | The data type used for computation and output (defaults to the input type). |
| using Mila::Dnn::Compute::CpuLinearOp::MR = typename CpuDevice::MR |
inline
Constructs a new CPU Fully Connected operation with the default device context.
CPU operations always use full precision regardless of policy settings.
| config | Configuration describing the linear operation. |
inline
Constructs a new CPU Fully Connected operation with a specific device context.
CPU operations always use full precision regardless of policy settings.
| context | The device context to use for this operation. |
| config | Configuration describing the linear operation. |
| std::runtime_error | If the context is not for a CPU device. |
inline
Performs the backward pass of the Fully Connected operation.
Computes gradients with respect to inputs, weights, and biases based on the output gradient.
| input_grad | Gradient tensor for the input. |
| parameter_grads | Gradient tensors for the parameters [weight, bias]; the bias gradient may be omitted if no bias is used. |
| output_grad | Gradient tensor flowing back from the output. |
| input | The original input tensor. |
| weight | The weight tensor. |
| B | Batch size. |
| T | Sequence length. |
| C | Input feature dimension. |
| OC | Output feature dimension. |
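The gradient math behind this backward pass can be sketched as follows. This is a hypothetical illustration assuming row-major layouts (dout [outer, OC], input [outer, C], weight [OC, C], with outer = B * T) and gradient accumulation into pre-zeroed buffers; it is not the Mila routine itself.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the linear backward pass. Given upstream gradient dout,
// accumulate:
//   dinput[o][c]   += sum_oc dout[o][oc] * weight[oc][c]
//   dweight[oc][c] += sum_o  dout[o][oc] * input[o][c]
//   dbias[oc]      += sum_o  dout[o][oc]
void linear_backward(const std::vector<float>& dout,
                     const std::vector<float>& input,
                     const std::vector<float>& weight,
                     std::vector<float>& dinput,
                     std::vector<float>& dweight,
                     std::vector<float>& dbias,  // may be empty (no bias)
                     std::size_t outer, std::size_t C, std::size_t OC) {
    for (std::size_t o = 0; o < outer; ++o) {
        for (std::size_t oc = 0; oc < OC; ++oc) {
            const float g = dout[o * OC + oc];
            for (std::size_t c = 0; c < C; ++c) {
                dinput[o * C + c] += g * weight[oc * C + c];
                dweight[oc * C + c] += g * input[o * C + c];
            }
            if (!dbias.empty()) dbias[oc] += g;
        }
    }
}
```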

inline override
Performs the forward pass of the Linear operation.
Computes the matrix multiplication between input and weights, adds bias if provided, and stores the result in the output tensor. Uses loop unrolling for performance optimization when possible, otherwise falls back to a naive implementation.
| input | Input tensor of shape [B, T, C] where B is batch size, T is sequence length, and C is input feature dimension. |
| parameters | Vector of parameter tensors [weight, bias] where weight is of shape [OC, C] and bias (optional) is of shape [OC]. |
| properties | Additional attributes for the operation. |
| output | Output tensor of shape [B, T, OC] where OC is output feature dimension. |
| output_state | Cache for intermediate results (not used in this operation). |

inline private
Naive implementation of the forward pass for the Fully Connected operation.
This is a simple implementation without optimizations that serves as a fallback for cases where the optimized implementation cannot be used.
| input | Input tensor. |
| weight | Weight tensor. |
| bias | Bias tensor (optional). |
| output | Output tensor. |
| outer_size | Combined outer dimension (batch size × sequence length). |
| C | Input feature dimension. |
| OC | Output feature dimension. |
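The naive fallback amounts to the plain triple loop below. This is a minimal sketch under assumed row-major layouts (input [outer_size, C], weight [OC, C], optional bias [OC]), not the actual Mila routine.

```cpp
#include <cstddef>
#include <vector>

// Plain triple-loop fallback: no unrolling, one scalar accumulator per
// (row, output-channel) pair. Serves where the optimized path cannot run.
void linear_forward_naive(const std::vector<float>& input,
                          const std::vector<float>& weight,
                          const std::vector<float>& bias,
                          std::vector<float>& output,
                          std::size_t outer_size, std::size_t C,
                          std::size_t OC) {
    for (std::size_t o = 0; o < outer_size; ++o) {
        for (std::size_t oc = 0; oc < OC; ++oc) {
            float acc = bias.empty() ? 0.f : bias[oc];  // start from bias
            for (std::size_t c = 0; c < C; ++c)
                acc += input[o * C + c] * weight[oc * C + c];
            output[o * OC + oc] = acc;
        }
    }
}
```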


inline override virtual
Gets the name of this operation.
Implements Mila::Dnn::Compute::OperationBase< TDeviceType, TInput1, TInput2, TOutput >.
private
Configuration for the linear operation.