Mila
Deep Neural Network Library
Mila::Dnn::Compute::CpuEncoderOp Class Reference

CPU implementation of the encoder operation for neural networks. More...


Public Types

using MR = typename CpuDevice::MR
 
using OperationBase = UnaryOperation< DeviceType::Cpu, int, float >
 
- Public Types inherited from Mila::Dnn::Compute::UnaryOperation< DeviceType::Cpu, int, float >
using MR = std::conditional_t< TDeviceType==DeviceType::Cuda, CudaMemoryResource, HostMemoryResource >
 Memory resource type based on device type.
 

Public Member Functions

 CpuEncoderOp (const EncoderConfig &config)
 Constructs a new CPU Encoder operation with the default device context.
 
 CpuEncoderOp (std::shared_ptr< DeviceContext > context, const EncoderConfig &config)
 Constructs a new CPU Encoder operation with a specific device context.
 
void backward (const Tensor< int, MR > &input, const Tensor< float, MR > &output, const Tensor< float, MR > &output_gradient, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, std::vector< std::shared_ptr< Tensor< float, MR > > > &parameter_gradients, Tensor< int, MR > &input_gradient, const OperationAttributes &attributes, const std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const
 Performs the backward pass of the encoder operation.
 
void forward (const Tensor< int, MR > &input, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, const OperationAttributes &attributes, Tensor< float, MR > &output, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const override
 Performs the forward pass of the encoder operation.
 
std::string getName () const override
 Gets the name of this operation.
 
- Public Member Functions inherited from Mila::Dnn::Compute::UnaryOperation< DeviceType::Cpu, int, float >
 UnaryOperation (OperationType operation_type)
 Constructs a UnaryOperation with the specified operation type.
 
 UnaryOperation (OperationType operation_type, std::shared_ptr< DeviceContext > context)
 Constructs a UnaryOperation with the specified operation type and device context.
 
virtual ~UnaryOperation ()=default
 Virtual destructor for proper cleanup of derived classes.
 
virtual void backward (const Tensor< int, MR > &grad, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_grads) const
 Executes the backward pass of a unary operation.
 
virtual void backward (const Tensor< int, MR > &input, const Tensor< float, MR > &output_grad, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, std::vector< std::shared_ptr< Tensor< float, MR > > > &parameter_grads, Tensor< int, MR > &input_grad, const OperationAttributes &properties, const std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const
 Executes the comprehensive backward pass of a unary operation.
 
virtual void forward (const Tensor< int, MR > &input, const std::vector< std::shared_ptr< Tensor< float, MR > > > &parameters, const OperationAttributes &properties, Tensor< float, MR > &output, std::vector< std::shared_ptr< Tensor< float, MR > > > &output_state) const=0
 Executes the forward pass of a unary operation.
 
- Public Member Functions inherited from Mila::Dnn::Compute::OperationBase< TDeviceType, TInput1, TInput2, TOutput >
 OperationBase (OperationType operation_type, std::shared_ptr< DeviceContext > context)
 Constructs an OperationBase object with a specific device context and compute precision.
 
virtual ~OperationBase ()=default
 Virtual destructor for the OperationBase class.
 
std::shared_ptr< DeviceContext > getDeviceContext () const
 Gets the device context associated with this operation.
 
DeviceType getDeviceType () const
 Gets the device type for this operation.
 
OperationType getOperationType () const
 Gets the operation type enumeration value.
 

Private Attributes

EncoderConfig config_
 Configuration for the encoder operation.
 

Detailed Description

CPU implementation of the encoder operation for neural networks.

This class provides a CPU-based implementation of the encoder operation, which combines token embeddings and positional embeddings.

Template Parameters
TInput: The data type of the input tensor elements (typically int for token indices).
TDataType: The data type used for computation and output (typically float).
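
A minimal construction sketch follows; EncoderConfig is assumed to be default-constructible and populated elsewhere (its fields are not documented on this page), and only the CpuEncoderOp constructor shown in this reference is used:

    // Sketch only: include the Mila encoder operation headers (paths not shown on this page).
    using namespace Mila::Dnn;
    using namespace Mila::Dnn::Compute;

    // How EncoderConfig is populated is an assumption.
    void make_cpu_encoder()
    {
        EncoderConfig config;        // assumed default-constructible, filled in elsewhere
        CpuEncoderOp op( config );   // default CPU device context; CPU ops always run in full precision
    }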

Member Typedef Documentation

◆ MR

using Mila::Dnn::Compute::CpuEncoderOp::MR = typename CpuDevice::MR

◆ OperationBase

using Mila::Dnn::Compute::CpuEncoderOp::OperationBase = UnaryOperation< DeviceType::Cpu, int, float >

Constructor & Destructor Documentation

◆ CpuEncoderOp() [1/2]

Mila::Dnn::Compute::CpuEncoderOp::CpuEncoderOp ( const EncoderConfig & config )
inline

Constructs a new CPU Encoder operation with the default device context.

CPU operations always use full precision regardless of policy settings.

◆ CpuEncoderOp() [2/2]

Mila::Dnn::Compute::CpuEncoderOp::CpuEncoderOp ( std::shared_ptr< DeviceContext > context,
const EncoderConfig & config
)
inline

Constructs a new CPU Encoder operation with a specific device context.

CPU operations always use full precision regardless of policy settings.

Parameters
context: The device context to use for this operation.
config: Configuration for the encoder operation.
Exceptions
std::runtime_error: If the context is not for a CPU device.
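
A hedged sketch of using this constructor; obtaining a CPU DeviceContext is left to the caller because its construction is not documented here, and only the signature and the std::runtime_error behaviour above are taken from this page:

    #include <memory>
    #include <stdexcept>

    using namespace Mila::Dnn;
    using namespace Mila::Dnn::Compute;

    // Sketch only: the context is assumed to have been created for a CPU device elsewhere.
    void make_cpu_encoder_with_context( std::shared_ptr<DeviceContext> context,
                                        const EncoderConfig& config )
    {
        try {
            CpuEncoderOp op( context, config );   // throws if the context is not a CPU device
        }
        catch ( const std::runtime_error& ) {
            // The supplied context was not for a CPU device; handle or re-throw here.
        }
    }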

Member Function Documentation

◆ backward()

void Mila::Dnn::Compute::CpuEncoderOp::backward ( const Tensor< int, MR > &  input,
const Tensor< float, MR > &  output,
const Tensor< float, MR > &  output_gradient,
const std::vector< std::shared_ptr< Tensor< float, MR > > > &  parameters,
std::vector< std::shared_ptr< Tensor< float, MR > > > &  parameter_gradients,
Tensor< int, MR > &  input_gradient,
const OperationAttributes & attributes,
const std::vector< std::shared_ptr< Tensor< float, MR > > > &  output_state 
) const
inline

Performs the backward pass of the encoder operation.

Computes gradients with respect to inputs and parameters.

Parameters
input: Input tensor from the forward pass.
output: Output tensor from the forward pass.
output_gradient: Gradient of the loss with respect to the output.
parameters: Parameter tensors from the forward pass.
parameter_gradients: Gradients for the parameters.
input_gradient: Gradient of the loss with respect to the input.
attributes: Additional attributes for the operation.
output_state: Cache tensors from the forward pass.
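
A hedged sketch of a backward call with the documented arguments; allocation and shaping of the tensors are assumptions and happen outside this helper:

    #include <memory>
    #include <vector>

    using namespace Mila::Dnn;
    using namespace Mila::Dnn::Compute;

    // Sketch only: all tensors are assumed to be allocated with compatible shapes elsewhere.
    void run_encoder_backward( const CpuEncoderOp& op,
                               const Tensor<int, CpuEncoderOp::MR>& input,
                               const Tensor<float, CpuEncoderOp::MR>& output,
                               const Tensor<float, CpuEncoderOp::MR>& output_gradient,
                               const std::vector<std::shared_ptr<Tensor<float, CpuEncoderOp::MR>>>& parameters,
                               std::vector<std::shared_ptr<Tensor<float, CpuEncoderOp::MR>>>& parameter_gradients,
                               Tensor<int, CpuEncoderOp::MR>& input_gradient,
                               const OperationAttributes& attributes,
                               const std::vector<std::shared_ptr<Tensor<float, CpuEncoderOp::MR>>>& output_state )
    {
        // parameters and output_state should be the same objects produced by forward();
        // parameter_gradients receives the gradients for the embedding tables.
        op.backward( input, output, output_gradient,
                     parameters, parameter_gradients,
                     input_gradient, attributes, output_state );
    }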

◆ forward()

void Mila::Dnn::Compute::CpuEncoderOp::forward ( const Tensor< int, MR > &  input,
const std::vector< std::shared_ptr< Tensor< float, MR > > > &  parameters,
const OperationAttributes & attributes,
Tensor< float, MR > &  output,
std::vector< std::shared_ptr< Tensor< float, MR > > > &  output_state 
) const
inline override

Performs the forward pass of the encoder operation.

Combines token embeddings and positional embeddings for input token indices.

Parameters
input: Input tensor containing token indices.
parameters: Parameter tensors containing the embeddings and other parameters.
attributes: Additional attributes for the operation.
output: Output tensor to store the resulting embeddings.
output_state: Cache for storing intermediate results (used in the backward pass).
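
A hedged sketch of a forward call with the documented arguments; tensor allocation and the contents of parameters are assumptions and happen outside this helper:

    #include <memory>
    #include <vector>

    using namespace Mila::Dnn;
    using namespace Mila::Dnn::Compute;

    // Sketch only: all tensors are assumed to be allocated with compatible shapes elsewhere.
    void run_encoder_forward( const CpuEncoderOp& op,
                              const Tensor<int, CpuEncoderOp::MR>& input,
                              const std::vector<std::shared_ptr<Tensor<float, CpuEncoderOp::MR>>>& parameters,
                              const OperationAttributes& attributes,
                              Tensor<float, CpuEncoderOp::MR>& output,
                              std::vector<std::shared_ptr<Tensor<float, CpuEncoderOp::MR>>>& output_state )
    {
        // output receives the combined token + positional embeddings;
        // output_state caches intermediates that backward() consumes later.
        op.forward( input, parameters, attributes, output, output_state );
    }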

◆ getName()

std::string Mila::Dnn::Compute::CpuEncoderOp::getName ( ) const
inline override virtual

Gets the name of this operation.

Returns
std::string The name of the operation ("Cpu::EncoderOp").

Implements Mila::Dnn::Compute::OperationBase< TDeviceType, TInput1, TInput2, TOutput >.

Member Data Documentation

◆ config_

EncoderConfig Mila::Dnn::Compute::CpuEncoderOp::config_
private

Configuration for the encoder operation.

