Mila Deep Neural Network Library

Mila::Dnn::Compute::OperationBase Class Template Reference

Base class for all compute operations in the Mila neural network framework.
Public Member Functions
OperationBase (OperationType operation_type, std::shared_ptr< DeviceContext > context)
    Constructs an OperationBase object with a specific device context and compute precision.

virtual ~OperationBase ()=default
    Virtual destructor for the OperationBase class.

std::shared_ptr< DeviceContext > getDeviceContext () const
    Gets the device context associated with this operation.

DeviceType getDeviceType () const
    Gets the device type for this operation.

virtual std::string getName () const =0
    Gets the name of the operation.

OperationType getOperationType () const
    Gets the operation type enumeration value.
Private Attributes
std::shared_ptr< DeviceContext > device_context_
    The device context for execution.

OperationType operation_type_
    The operation type identifier.
Detailed Description

Base class for all compute operations in the Mila neural network framework.

This abstract base class defines the common interface for all operations that can be performed in the neural network computation graph, regardless of the device type (CPU, CUDA, etc.). Specific operations inherit from this class and implement their specialized behavior while adhering to a consistent interface. A minimal sketch of a derived operation follows the template parameter list below.
Template Parameters
    TInput1      The data type of the first input tensor elements. Must satisfy the ValidTensorType constraint.
    TInput2      The data type of the second input tensor elements; defaults to TInput1. Must satisfy the ValidTensorType constraint.
    TOutput      The data type of the output tensor elements; defaults to TInput1. Must satisfy the ValidFloatTensorType constraint.
    TDeviceType  The target device type for the operation; defaults to DeviceType::Cuda.
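A minimal sketch of a concrete operation, assuming the Mila headers are available. The class name MyScaleOp and the OperationType enumerator used below are illustrative placeholders, not part of the documented API:

    #include <memory>
    #include <string>

    using namespace Mila::Dnn::Compute;

    // Hypothetical element-wise operation; relies on the template defaults
    // (TInput2 = TInput1, TOutput = TInput1, TDeviceType = DeviceType::Cuda).
    class MyScaleOp : public OperationBase<float> {
    public:
        explicit MyScaleOp( std::shared_ptr<DeviceContext> context )
            : OperationBase<float>( OperationType::ElementWise /* assumed enumerator */, context ) {}

        // The only pure virtual member: return a unique, descriptive name.
        std::string getName() const override { return "MyScaleOp"; }
    };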
OperationBase (OperationType operation_type, std::shared_ptr< DeviceContext > context)    [inline]
Constructs an OperationBase object with a specific device context and compute precision.

Initializes the operation with the specified operation type and device context, using the compute precision determined by the template parameters.

Parameters
    operation_type  The type of the operation (from the OperationType enum).
    context         The device context to use for this operation. Must not be null.

Exceptions
    std::invalid_argument  May be thrown if context is null (implementation dependent).
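A hedged construction sketch using the hypothetical MyScaleOp from above; makeScaleOp is an illustrative helper, and how a DeviceContext is obtained is framework-specific:

    #include <memory>
    #include <stdexcept>

    std::shared_ptr<OperationBase<float>> makeScaleOp( std::shared_ptr<DeviceContext> context ) {
        // Null-context handling in the base class is implementation dependent,
        // so validate here before constructing the operation.
        if ( !context ) {
            throw std::invalid_argument( "OperationBase requires a non-null DeviceContext" );
        }
        return std::make_shared<MyScaleOp>( context );
    }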
virtual ~OperationBase ()=default    [virtual, default]
Virtual destructor for the OperationBase class.
Ensures proper cleanup of derived class resources when destroyed through a base class pointer. Default implementation is sufficient for this base class.
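For instance, owning an operation through a base-class pointer (MyScaleOp is the hypothetical class from the earlier sketch):

    #include <memory>

    void destroyThroughBasePointer( std::shared_ptr<DeviceContext> context ) {
        std::unique_ptr<OperationBase<float>> op = std::make_unique<MyScaleOp>( context );
        op.reset();   // runs ~MyScaleOp() first, then ~OperationBase(), because the destructor is virtual
    }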
std::shared_ptr< DeviceContext > getDeviceContext () const    [inline]
Gets the device context associated with this operation.
The device context contains information about the execution environment, including the device, streams, and memory resources. This context is used for all device interactions performed by this operation.
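A small usage sketch: the returned context can be reused to place another operation on the same device (MyScaleOp is the hypothetical class from the first sketch):

    // 'op' is any existing operation instance.
    std::shared_ptr<DeviceContext> ctx = op->getDeviceContext();
    auto sibling = std::make_shared<MyScaleOp>( ctx );   // shares the same execution environment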
DeviceType getDeviceType () const    [inline]
Gets the device type for this operation.
This is a convenience method that retrieves the device type from the associated device context, delegating to the context's device to determine the actual hardware target.
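A sketch of branching on the hardware target; DeviceType::Cuda appears in the template defaults above, while DeviceType::Cpu is assumed here for illustration:

    // 'op' is any existing operation instance.
    if ( op->getDeviceType() == DeviceType::Cuda ) {
        // take the CUDA path (kernels, device memory, streams from the context)
    } else {
        // fall back to a host/CPU implementation (DeviceType::Cpu assumed)
    }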
virtual std::string getName () const =0    [pure virtual]
Gets the name of the operation.
This pure virtual function must be implemented by derived classes to return a unique identifier string for the specific operation type. The name should be descriptive and consistent across framework components.
Implemented in Mila::Dnn::Compute::CpuCrossEntropyOp, Mila::Dnn::Compute::CpuEncoderOp, Mila::Dnn::Compute::CpuGeluOp, Mila::Dnn::Compute::CpuLayerNormOp, Mila::Dnn::Compute::CpuLinearOp, Mila::Dnn::Compute::CpuMultiHeadAttentionOp, Mila::Dnn::Compute::CpuResidualOp, Mila::Dnn::Compute::CpuSoftmaxOp, Mila::Dnn::Compute::CudaEncoderOp< TInput, TOutput >, Mila::Dnn::Compute::CudaGeluOp< TDataType >, Mila::Dnn::Compute::CudaLayerNormOp< TInput, TOutput >, Mila::Dnn::Compute::CudaLinearOp< TInput, TOutput >, Mila::Dnn::Compute::CudaMultiHeadAttentionOp< TInput, TOutput >, Mila::Dnn::Compute::CudaResidualOp< TInput, TOutput >, Mila::Dnn::Compute::CudaSoftmaxOp< TInput, TOutput >, Mila::Dnn::Compute::FusedSoftmaxCrossEntropyOp< TPrecision >, and Mila::Dnn::Compute::CudaMatMulBiasGeluOp< TInput, TOutput >.
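A typical use is logging or inspecting the nodes of a computation graph; the exact name strings returned by the concrete operations are framework-defined:

    #include <iostream>
    #include <memory>
    #include <vector>

    void printGraph( const std::vector<std::shared_ptr<OperationBase<float>>>& ops ) {
        for ( const auto& node : ops ) {
            std::cout << node->getName() << '\n';   // e.g. the names reported by CpuGeluOp, CudaLinearOp, ...
        }
    }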
OperationType getOperationType () const    [inline]
Gets the operation type enumeration value.
Returns the operation type that was specified during construction. This identifies the category of neural network operation being performed.
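A sketch of dispatching on the stored type; the enumerator name OperationType::LayerNormOp is an assumption used only for illustration:

    // 'op' is any existing operation instance.
    if ( op->getOperationType() == OperationType::LayerNormOp ) {   // assumed enumerator
        // apply handling specific to layer-normalization operations
    }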
std::shared_ptr< DeviceContext > device_context_    [private]
The device context for execution.
OperationType operation_type_    [private]
The operation type identifier.