Mila
Deep Neural Network Library
Mila::Dnn::Compute::DynamicMemoryResource Class Reference

A class that represents a dynamically-determined memory resource. More...

Inheritance diagram for Mila::Dnn::Compute::DynamicMemoryResource:
Collaboration diagram for Mila::Dnn::Compute::DynamicMemoryResource:

Public Member Functions

 DynamicMemoryResource (Compute::DeviceType device_type=Compute::DeviceType::Cuda)
 Constructs a DynamicMemoryResource based on the device type.
 
bool is_device_accessible () const noexcept
 Checks if the memory resource is device-accessible.
 
bool is_host_accessible () const noexcept
 Checks if the memory resource is host-accessible.
 

Protected Member Functions

void * do_allocate (std::size_t size, std::size_t alignment) override
 Allocates memory of the specified size with the given alignment.
 
void do_deallocate (void *ptr, std::size_t size, std::size_t alignment) override
 Deallocates previously allocated memory.
 
bool do_is_equal (const std::pmr::memory_resource &other) const noexcept override
 Checks if this memory resource is equal to another memory resource.
 

Private Attributes

std::variant< Compute::CpuMemoryResource, Compute::CudaMemoryResource > resource_variant_
 

Detailed Description

A class that represents a dynamically-determined memory resource.

This class serves as an adapter between the runtime selection of memory resources (via DeviceContext) and the compile-time requirements of the Tensor class, which requires a specific memory resource type rather than a variant.

Constructor & Destructor Documentation

◆ DynamicMemoryResource()

Mila::Dnn::Compute::DynamicMemoryResource::DynamicMemoryResource ( Compute::DeviceType device_type = Compute::DeviceType::Cuda )
inline explicit

Constructs a DynamicMemoryResource based on the device type.

Parameters
device_type: The type of device to create the memory resource for.

Member Function Documentation

◆ do_allocate()

void * Mila::Dnn::Compute::DynamicMemoryResource::do_allocate ( std::size_t size, std::size_t alignment )
inline override protected

Allocates memory of the specified size with the given alignment.

This delegates to the appropriate memory resource based on the device type.

Parameters
size: The size in bytes to allocate.
alignment: The alignment requirement for the allocation.
Returns
void* Pointer to the allocated memory.

◆ do_deallocate()

void Mila::Dnn::Compute::DynamicMemoryResource::do_deallocate ( void * ptr, std::size_t size, std::size_t alignment )
inline override protected

Deallocates previously allocated memory.

This delegates to the appropriate memory resource based on the device type.

Parameters
ptr: Pointer to the memory to deallocate.
size: The size in bytes of the allocation.
alignment: The alignment of the allocation.

◆ do_is_equal()

bool Mila::Dnn::Compute::DynamicMemoryResource::do_is_equal ( const std::pmr::memory_resource & other ) const
inline override protected noexcept

Checks if this memory resource is equal to another memory resource.

Parameters
other: The other memory resource to compare with.
Returns
bool True if the memory resources are equal, false otherwise.

◆ is_device_accessible()

bool Mila::Dnn::Compute::DynamicMemoryResource::is_device_accessible ( ) const
inline noexcept

Checks if the memory resource is device-accessible.

Returns
bool True if the memory is accessible from the device (GPU).

◆ is_host_accessible()

bool Mila::Dnn::Compute::DynamicMemoryResource::is_host_accessible ( ) const
inline noexcept

Checks if the memory resource is host-accessible.

Returns
bool True if the memory is accessible from the host (CPU).

Member Data Documentation

◆ resource_variant_

std::variant<Compute::CpuMemoryResource, Compute::CudaMemoryResource> Mila::Dnn::Compute::DynamicMemoryResource::resource_variant_
private

The documentation for this class was generated from the following file: