Mila
Deep Neural Network Library
Mila::Dnn::Compute::DeviceContext Class Reference

The DeviceContext class manages device contexts for module and tensor computations. More...

Public Member Functions

 DeviceContext (const DeviceContext &)=delete
 Copy constructor (deleted).
 
 DeviceContext (const std::string &device_name)
 Constructor with a specific device.
 
 DeviceContext (DeviceContext &&other) noexcept
 Move constructor.
 
 ~DeviceContext ()
 Destructor.
 
std::pair< int, int > getComputeCapability () const
 Gets the compute capability of the current CUDA device.
 
cublasLtHandle_t getCublasLtHandle ()
 Gets the cuBLASLt handle, initializing it if necessary.
 
std::shared_ptr< ComputeDevice > getDevice () const
 Gets the current device.
 
int getDeviceId () const
 Gets the ID of the current CUDA device.
 
cudaStream_t getStream () const
 Gets the current CUDA stream.
 
bool isCudaDevice () const
 Checks if the current device is a CUDA device.
 
bool isDeviceType (DeviceType type) const
 Checks if the current device is of a specific type.
 
void makeCurrent () const
 Sets the current device as active in the current thread.
 
DeviceContext & operator= (const DeviceContext &)=delete
 Copy assignment operator (deleted).
 
DeviceContext & operator= (DeviceContext &&other) noexcept
 Move assignment operator.
 
void synchronize ()
 Synchronizes the device, waiting for all operations to complete.
 

Private Member Functions

void initializeDeviceResources ()
 Initializes resources specific to the current device.
 
void moveFrom (DeviceContext &&other)
 Moves resources from another DeviceContext.
 
void releaseResources ()
 Releases all device-specific resources.
 
void setDevice (const std::string &device_name)
 Sets the current device by name.
 

Private Attributes

cublasLtHandle_t cublasLtHandle_ = nullptr
 Handle for cuBLASLt operations.
 
std::shared_ptr< ComputeDevice > device_
 The compute device used by this context.
 
int device_id_ = -1
 The CUDA device ID; -1 indicates uninitialized.
 
std::mutex handle_mutex_
 Mutex for thread-safe handle initialization.
 
cudaStream_t stream_ = nullptr
 The CUDA stream for asynchronous operations.
 
bool stream_created_ = false
 Indicates if the stream was created by this context and needs to be destroyed.
 

Detailed Description

The DeviceContext class manages device contexts for module and tensor computations.

This class provides functionality for managing compute devices and their associated resources, such as CUDA streams and optional cuBLASLt and cuDNN handles. Multiple instances can be created to manage different devices.
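A minimal usage sketch (the include path is an assumption; only members documented on this page are used):

#include <iostream>
#include <stdexcept>
#include "Mila/Dnn/Compute/DeviceContext.h"   // include path is illustrative

using Mila::Dnn::Compute::DeviceContext;

int main() {
    try {
        // Bind a context to the first CUDA device.
        DeviceContext cuda_ctx( "CUDA:0" );

        if ( cuda_ctx.isCudaDevice() ) {
            auto [major, minor] = cuda_ctx.getComputeCapability();
            std::cout << "CUDA device " << cuda_ctx.getDeviceId()
                      << " has compute capability " << major << "." << minor << "\n";
        }

        // Contexts for different devices can coexist.
        DeviceContext cpu_ctx( "CPU" );

        // Wait for all work queued on the CUDA context's stream.
        cuda_ctx.synchronize();
    }
    catch ( const std::runtime_error& e ) {
        std::cerr << "Device initialization failed: " << e.what() << "\n";
    }
}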

Constructor & Destructor Documentation

◆ DeviceContext() [1/3]

Mila::Dnn::Compute::DeviceContext::DeviceContext ( const std::string &  device_name)
inline explicit

Constructor with a specific device.

Parameters
device_name  The name of the device to use (e.g., "CUDA:0", "CPU").
Exceptions
std::runtime_error  If the device name is invalid or device initialization fails.

◆ ~DeviceContext()

Mila::Dnn::Compute::DeviceContext::~DeviceContext ( )
inline

Destructor.

Cleans up any associated resources.


◆ DeviceContext() [2/3]

Mila::Dnn::Compute::DeviceContext::DeviceContext ( const DeviceContext & )
delete

Copy constructor (deleted).

Note
DeviceContext is not copyable due to unique resource ownership.

◆ DeviceContext() [3/3]

Mila::Dnn::Compute::DeviceContext::DeviceContext ( DeviceContext &&  other)
inline noexcept

Move constructor.

Parameters
other  The source DeviceContext to move from.

Member Function Documentation

◆ getComputeCapability()

std::pair< int, int > Mila::Dnn::Compute::DeviceContext::getComputeCapability ( ) const
inline

Gets the compute capability of the current CUDA device.

Returns
std::pair<int, int> The major and minor versions of the compute capability, or {0,0} if the device is not a CUDA device or compute capability couldn't be determined.
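A small sketch showing how the returned pair might drive a kernel selection decision (the SM 8.0 threshold is an illustrative choice, not something this class prescribes):

DeviceContext ctx( "CUDA:0" );
auto [major, minor] = ctx.getComputeCapability();

// {0,0} means the device is not CUDA or the capability could not be queried.
bool has_capability = !( major == 0 && minor == 0 );
bool prefer_bf16 = has_capability && major >= 8;   // Ampere or newer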

◆ getCublasLtHandle()

cublasLtHandle_t Mila::Dnn::Compute::DeviceContext::getCublasLtHandle ( )
inline

Gets the cuBLASLt handle, initializing it if necessary.

Returns
The cuBLASLt handle.
Exceptions
std::runtime_error  If creating the cuBLASLt handle fails.
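A sketch of pairing the lazily created handle with standard cuBLASLt API calls (the descriptor setup below uses plain cuBLASLt functions and is not part of DeviceContext):

#include <cublasLt.h>
#include <stdexcept>

DeviceContext ctx( "CUDA:0" );
cublasLtHandle_t lt = ctx.getCublasLtHandle();      // created on first call, reused afterwards

cublasLtMatmulDesc_t op_desc = nullptr;
if ( cublasLtMatmulDescCreate( &op_desc, CUBLAS_COMPUTE_32F, CUDA_R_32F ) != CUBLAS_STATUS_SUCCESS ) {
    throw std::runtime_error( "cublasLtMatmulDescCreate failed" );
}
// ... describe matrix layouts and call cublasLtMatmul() with lt on ctx.getStream() ...
cublasLtMatmulDescDestroy( op_desc );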

◆ getDevice()

std::shared_ptr< ComputeDevice > Mila::Dnn::Compute::DeviceContext::getDevice ( ) const
inline

Gets the current device.

Returns
A shared pointer to the current device.

◆ getDeviceId()

int Mila::Dnn::Compute::DeviceContext::getDeviceId ( ) const
inline

Gets the ID of the current CUDA device.

Returns
The CUDA device ID, or -1 if not using a CUDA device.

◆ getStream()

cudaStream_t Mila::Dnn::Compute::DeviceContext::getStream ( ) const
inline

Gets the current CUDA stream.

Returns
The current CUDA stream, or nullptr if not using CUDA.
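A sketch of enqueuing asynchronous work on the context's stream with standard CUDA runtime calls:

#include <cuda_runtime.h>
#include <vector>

DeviceContext ctx( "CUDA:0" );
ctx.makeCurrent();                                   // make sure the context's device is active

std::vector<float> host( 1024, 1.0f );
float* device_ptr = nullptr;
cudaMalloc( &device_ptr, host.size() * sizeof( float ) );

cudaMemcpyAsync( device_ptr, host.data(), host.size() * sizeof( float ),
                 cudaMemcpyHostToDevice, ctx.getStream() );

ctx.synchronize();                                   // block until the copy has finished
cudaFree( device_ptr );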

◆ initializeDeviceResources()

void Mila::Dnn::Compute::DeviceContext::initializeDeviceResources ( )
inline private

Initializes resources specific to the current device.

For CUDA devices, this retrieves the device ID, sets the device as current, and creates a CUDA stream.


◆ isCudaDevice()

bool Mila::Dnn::Compute::DeviceContext::isCudaDevice ( ) const
inline

Checks if the current device is a CUDA device.

Returns
True if the device is a CUDA device, false otherwise.

◆ isDeviceType()

bool Mila::Dnn::Compute::DeviceContext::isDeviceType ( DeviceType  type) const
inline

Checks if the current device is of a specific type.

Parameters
type  The device type to check against.
Returns
True if the device matches the specified type, false otherwise.
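A dispatch sketch; the enumerator names DeviceType::Cuda and DeviceType::Cpu are assumptions and should be replaced by whatever the DeviceType enum actually defines:

void dispatch( const DeviceContext& ctx ) {
    if ( ctx.isDeviceType( DeviceType::Cuda ) ) {
        // launch CUDA kernels on ctx.getStream()
    }
    else if ( ctx.isDeviceType( DeviceType::Cpu ) ) {
        // run the CPU reference implementation
    }
}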

◆ makeCurrent()

void Mila::Dnn::Compute::DeviceContext::makeCurrent ( ) const
inline

Sets the current device as active in the current thread.

This method ensures that subsequent CUDA operations execute on the correct device by setting the current device on the calling thread whenever it differs from the previously set device. It tracks the currently active device per thread, so redundant device switches are avoided.

Note
This method is thread-safe and optimized for multi-threaded environments.
Exceptions
std::runtime_error  If setting the CUDA device fails.
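A sketch of a worker thread that activates the context's device before issuing raw CUDA runtime calls that depend on the thread's current device:

#include <cuda_runtime.h>
#include <thread>

void worker( DeviceContext& ctx ) {
    ctx.makeCurrent();                    // cheap no-op if the device is already current here

    void* buffer = nullptr;
    cudaMalloc( &buffer, 1 << 20 );       // allocated on the context's device
    cudaMemsetAsync( buffer, 0, 1 << 20, ctx.getStream() );
    ctx.synchronize();
    cudaFree( buffer );
}

// std::thread t( worker, std::ref( ctx ) ); t.join();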

◆ moveFrom()

void Mila::Dnn::Compute::DeviceContext::moveFrom ( DeviceContext &&  other)
inline private

Moves resources from another DeviceContext.

Parameters
other  The DeviceContext to move resources from.

◆ operator=() [1/2]

DeviceContext & Mila::Dnn::Compute::DeviceContext::operator= ( const DeviceContext & )
delete

Copy assignment operator (deleted).

Note
DeviceContext is not copyable due to unique resource ownership.

◆ operator=() [2/2]

DeviceContext & Mila::Dnn::Compute::DeviceContext::operator= ( DeviceContext &&  other)
inline noexcept

Move assignment operator.

Parameters
other  The source DeviceContext to move from.
Returns
A reference to this DeviceContext.
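Because the copy operations are deleted, ownership is transferred with std::move; a brief sketch:

#include <utility>
#include <vector>

DeviceContext ctx( "CUDA:0" );

std::vector<DeviceContext> contexts;
contexts.push_back( std::move( ctx ) );   // move construction into the container
// contexts.push_back( ctx );             // would not compile: the copy constructor is deleted

DeviceContext other( "CPU" );
other = std::move( contexts.back() );     // move assignment transfers the device, stream, and handles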

◆ releaseResources()

void Mila::Dnn::Compute::DeviceContext::releaseResources ( )
inline private

Releases all device-specific resources.

Frees CUDA streams and library handles when applicable.


◆ setDevice()

void Mila::Dnn::Compute::DeviceContext::setDevice ( const std::string &  device_name)
inline private

Sets the current device by name.

Parameters
device_name  The name of the device to set.
Exceptions
std::runtime_error  If the device name is invalid or device initialization fails.

◆ synchronize()

void Mila::Dnn::Compute::DeviceContext::synchronize ( )
inline

Synchronizes the device, waiting for all operations to complete.

When using a CUDA device, this method ensures the current device is active and then synchronizes the CUDA stream, waiting for all enqueued operations to complete.
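A timing sketch that relies on synchronize() to include asynchronously queued work in the measurement (the buffer size and workload are arbitrary):

#include <chrono>
#include <cuda_runtime.h>

DeviceContext ctx( "CUDA:0" );
void* buf = nullptr;
cudaMalloc( &buf, 1 << 24 );

auto t0 = std::chrono::steady_clock::now();
cudaMemsetAsync( buf, 0, 1 << 24, ctx.getStream() );   // returns immediately
ctx.synchronize();                                      // blocks until the memset completes
auto t1 = std::chrono::steady_clock::now();

double ms = std::chrono::duration<double, std::milli>( t1 - t0 ).count();
cudaFree( buf );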


Member Data Documentation

◆ cublasLtHandle_

cublasLtHandle_t Mila::Dnn::Compute::DeviceContext::cublasLtHandle_ = nullptr
mutable private

Handle for cuBLASLt operations.

◆ device_

std::shared_ptr<ComputeDevice> Mila::Dnn::Compute::DeviceContext::device_
private

The compute device used by this context.

◆ device_id_

int Mila::Dnn::Compute::DeviceContext::device_id_ = -1
private

The CUDA device ID; -1 indicates uninitialized.

◆ handle_mutex_

std::mutex Mila::Dnn::Compute::DeviceContext::handle_mutex_
mutable private

Mutex for thread-safe handle initialization.

◆ stream_

cudaStream_t Mila::Dnn::Compute::DeviceContext::stream_ = nullptr
private

The CUDA stream for asynchronous operations.

◆ stream_created_

bool Mila::Dnn::Compute::DeviceContext::stream_created_ = false
private

Indicates if the stream was created by this context and needs to be destroyed.


The documentation for this class was generated from the following file: