Opm::cuistl::GPUAwareMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType > Class Template Reference

Derived class of GPUSender that handles MPI communication with CUDA-aware MPI. The copyOwnerToAll function uses MPI calls referring to data that resides on the GPU in order to send it directly to other GPUs, skipping the staging step on the CPU. More...
Inheritance diagram for Opm::cuistl::GPUAwareMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType > (diagram omitted)
Detailed Description

template<class field_type, int block_size, class OwnerOverlapCopyCommunicationType>
class Opm::cuistl::GPUAwareMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >

Derived class of GPUSender that handles MPI communication with CUDA-aware MPI. The copyOwnerToAll function uses MPI calls referring to data that resides on the GPU in order to send it directly to other GPUs, skipping the staging step on the CPU.
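The class relies on CUDA-aware MPI, where MPI calls accept device pointers directly. A minimal sketch of that idea, not taken from the OPM sources (exchangeWithPeer and the buffer names are hypothetical):

    #include <mpi.h>

    // Sketch: with a CUDA-aware MPI build, pointers to GPU memory
    // (allocated with cudaMalloc) can be handed straight to MPI,
    // so no cudaMemcpy staging through host memory is required.
    void exchangeWithPeer(double* d_send, double* d_recv, int n, int peer)
    {
        MPI_Request reqs[2];
        MPI_Isend(d_send, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(d_recv, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

This is exactly the step that a GPU-oblivious sender would have to bracket with device-to-host and host-to-device copies.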
Member Typedef Documentation

◆ X
template<class field_type , int block_size, class OwnerOverlapCopyCommunicationType >
Constructor & Destructor Documentation

◆ GPUAwareMPISender()
template<class field_type , int block_size, class OwnerOverlapCopyCommunicationType >
Member Function Documentation

◆ copyOwnerToAll()
template<class field_type , int block_size, class OwnerOverlapCopyCommunicationType >
copyOwnerToAll distributes the values at owner indices in source to all processes holding copies, writing the result to dest. In this GPU-aware sender the MPI calls operate directly on the GPU-resident data, so no staging copy through the CPU is needed (the CPU round trip described for the base interface applies to the GPU-oblivious sender instead).
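Whether such direct GPU sends actually work depends on the MPI build. With Open MPI, for instance, CUDA awareness can be probed at runtime; a sketch using Open MPI's extension header (other MPI implementations expose this differently, if at all):

    #include <mpi.h>
    #if defined(OPEN_MPI) && OPEN_MPI
    #include <mpi-ext.h> // Open MPI extension header with MPIX_Query_cuda_support
    #endif

    // Sketch: true if the MPI library reports CUDA-aware support.
    bool mpiIsCudaAware()
    {
    #if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
        return MPIX_Query_cuda_support() == 1;
    #else
        return false; // unknown, or MPI was built without CUDA support
    #endif
    }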
Implements Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >.

References Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::m_cpuOwnerOverlapCopy, Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::m_initializedIndices, and Opm::cuistl::detail::to_int().

◆ dot()
template<class field_type , class OwnerOverlapCopyCommunicationType >
dot will carry out the dot product between x and y on the owned indices, then sum up the result across MPI processes.
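A minimal sketch of the pattern described here, under the assumption of a plain index list of owned entries (parallelDot and the data layout are illustrative, not the class's code):

    #include <mpi.h>
    #include <vector>

    // Sketch: dot product restricted to owned indices, then summed
    // across all MPI ranks with an all-reduce.
    double parallelDot(const std::vector<double>& x,
                       const std::vector<double>& y,
                       const std::vector<int>& ownedIndices)
    {
        double local = 0.0;
        for (int i : ownedIndices) {
            local += x[i] * y[i]; // only owned entries contribute
        }
        double global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        return global;
    }

Restricting the local sum to owned indices ensures each entry is counted exactly once, even though overlap entries are replicated on several ranks.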
References Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::initIndexSet(), Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::m_cpuOwnerOverlapCopy, Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::m_indicesOwner, and Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::m_initializedIndices.

Referenced by Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::norm().

◆ norm()
template<class field_type , class OwnerOverlapCopyCommunicationType >
norm computes the l^2-norm of x across processes. This will compute the dot product of x with itself on owned indices, then sum the result across processes and return the square root of the sum.

References Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::dot().
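In terms of the parallelDot sketch above, the norm reduces to one line (again a sketch, not the member's actual signature):

    #include <cmath>

    // Sketch: l^2-norm via the MPI-reduced dot product of x with itself.
    double parallelNorm(const std::vector<double>& x,
                        const std::vector<int>& ownedIndices)
    {
        return std::sqrt(parallelDot(x, x, ownedIndices));
    }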
◆ project()

template<class field_type , class OwnerOverlapCopyCommunicationType >
project will project x onto the owned subspace. For each component i that is not owned, x_i will be set to 0.
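A sketch of that masking, assuming a boolean ownership mask (the actual class derives ownership from its index sets, m_indicesCopy and m_indicesOwner):

    #include <cstddef>
    #include <vector>

    // Sketch: zero every component this rank does not own.
    void projectToOwned(std::vector<double>& x, const std::vector<bool>& isOwned)
    {
        for (std::size_t i = 0; i < x.size(); ++i) {
            if (!isOwned[i]) {
                x[i] = 0.0; // non-owned (copy/overlap) entries are masked out
            }
        }
    }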
References Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::initIndexSet(), Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::m_indicesCopy, and Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::m_initializedIndices.

Member Data Documentation

◆ m_cpuOwnerOverlapCopy
template<class field_type , class OwnerOverlapCopyCommunicationType >
Referenced by Opm::cuistl::GPUObliviousMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >::copyOwnerToAll(), Opm::cuistl::GPUAwareMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >::copyOwnerToAll(), and Opm::cuistl::GPUSender< field_type, OwnerOverlapCopyCommunicationType >::dot().

◆ m_indicesCopy
template<class field_type , class OwnerOverlapCopyCommunicationType >
◆ m_indicesOwner
template<class field_type , class OwnerOverlapCopyCommunicationType >
◆ m_initializedIndices
template<class field_type , class OwnerOverlapCopyCommunicationType >
The documentation for this class was generated from the following file: