Dune::cpgrid Namespace Reference

Namespaces

namespace  mover
 

Classes

class  Cell2FacesContainer
 
class  Cell2FacesRow
 
class  CombinedGridWellGraph
 A graph representing a grid together with the well completions. More...
 
class  CpGridData
 Struct that holds all the data needed to represent a CpGrid. More...
 
struct  CpGridDataTraits
 
class  DefaultGeometryPolicy
 
class  Entity
 
class  Entity2IndexDataHandle
 Wrapper that turns a data handle suitable for dune-grid into one based on integers instead of entities. More...
 
class  EntityRep
 Represents an entity of a given codim, with positive or negative orientation. More...
 
class  EntityVariable
 A class designed to hold a variable with a value for each entity of the given codimension, where the variable does not change sign with orientation. Examples include pressures and positions. More...
 
class  EntityVariableBase
 Base class for EntityVariable and SignedEntityVariable. Forwards a restricted subset of the std::vector interface. More...
 
class  FaceCellsContainerProxy
 A class representing the face to cells mapping, similar to the way it is done in UnstructuredGrid. More...
 
class  FaceCellsProxy
 A proxy class representing a row of FaceCellsContainer. More...
 
class  FaceVerticesContainerProxy
 A class representing the face to vertices mapping, similar to the way it is done in UnstructuredGrid. More...
 
struct  FaceViaCellHandleWrapper
 A data handle to send data attached to faces via cell communication. More...
 
class  Geometry
 
class  Geometry< 0, cdim >
 Specialization for 0 dimensional geometries, i.e. vertices. More...
 
class  Geometry< 2, cdim >
 
class  Geometry< 3, cdim >
 Specialization for 3-dimensional geometries, i.e. cells. More...
 
class  GlobalIdMapping
 Class managing the mappings of local indices to global ids. More...
 
class  GlobalIdSet
 The global id set for Dune. More...
 
class  HierarchicIterator
 Only needs to provide interface for doing nothing. More...
 
class  IdSet
 
class  IndexIterator
 
class  IndexSet
 
class  Intersection
 
class  IntersectionIterator
 
class  Iterator
 
class  LevelGlobalIdSet
 
class  LocalIndexContainerProxy
 A class representing the sparse mapping of entity relations (e.g. vertices of faces). More...
 
class  LocalIndexProxy
 A proxy class representing a row of LocalIndexContainerProxy. More...
 
class  MutableOrientedEntityRange
 A class used as a row type for OrientedEntityTable. More...
 
class  OrientedEntityRange
 A class used as a row type for OrientedEntityTable. More...
 
class  OrientedEntityTable
 Represents the topological relationships between sets of entities, for example cells and faces. More...
 
struct  PartitionIteratorRule
 
struct  PartitionIteratorRule< All_Partition >
 
struct  PartitionIteratorRule< Interior_Partition >
 
struct  PartitionIteratorRule< InteriorBorder_Partition >
 
struct  PartitionIteratorRule< Overlap_Partition >
 
struct  PartitionIteratorRule< OverlapFront_Partition >
 
class  PartitionTypeIndicator
 
struct  PointViaCellHandleWrapper
 A data handle to send data attached to points via cell communication. More...
 
struct  PointViaCellWarner
 
class  ReversePointGlobalIdSet
 
class  SignedEntityVariable
 A class designed to hold a variable with a value for each entity of the given codimension, where the variable changes sign with orientation. An example is a velocity field. More...
 
class  WellConnections
 A class calculating and representing all connections of wells. More...
 

Typedefs

using OpmWellType = int
 

Functions

std::tuple< std::vector< int >, std::vector< std::pair< std::string, bool > >, std::vector< std::tuple< int, int, char > >, std::vector< std::tuple< int, int, char, int > >, WellConnections > createZoltanListsFromParts (const CpGrid &grid, const std::vector< cpgrid::OpmWellType > *wells, const double *transmissibilities, const std::vector< int > &parts, bool allowDistributedWells)
 Creates lists as Zoltan would return them for a vanilla / user-specified partitioning. More...
 
std::tuple< std::vector< int >, std::vector< std::pair< std::string, bool > >, std::vector< std::tuple< int, int, char > >, std::vector< std::tuple< int, int, char, int > >, WellConnections > vanillaPartitionGridOnRoot (const CpGrid &grid, const std::vector< cpgrid::OpmWellType > *wells, const double *transmissibilities, bool allowDistributedWells)
 Creates a vanilla partitioning without a real load balancer. More...
 
std::vector< std::vector< int > > perforatingWellIndicesOnProc (const std::vector< int > &parts, const std::vector< Dune::cpgrid::OpmWellType > &wells, const CpGrid &cpgrid)
 Determines the wells that perforate cells on each process. More...
 
std::vector< std::vector< int > > postProcessPartitioningForWells (std::vector< int > &parts, std::function< int(int)> gid, const std::vector< OpmWellType > &wells, const WellConnections &well_connections, const std::vector< std::set< int > > &wellGraph, std::vector< std::tuple< int, int, char > > &exportList, std::vector< std::tuple< int, int, char, int > > &importList, const Communication< MPI_Comm > &cc)
 Computes wells assigned to processes. More...
 
std::vector< std::pair< std::string, bool > > computeParallelWells (const std::vector< std::vector< int > > &wells_on_proc, const std::vector< OpmWellType > &wells, const Communication< MPI_Comm > &cc, int root)
 Computes whether wells are perforating cells on this process. More...
 
template<class Id >
std::tuple< std::vector< int >, std::vector< std::pair< std::string, bool > >, std::vector< std::tuple< int, int, char > >, std::vector< std::tuple< int, int, char, int > >, WellConnections > makeImportAndExportLists (const Dune::CpGrid &cpgrid, const Dune::Communication< MPI_Comm > &cc, const std::vector< Dune::cpgrid::OpmWellType > *wells, const Dune::cpgrid::CombinedGridWellGraph *gridAndWells, int root, int numExport, int numImport, const Id *exportLocalGids, const Id *exportGlobalGids, const int *exportToPart, const Id *importGlobalGids, bool allowDistributedWells=false)
 Creates all the information needed from the partitioning. More...
 
template<class Id >
std::tuple< int, std::vector< Id > > scatterExportInformation (int numExport, const Id *exportGlobalGids, const int *exportToPart, int root, const Dune::Communication< MPI_Comm > &cc)
 

Typedef Documentation

◆ OpmWellType

using Dune::cpgrid::OpmWellType = typedef int

Function Documentation

◆ computeParallelWells()

std::vector< std::pair< std::string, bool > > Dune::cpgrid::computeParallelWells ( const std::vector< std::vector< int > > &  wells_on_proc,
const std::vector< OpmWellType > &  wells,
const Communication< MPI_Comm > &  cc,
int  root 
)

Computes whether wells are perforating cells on this process.

Parameters
wells_on_proc	Well indices assigned to each process.
wells	The eclipse information about the wells.
cc	The communicator.
root	The rank of the process that has the complete partitioning information.
Returns
Vector of pairs of well name and a boolean indicating whether the well with this name perforates cells here. Sorted by well name!
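Because the result is sorted by well name, a lookup can use binary search instead of a linear scan. A minimal, hypothetical sketch: the data and the helper isLocalWell are made up for illustration; only the documented return type std::vector< std::pair< std::string, bool > > is taken from this page.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Hypothetical result of computeParallelWells(): pairs of (well name,
// perforates-cells-on-this-process flag), sorted by well name.
const std::vector<std::pair<std::string, bool>> parallel_wells = {
    {"INJ1", false}, {"PROD1", true}, {"PROD2", false}};

// The sort order allows a binary search to test whether a named well
// perforates cells on this process.
bool isLocalWell(const std::vector<std::pair<std::string, bool>>& wells,
                 const std::string& name)
{
    auto it = std::lower_bound(
        wells.begin(), wells.end(), name,
        [](const std::pair<std::string, bool>& entry, const std::string& n) {
            return entry.first < n;
        });
    return it != wells.end() && it->first == name && it->second;
}
```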

◆ createZoltanListsFromParts()

std::tuple< std::vector< int >, std::vector< std::pair< std::string, bool > >, std::vector< std::tuple< int, int, char > >, std::vector< std::tuple< int, int, char, int > >, WellConnections > Dune::cpgrid::createZoltanListsFromParts ( const CpGrid &  grid,
const std::vector< cpgrid::OpmWellType > *  wells,
const double *  transmissibilities,
const std::vector< int > &  parts,
bool  allowDistributedWells 
)

Creates lists as Zoltan would return them for a vanilla / user-specified partitioning.

Parameters
grid	The grid.
wells	Pointer to a vector with all possible wells (all perforations) of the problem; nullptr is allowed.
transmissibilities	C-array with transmissibilities, or nullptr.
parts	The partitioning information. Contains for each cell the partition number (zero-based, consecutive).
allowDistributedWells	Whether wells may be distributed to the interior of multiple processes.
Returns
A tuple consisting of:
- a vector that contains, for each local cell of the original grid, the number of the process that owns it after repartitioning;
- a vector of pairs of a well name and a boolean indicating whether that well has perforated cells local to the process;
- a vector containing information for each exported cell (global id of the cell, process id to send to, attribute there);
- a vector containing information for each imported cell (global index, process id that sends, attribute here, local index here);
- a WellConnections object containing information about the well connections (if the wells argument was not null and this is the root rank, this will contain connections in the form of global indices).
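The shape of the returned tuple can be sketched with standard-library types alone. In this hypothetical example WellConnections is replaced by an empty placeholder struct and all values are made up; the element types mirror the signature above.

```cpp
#include <string>
#include <tuple>
#include <utility>
#include <vector>

// Placeholder for the real Dune::cpgrid::WellConnections class.
struct WellConnectionsStub {};

using ZoltanLists = std::tuple<
    std::vector<int>,                             // owner rank per local cell
    std::vector<std::pair<std::string, bool>>,    // (well name, perforates here)
    std::vector<std::tuple<int, int, char>>,      // export: gid, target rank, attribute
    std::vector<std::tuple<int, int, char, int>>, // import: gid, source rank, attribute, lid
    WellConnectionsStub>;                         // well connections (root rank only)

// Hypothetical result for a 4-cell grid split between ranks 0 and 1.
ZoltanLists exampleLists()
{
    ZoltanLists lists;
    std::get<0>(lists) = {0, 0, 1, 1};               // cells 2 and 3 move to rank 1
    std::get<2>(lists) = {{2, 1, 'o'}, {3, 1, 'o'}}; // exported as owner cells
    return lists;
}
```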

◆ makeImportAndExportLists()

template<class Id >
std::tuple< std::vector< int >, std::vector< std::pair< std::string, bool > >, std::vector< std::tuple< int, int, char > >, std::vector< std::tuple< int, int, char, int > >, WellConnections > Dune::cpgrid::makeImportAndExportLists ( const Dune::CpGrid &  cpgrid,
const Dune::Communication< MPI_Comm > &  cc,
const std::vector< Dune::cpgrid::OpmWellType > *  wells,
const Dune::cpgrid::CombinedGridWellGraph *  gridAndWells,
int  root,
int  numExport,
int  numImport,
const Id *  exportLocalGids,
const Id *  exportGlobalGids,
const int *  exportToPart,
const Id *  importGlobalGids,
bool  allowDistributedWells = false 
)

Creates all the information needed from the partitioning.

Note that if wells is not null and allowDistributedWells is false, then the partitioning is postprocessed such that all cells that are perforated by a well are part of the interior of exactly one process. Otherwise they might be spread between multiple processes.

Parameters
cpgrid	The grid.
cc	The MPI communicator used for the partitioning.
wells	Pointer to a vector with all possible wells (all perforations) of the problem; nullptr is allowed.
gridAndWells	Graph representing the grid and wells (must match the other parameters!).
root	The rank that has the whole grid before load balancing.
numExport	The number of exported cells (created e.g. by Zoltan).
numImport	The number of imported cells (created e.g. by Zoltan).
exportLocalGids	C-array of the local ids of exported cells (created e.g. by Zoltan).
exportGlobalGids	C-array of the global ids of exported cells (created e.g. by Zoltan).
exportToPart	C-array with the target partition of exported cells (created e.g. by Zoltan).
importGlobalGids	C-array of the global ids of imported cells (created e.g. by Zoltan).
allowDistributedWells	Whether wells may be distributed to the interior of multiple processes.
Returns
A tuple consisting of:
- a vector that contains, for each local cell of the original grid, the number of the process that owns it after repartitioning;
- a vector of pairs of a well name and a boolean indicating whether that well has perforated cells local to the process;
- a vector containing information for each exported cell (global id of the cell, process id to send to, attribute there);
- a vector containing information for each imported cell (global index, process id that sends, attribute here, local index here);
- a WellConnections object containing information about the well connections (if the wells argument was not null and this is the root rank, this will contain connections in the form of global indices).
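The well postprocessing described above implies a simple invariant: when allowDistributedWells is false, every cell perforated by a given well must end up on the same rank. A hypothetical sketch of such a check, with made-up data (wellOnSingleRank is not part of the library):

```cpp
#include <set>
#include <vector>

// Returns true if all cells perforated by one well are assigned to a single
// process in the partition vector (parts[cell] is the owner rank of that cell).
bool wellOnSingleRank(const std::vector<int>& parts,
                      const std::vector<int>& wellCells)
{
    std::set<int> ranks;
    for (int cell : wellCells)
        ranks.insert(parts.at(cell));
    return ranks.size() <= 1;
}
```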

◆ perforatingWellIndicesOnProc()

std::vector< std::vector< int > > Dune::cpgrid::perforatingWellIndicesOnProc ( const std::vector< int > &  parts,
const std::vector< Dune::cpgrid::OpmWellType > &  wells,
const CpGrid &  cpgrid 
)

Determines the wells that perforate cells on each process.

On the root process, computes for all processes the indices of all wells that will perforate local cells. Note that a well might perforate local cells of multiple processes.

Parameters
parts	The partition number for each cell.
wells	The eclipse information about the wells.
cpgrid	The unbalanced grid we compute on.
Returns
On the rank that has the global grid, a vector with the well indices for process i at index i.
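The nested-vector layout of the result can be illustrated with plain standard-library code. The helper below is hypothetical; it only demonstrates that a well crossing a partition boundary appears in the lists of several processes.

```cpp
#include <vector>

// wellsOnProc[p] holds the indices of all wells perforating cells owned by
// process p, as returned on the root rank. Counts how many processes a given
// well perforates cells on.
int ranksPerforatedBy(const std::vector<std::vector<int>>& wellsOnProc, int well)
{
    int count = 0;
    for (const auto& wells : wellsOnProc)
        for (int w : wells)
            if (w == well)
                ++count;
    return count;
}
```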

◆ postProcessPartitioningForWells()

std::vector< std::vector< int > > Dune::cpgrid::postProcessPartitioningForWells ( std::vector< int > &  parts,
std::function< int(int)>  gid,
const std::vector< OpmWellType > &  wells,
const WellConnections &  well_connections,
const std::vector< std::set< int > > &  wellGraph,
std::vector< std::tuple< int, int, char > > &  exportList,
std::vector< std::tuple< int, int, char, int > > &  importList,
const Communication< MPI_Comm > &  cc 
)

Computes wells assigned to processes.

Computes, for each process, the indices of all wells that will be assigned to it.

Parameters
parts	The partition number for each cell.
gid	Functor that turns a cell index into a global id.
wells	The eclipse information about the wells.
well_connections	The information about the perforations of each well.
exportList	List of cells to be exported. Each entry is a tuple of the global cell index, the process rank to export to, and the attribute on that rank (assumed to be owner).
importList	List of cells to be imported. Each entry is a tuple of the global cell index, the process rank that exports it, and the attribute on this rank (assumed to be owner).
cc	Information about the parallelism together with the decomposition.
Returns
On rank 0, a vector with the well indices for process i at index i.

◆ scatterExportInformation()

template<class Id >
std::tuple< int, std::vector< Id > > Dune::cpgrid::scatterExportInformation ( int  numExport,
const Id *  exportGlobalGids,
const int *  exportToPart,
int  root,
const Dune::Communication< MPI_Comm > &  cc 
)

◆ vanillaPartitionGridOnRoot()

std::tuple< std::vector< int >, std::vector< std::pair< std::string, bool > >, std::vector< std::tuple< int, int, char > >, std::vector< std::tuple< int, int, char, int > >, WellConnections > Dune::cpgrid::vanillaPartitionGridOnRoot ( const CpGrid &  grid,
const std::vector< cpgrid::OpmWellType > *  wells,
const double *  transmissibilities,
bool  allowDistributedWells 
)

Creates a vanilla partitioning without a real load balancer.

The load balancing will take place on the Cartesian grid.

Parameters
grid	The grid.
wells	Pointer to a vector with all possible wells (all perforations) of the problem; nullptr is allowed.
transmissibilities	C-array with transmissibilities, or nullptr.
allowDistributedWells	Whether wells may be distributed to the interior of multiple processes.
Returns
A tuple consisting of:
- a vector that contains, for each local cell of the original grid, the number of the process that owns it after repartitioning;
- a vector of pairs of a well name and a boolean indicating whether that well has perforated cells local to the process;
- a vector containing information for each exported cell (global id of the cell, process id to send to, attribute there);
- a vector containing information for each imported cell (global index, process id that sends, attribute here, local index here);
- a WellConnections object containing information about the well connections (if the wells argument was not null and this is the root rank, this will contain connections in the form of global indices).