[Opm] Opm Digest, Vol 33, Issue 5

David Baxendale (Private) david.baxendale at gmail.com
Thu Sep 27 00:56:02 UTC 2018


Markus,

I tried running this deck in sequential mode in an Ubuntu VM and it just 
fails:

david at EIPC02-VirtualBox:/media/sf_Linux/OPM-Flow/spe10model2$ flow SPE10_MODEL2.DATA
**********************************************************************
*                                                                    *
*                        This is flow 2018.04                        *
*                                                                    *
* Flow is a simulator for fully implicit three-phase black-oil flow, *
*             including solvent and polymer capabilities.            *
*          For more information, see https://opm-project.org          *
*                                                                    *
**********************************************************************

Killed
david at EIPC02-VirtualBox:/media/sf_Linux/OPM-Flow/spe10model2$

I ran it under strace and got:

david at EIPC02-VirtualBox:/media/sf_Linux/OPM-Flow/spe10model2$ tail SPE_MODEL2A.LOG
mprotect(0x7f48f54a4000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0x7f48f54a5000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0x7f48f54a6000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0x7f48f54a7000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0x7f48f54a8000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0x7f48f54a9000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0x7f48f54aa000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0x7f48f54ab000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0x7f48f54ac000, 4096, PROT_READ|PROT_WRITE <unfinished ...>
+++ exited with 65 +++
david at EIPC02-VirtualBox:/media/sf_Linux/OPM-Flow/spe10model2$
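
For reference, the trace above was captured along these lines (a sketch; the exact strace flags may differ):

   # follow child processes and write the full trace to a log file
   strace -f -o SPE_MODEL2A.LOG flow SPE10_MODEL2.DATA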

So something is amiss here.
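
If the kernel's OOM killer ended the run (a guess, given the size of SPE10 model 2 and the bare "Killed" from the shell), the kernel log should say so:

   # look for an OOM kill record in the kernel log
   dmesg | grep -iE 'out of memory|killed process'
   # or, on systemd-based systems
   journalctl -k | grep -i 'killed process'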

Regards, OPMUSER

------------------------------------------------------------------------
On 26-Sep-18 20:00, opm-request at opm-project.org wrote:
>
> Today's Topics:
>
>     1. MPI Parallel OPM Flow - SPE 10 Model 2 Hangs (sindimo)
>     2. Re: MPI Parallel OPM Flow - SPE 10 Model 2 Hangs (Markus Blatt)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 25 Sep 2018 21:30:26 -0400
> From: sindimo <sindimo at gmail.com>
> To: opm at opm-project.org
> Subject: [Opm] MPI Parallel OPM Flow - SPE 10 Model 2 Hangs
>
> Hi,
>
> I am running OPM Flow with MPICH MPI on Red Hat 7 (installed via yum through
> the OPM repo, version 2018.04).
>
> I am able to run some of the test models (norne, spe5, spe9) in parallel
> successfully; however, spe10model2 always hangs when I try to launch it. It
> seems to hang during the cell partitioning, as shown below: only a subset of
> the processes report their partition (in the example below I have 4 processes
> and it hangs after 2 of them report; I have also tried with 8 processes and
> it shows similar behavior). Any help with this is much appreciated, as I need
> to run SPE10 for some work I am doing. Many thanks!
>
> Sincerely,
>
> Mohamad
>
>
>   mpirun -np 4 /usr/lib64/mpich/bin/flow SPE10_MODEL2.DATA output_dir=out_parallel
>
> **********************************************************************
> *                                                                    *
> *                        This is flow 2018.04                        *
> *                                                                    *
> * Flow is a simulator for fully implicit three-phase black-oil flow, *
> *             including solvent and polymer capabilities.            *
> *          For more information, see https://opm-project.org          *
> *                                                                    *
> **********************************************************************
>
> After loadbalancing process 0 has 322630 cells.
> After loadbalancing process 3 has 340338 cells.
>
> ------------------------------
>
> Message: 2
> Date: Wed, 26 Sep 2018 10:10:26 +0200
> From: Markus Blatt <markus at dr-blatt.de>
> To: opm at opm-project.org
> Subject: Re: [Opm] MPI Parallel OPM Flow - SPE 10 Model 2 Hangs
>
> Hi
>
> On Tue, Sep 25, 2018 at 09:30:26PM -0400, sindimo wrote:
>> I am running OPM Flow with MPICH MPI on Red Hat 7 (installed via yum through
>> the OPM repo, version 2018.04).
>>
>> I am able to run some of the test models (norne, spe5, spe9) in parallel
>> successfully; however, spe10model2 always hangs when I try to launch it. It
>> seems to hang during the cell partitioning, as shown below: only a subset of
>> the processes report their partition (in the example below I have 4 processes
>> and it hangs after 2 of them report; I have also tried with 8 processes and
>> it shows similar behavior). Any help with this is much appreciated, as I need
>> to run SPE10 for some work I am doing. Many thanks!
> May I ask what work that is?
>
> So you are using a release (even one packaged for a supported distribution). That is a bit weird.
> Unfortunately, I do not have access to such a system and cannot be of much help here.
> Maybe somebody else can do a quick test?
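>
> One generic way to see where the stuck ranks are waiting (not OPM-specific,
> and assuming gdb is installed on the machine) is to attach a debugger to one
> of the hung processes and dump a backtrace:
>
>    # list the running flow ranks, then attach to one (replace <PID>)
>    pgrep -a flow
>    gdb -p <PID> -batch -ex 'thread apply all bt'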
>
> But I did a quick test with the current master on my system, and it works with 4 processes.
> So if nobody else can help you, you might want to check out master and compile
> OPM yourself.
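>
> A rough sketch of that (the module list here is abbreviated and the cmake
> invocation is the generic pattern; see the build instructions on
> https://opm-project.org for the full dependency list and any options your
> system needs):
>
>    # clone and build the OPM modules in dependency order
>    for repo in opm-common opm-grid opm-simulators; do
>        git clone https://github.com/OPM/$repo.git
>        mkdir -p $repo/build
>        (cd $repo/build && cmake .. && make -j4)
>    done
>    # the flow binary should end up under opm-simulators/build/bin/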
>
> Cheers,
>
> Markus
>




