[Opm] OPM Flow RedHat 7 Binary Package Build with MPI Support
sindimo
sindimo at gmail.com
Fri May 4 09:15:36 UTC 2018
Hi Arne,
Thank you for your clarification, I really appreciate it.
I have updated to the latest OPM release, Flow 2018.04, and tested again
on RedHat 7. I still don't see any performance improvement with the new
RedHat 7 binaries, as if MPI were still not working. On Ubuntu it works fine.
For example, I ran the Norne model once on 1 processor and once on 2
processors on both the RedHat 7 machine and the Ubuntu machine (identical
Amazon AWS m4.2xlarge instances), and these are the timings I get:
Norne on RedHat:
1 processor: 769.437 seconds
2 processors: 839.285 seconds
Norne on Ubuntu:
1 processor: 889.158 seconds
2 processors: 585 seconds
If I do "top" command on RedHat while job is launched on 2 processors, I
can see 2 processors running. However it doesn't seem that they are running
in parallel, it seems as if each process is running a separate serial copy
of flow, hence the run on 2 processors is even slower than the serial run.
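For reference, this is roughly how the 2-processor runs are launched on
both machines (the deck path below is just an example; I use the Norne deck
from opm-data):

# 2-processor run (deck path is an example)
mpirun -np 2 flow NORNE_ATW2013.DATA
# serial baseline for comparison
flow NORNE_ATW2013.DATA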
Also, a quick check using "ldd" on the flow binaries on both Ubuntu and
RedHat shows that the Ubuntu executable is indeed linked against the MPI
libraries, while the RedHat binary still doesn't show any linked MPI
libraries:
On Ubuntu:
ubuntu at ip-172-31-15-200:~$ ldd /usr/bin/flow | grep -i mpi
libmpi_cxx.so.1 => /usr/lib/libmpi_cxx.so.1 (0x00007f350423c000)
libmpi.so.12 => /usr/lib/libmpi.so.12 (0x00007f3503f66000)
On RedHat:
[ec2-user at ip-172-31-15-201 ~]$ ldd /usr/bin/flow | grep mpi
[ec2-user at ip-172-31-15-201 ~]$
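If it helps, this is the kind of check I can also run on the RedHat box to
confirm the MPI runtime itself is set up; as far as I understand, RHEL 7
installs MPI via environment modules, so mpirun is only on the PATH after
loading the module (the module name below is my assumption for the mpich
package):

rpm -qa | grep -i -E 'mpich|openmpi'   # is any MPI runtime installed?
module avail                           # list available MPI modules
module load mpi/mpich-x86_64           # assumed module name on RHEL 7
which mpirun                           # should now resolve, e.g. under /usr/lib64/mpich/bin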
[ec2-user at ip-172-31-15-201 ~]$ rpm -qa | grep opm
libopm-common1-2018.04-0.x86_64
opm-upscaling-devel-2018.04-0.x86_64
opm-simulators-bin-2018.04-0.x86_64
libopm-grid1-2018.04-0.x86_64
libopm-upscaling1-2018.04-0.x86_64
libopm-simulators1-2018.04-0.x86_64
opm-upscaling-2018.04-0.x86_64
I would appreciate any help with getting the parallel version of Flow
working on RedHat.
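In case building from source turns out to be the way to go on RedHat 7,
this is the rough recipe I would try, based on the "-DUSE_MPI=1" hint from
the website instructions (package names, repository layout, and
prerequisites below are my assumptions, not a verified recipe; opm-common,
opm-grid, and the other OPM modules would need to be built the same way
first):

sudo yum -y install epel-release
sudo yum -y install mpich-devel cmake3 boost-devel blas-devel lapack-devel suitesparse-devel
module load mpi/mpich-x86_64    # assumed module name; puts mpicc/mpirun on PATH
git clone https://github.com/OPM/opm-simulators.git
cd opm-simulators && mkdir build && cd build
cmake3 .. -DUSE_MPI=1 -DCMAKE_BUILD_TYPE=Release
make -j4 flow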
Thank you for all of your help.
Sincerely,
Mohamad
On Thu, Apr 26, 2018 at 2:26 AM, Arne Morten Kvarving <
Arne.Morten.Kvarving at sintef.no> wrote:
> hi,
>
>
> currently the rpm's are not mpi enabled. this will change in the upcoming
> 2018.04 release.
>
>
> it's a bit involved building on rhel, as you need some packages not in
> base or epel. in particular, you need trilinos (or rather, just zoltan,
> which is part of trilinos) to get efficient mpi support, as well as dune
> and such. if it's not extremely pressing i would suggest waiting for the
> release.
>
>
> arnem
> ------------------------------
> *From:* Opm <opm-bounces at opm-project.org> on behalf of M. S. <
> sindimo at gmail.com>
> *Sent:* 25 April 2018 22:46:33
> *To:* opm at opm-project.org
> *Subject:* [Opm] OPM Flow RedHat 7 Binary Package Build with MPI Support
>
> Dear All,
>
> I am interested in running OPM Flow with MPI support on RedHat 7.
>
> I've installed the binary packages on 2 different machines, one with
> RedHat 7 and the other with Ubuntu 16.04 using these instructions:
>
> https://opm-project.org/?page_id=245
>
> The Ubuntu packages ran fine with MPI and I can see a performance
> improvement when running some of the SPE models; however, the RedHat 7
> version doesn't seem to have been built with MPI support.
>
> RedHat 7 currently has a native MPI package (based on mpich-3) that can
> be easily installed via yum:
>
> sudo yum -y install mpich-devel
>
> Would someone please be able to build the Flow binaries for RedHat 7 with
> MPI support, or, if such binaries already exist, point me to where I can
> get them?
>
> I attempted building from source code, but the installation of the Flow
> third-party library prerequisites seems overwhelming.
>
> If anyone already has their RedHat 7 build environment set up and would be
> able to help, I would truly appreciate it. The instructions on the website
> say that to enable MPI support you just need to pass the option
> "-DUSE_MPI=1" to cmake during the build.
>
> Thank you for all of your help, I really appreciate it.
>
> Sincerely,
>
> Mohamad Sindi
> MIT
>