About running OpenFOAM in parallel

The source code

/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | Copyright (C) 2011-2015 OpenFOAM Foundation
     \\/     M anipulation  |
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM.  If not, see <http://www.gnu.org/licenses/>.

\*---------------------------------------------------------------------------*/

#include "fvCFD.H"

int main(int argc, char *argv[])
{
    #include "setRootCase.H"
    #include "createTime.H"
    #include "createMesh.H"

    // For a case being run in parallel, the domain is decomposed into several
    // processor meshes. Each of them is run in a separate process and holds
    // instances of objects like mesh, U or p just as in a single-threaded (serial)
    // computation. These will have different sizes, of course, as they hold
    // fewer elements than the whole, undecomposed, mesh.
    // Pout is a stream to which each processor can write, unlike Info which only
    // gets used by the head process (processor0)
    Pout << "Hello from processor " << Pstream::myProcNo() << "! I am working on "
         << mesh.C().size() << " cells" << endl;

    // To exchange information between processes, dedicated parallel
    // communication routines need to be called; OpenFOAM provides these as
    // convenient wrappers around the underlying MPI library (e.g. reduce below).

    // This goes over each cell in the subdomain and sums up their volumes.
    scalar meshVolume(0.);
    forAll(mesh.V(),cellI)
        meshVolume += mesh.V()[cellI];

    // Add the values from all processes together
    Pout << "Mesh volume on this processor: " << meshVolume << endl;
    reduce(meshVolume, sumOp<scalar>());
    Info << "Total mesh volume on all processors: " << meshVolume
        // Note how the reduction operation may be done in place without defining
        // a temporary variable, where appropriate.
         << " over " << returnReduce(mesh.C().size(), sumOp<label>()) << " cells" << endl;
    // During the reduction stage, different operations may be carried out;
    // summation, described by the sumOp template, is one of them.
    // Other very useful operations are minOp and maxOp.
    // Note how the type of the variable must be supplied to instantiate the
    // template; here this is done by adding <scalar> before the parentheses.
    // Custom reduction operations are easy to implement but need fluency in
    // object-oriented programming in OpenFOAM, so we'll skip this for now.
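    // As a quick illustration (an addition to this write-up, not part of the
    // original tutorial), the same returnReduce pattern with minOp and maxOp
    // gives the smallest and largest per-processor cell counts:
    const label minLocalCells = returnReduce(mesh.C().size(), minOp<label>());
    const label maxLocalCells = returnReduce(mesh.C().size(), maxOp<label>());
    Info << "Cells per processor: min " << minLocalCells
         << ", max " << maxLocalCells << endl;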

    // Spreading a value across all processors is done using a scatter operation.
    Pstream::scatter(meshVolume);
    Pout << "Mesh volume on this processor is now " << meshVolume << endl;

    // It is often useful to check the distribution of something across all
    // processors. This may be done using a list, with each element of it
    // being written to by only one processor.
    List<label> nInternalFaces (Pstream::nProcs()), nBoundaries (Pstream::nProcs());
    nInternalFaces[Pstream::myProcNo()] = mesh.Cf().size();
    nBoundaries[Pstream::myProcNo()] = mesh.boundary().size();

    // The list may then be gathered on the head node as follows.
    Pstream::gatherList(nInternalFaces);
    Pstream::gatherList(nBoundaries);
    // Scattering a list is also possible
    Pstream::scatterList(nInternalFaces);
    Pstream::scatterList(nBoundaries);

    // It can also be useful to do things on the head node only
    // (in this case this is meaningless since we are using Info, which already
    // checks this and executes on the head node).
    // Note how the gathered lists hold information for all processors now.
    if (Pstream::master())
    {
        forAll(nInternalFaces,i)
            Info << "Processor " << i << " has " << nInternalFaces[i]
                 << " internal faces and " << nBoundaries[i] << " boundary patches" << endl;
    }
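
    // One case where the guard genuinely matters (an illustrative addition,
    // not part of the original tutorial) is Pout: unlike Info, it writes from
    // every process, so a master() check makes only one process print.
    if (Pstream::master())
        Pout << "Printed by the master process only." << endl;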

    // As the mesh is decomposed, interfaces between processors are turned
    // into patches, meaning each subdomain sees a processor boundary as a
    // boundary condition.
    forAll(mesh.boundary(),patchI)
        Pout << "Patch " << patchI << " named " << mesh.boundary()[patchI].name() << endl;

    // When looking for processor patches, it is useful to check their type,
    // similarly to how one can check whether a patch is of the empty type.
    forAll(mesh.boundary(),patchI)
    {
        const polyPatch& pp = mesh.boundaryMesh()[patchI];
        if (isA<processorPolyPatch>(pp))
            Pout << "Patch " << patchI << " named " << mesh.boundary()[patchI].name()
                 << " is definitely a processor boundary!" << endl;
    }
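
    // Going one step further (an illustrative addition, not part of the
    // original tutorial): once a patch is known to be a processor boundary,
    // refCast exposes processorPolyPatch-specific data, such as the rank of
    // the neighbouring process.
    forAll(mesh.boundary(), patchI)
    {
        const polyPatch& pp = mesh.boundaryMesh()[patchI];
        if (isA<processorPolyPatch>(pp))
        {
            const processorPolyPatch& procPatch =
                refCast<const processorPolyPatch>(pp);
            Pout << "Patch " << pp.name() << " talks to processor "
                 << procPatch.neighbProcNo() << endl;
        }
    }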

    // ---
    // This is an example implementation of the code from tutorial 2, which
    // has been adjusted to run in parallel. Each difference is highlighted
    // as a NOTE.

    // It is conventional in OpenFOAM to move large parts of code to separate
    // .H files to make the code of the solver itself more readable. This is not
    // a standard C++ practice, as header files are normally associated with
    // declarations rather than definitions.
    // A very common include, apart from the setRootCase, createTime, and createMesh,
    // which are generic, is createFields, which is often unique for each solver.
    // Here we've moved all of the parts of the code dealing with setting up the fields
    // and transport constants into this include file.
    #include "createFields.H"

    // pre-calculate geometric information using field expressions rather than
    // cell-by-cell assignment.
    const dimensionedVector originVector("x0", dimLength, vector(0.05,0.05,0.005));
    volScalarField r (mag(mesh.C()-originVector));
    // NOTE: we need to get a global value; convert from dimensionedScalar to scalar
    const scalar rFarCell = returnReduce(max(r).value(), maxOp<scalar>());
    scalar f (1.);

    Info<< "\nStarting time loop\n" << endl;

    while (runTime.loop())
    {
        Info<< "Time = " << runTime.timeName() << nl << endl;

        // assign values to the field;
        // The sin function expects a dimensionless argument, hence the need to
        // convert the current time using .value().
        // r has dimensions of length, hence the small value being added to it
        // needs to match that.
        // Finally, the result has to match the dimensions of pressure, which
        // here are m^2 / s^2.
        p = Foam::sin(2.*constant::mathematical::pi*f*runTime.time().value())
            / (r/rFarCell + dimensionedScalar("small", dimLength, 1e-12))
            * dimensionedScalar("tmp", dimensionSet(0, 3, -2, 0, 0), 1.);

        // NOTE: this is needed to update the values on the processor boundaries.
        // If this is not done, the gradient operator will get confused around the
        // processor patches.
        p.correctBoundaryConditions();

        // calculate velocity from gradient of pressure
        U = fvc::grad(p)*dimensionedScalar("tmp", dimTime, 1.);
        runTime.write();
    }

    Info<< "End\n" << endl;

    return 0;
}

// ************************************************************************* //

The results
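
The output below comes from the standard OpenFOAM parallel workflow: the case is first split into four subdomains with decomposePar (controlled by system/decomposeParDict), then the solver is launched as mpirun -np 4 ofTutorial5 -parallel; reconstructPar would merge the results afterwards. The exact settings used are not shown in the original post, so treat these commands as the usual recipe rather than a verbatim record.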

/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
Build  : 6
Exec   : ofTutorial5 -parallel
Date   : Apr 06 2019
Time   : 23:50:53
Host   : "zhoudq-MacBookAir"
PID    : 6003
I/O    : uncollated
Case   : /home/zhoudq/OpenFOAM/BasicOpenFOAMProgrammingTutorials/OFtutorial05_basicParallelComputing/testCase
nProcs : 4
Slaves : 
3
(
"zhoudq-MacBookAir.6004"
"zhoudq-MacBookAir.6005"
"zhoudq-MacBookAir.6006"
)

Pstream initialized with:
    floatTransfer      : 0
    nProcsSimpleSum    : 0
    commsType          : nonBlocking
    polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster (fileModificationSkew 10)
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

[0] Hello from processor 0! I am working on 100 cells
[0] Mesh volume on this processor: 2.5e-05
[1] Hello from processor 1! I am working on 100 cells
[2] Hello from processor 2! I am working on 100 cells
[2] Mesh volume on this processor: 2.5e-05
[1] Mesh volume on this processor: 2.5e-05
[3] Hello from processor 3! I am working on 100 cells
[3] Mesh volume on this processor: 2.5e-05
Total mesh volume on all processors: 0.0001 over 400 cells
[0] Mesh volume on this processor is now 0.0001
[3] Mesh volume on this processor is now 0.0001
[2] Mesh volume on this processor is now 0.0001
[1] Mesh volume on this processor is now 0.0001
Processor 0 has 175 internal faces and 4 boundary patches
Processor 1 has 175 internal faces and 5 boundary patches
Processor 2 has 175 internal faces and 5 boundary patches
Processor 3 has 175 internal faces and 4 boundary patches
[0] Patch 0 named movingWall
[0] Patch 1 named fixedWalls
[0] Patch 2 named frontAndBack
[0] Patch 3 named procBoundary0to1
[0] Patch 3 named procBoundary0to1 is definitely a processor boundary!
Reading transportProperties

[1] Patch 0 named movingWall
[1] Patch 1 named fixedWalls
[3] Patch 0 named movingWall
[3] Patch 1 named fixedWalls
[2] Patch 0 named movingWall
[2] Patch 1 named fixedWalls
[2] Patch 2 named frontAndBack
[2] Patch 3 named procBoundary2to1
[2] Patch 4 named procBoundary2to3
[2] Patch 3 named procBoundary2to1 is definitely a processor boundary!
[2] Patch 4 named procBoundary2to3 is definitely a processor boundary!
[1] Patch 2 named frontAndBack
[3] Patch 2 named frontAndBack
[3] Patch 3 named procBoundary3to2
[3] Patch 3 named procBoundary3to2 is definitely a processor boundary!
[1] Patch 3 named procBoundary1to0
[1] Patch 4 named procBoundary1to2
[1] Patch 3 named procBoundary1to0 is definitely a processor boundary!
[1] Patch 4 named procBoundary1to2 is definitely a processor boundary!
Reading field p

Reading field U


Starting time loop

Time = 0.1

Time = 0.2

Time = 0.3

Time = 0.4

Time = 0.5

Time = 0.6

Time = 0.7

Time = 0.8

Time = 0.9

Time = 1

End

Finalising parallel run