A coupling library for partitioned multi-physics simulations, including, but not restricted to, fluid-structure interaction and conjugate heat transfer simulations.

preCICE

preCICE stands for Precise Code Interaction Coupling Environment. Its core component is a library that simulation programs can use to couple with one another in a partitioned way, enabling multi-physics simulations such as fluid-structure interaction.

If you are new to preCICE, please have a look at our documentation and at precice.org. You may also prefer to get and install a binary package for the latest release (master branch).

preCICE overview

preCICE is an academic project, developed at the Technical University of Munich and at the University of Stuttgart. If you use preCICE, please cite us:

H.-J. Bungartz, F. Lindner, B. Gatzhammer, M. Mehl, K. Scheufele, A. Shukaev, and B. Uekermann: preCICE - A Fully Parallel Library for Multi-Physics Surface Coupling. Computers and Fluids, 141, 250–258, 2016.
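For convenience, the reference above in BibTeX form (the citation key is arbitrary; author initials as given above):

```bibtex
@article{BungartzpreCICE2016,
  author  = {Bungartz, H.-J. and Lindner, F. and Gatzhammer, B. and Mehl, M.
             and Scheufele, K. and Shukaev, A. and Uekermann, B.},
  title   = {{preCICE} -- A Fully Parallel Library for Multi-Physics Surface Coupling},
  journal = {Computers and Fluids},
  volume  = {141},
  pages   = {250--258},
  year    = {2016}
}
```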

Comments
  • Tests fail for PETSc RBF in Ubuntu 17.10/18.04


    Setup:

    • preCICE 1.0.3.
    • Ubuntu 17.10 and 18.04.
    • Both come with PETSc 3.7.6 in their repositories.
    • They also come with OpenMPI 2.1.1.
    • 17.10 comes with Boost 1.62, 18.04 with Boost 1.65.1.
    • Everything installed using
      sudo apt install build-essential scons libeigen3-dev libxml2-dev petsc-dev libboost-dev libboost-log-dev libboost-thread-dev libboost-system-dev libboost-filesystem-dev libboost-program-options-dev libboost-test-dev python-dev python-numpy
      
    • Both running in VirtualBox, with 1, 2, or 4 cores assigned (after creating the VM).

    Problem: While everything compiles perfectly fine, when running the tests (./tools/compileAndTest.py -b), I get the following error:

    [mapping::PetRadialBasisFctMapping]:722 in map: ERROR: RBF linear system has not converged.
    

    More detailed:

    KSP Object:Coefficient Solver 4 MPI processes
      type: gmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=10000
      tolerances:  relative=1e-09, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using PRECONDITIONED norm type for convergence test
    PC Object: 4 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 4
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object:  (sub_)   1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object:  (sub_)   1 MPI processes
        type: icc
          0 levels of fill
          tolerance for zero pivot 2.22045e-14
          using Manteuffel shift [POSITIVE_DEFINITE]
          matrix ordering: natural
          factor fill ratio given 1., needed 1.
            Factored matrix follows:
              Mat Object:           1 MPI processes
                type: seqsbaij
                rows=5, cols=5
                package used to perform factorization: petsc
                total: nonzeros=11, allocated nonzeros=11
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
        linear system matrix = precond matrix:
        Mat Object:     1 MPI processes
          type: seqsbaij
          rows=5, cols=5
          total: nonzeros=11, allocated nonzeros=15
          total number of mallocs used during MatSetValues calls =0
              block size is 1
      linear system matrix = precond matrix:
      Mat Object:  C   4 MPI processes
        type: mpisbaij
        rows=11, cols=11
    (3) 11:52:29 [mapping::PetRadialBasisFctMapping]:722 in map: ERROR: RBF linear system has not converged.
    (2) 11:52:29 [mapping::PetRadialBasisFctMapping]:722 in map: ERROR: RBF linear system has not converged.
    (1) 11:52:29 [mapping::PetRadialBasisFctMapping]:722 in map: ERROR: RBF linear system has not converged.
        total: nonzeros=35, allocated nonzeros=133
        total number of mallocs used during MatSetValues calls =3
            block size is 1
    (0) 11:52:29 [mapping::PetRadialBasisFctMapping]:722 in map: ERROR: RBF linear system has not converged.
    

    The full log file is available here (18.04).

    Other observations:

    • No problem when I use the same PETSc version compiled from source on my host machine (Ubuntu 16.04). No problem when I use an older or newer one (3.6, 3.8) on 16.04.
    • No files are written in tests/ before the error occurs.

    Todo:

    • Check with the same PETSc version compiled from source in the VM.

    Any ideas/suggestions?
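As background, here is a minimal pure-Python sketch (illustrative only, not preCICE's implementation) of the dense linear system an RBF mapping assembles: A w = b with A[i][j] = phi(|x_i - x_j|). With a flat basis function this matrix becomes nearly singular, which is the kind of system an iterative solver like GMRES can fail to converge on. The shape parameter and sample data below are made-up values.

```python
import math

SHAPE = 5.0  # Gaussian shape parameter (hypothetical value)

def phi(r):
    # Gaussian radial basis function phi(r) = exp(-(SHAPE * r)^2)
    return math.exp(-(SHAPE * r) ** 2)

def assemble(points):
    # Dense interpolation matrix A[i][j] = phi(|x_i - x_j|)
    return [[phi(abs(xi - xj)) for xj in points] for xi in points]

def solve(A, b):
    # Gaussian elimination with partial pivoting (a direct solve; a library
    # would use an iterative Krylov solver for large systems)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

points = [0.0, 0.25, 0.5, 0.75, 1.0]
values = [p * p for p in points]        # sample data to interpolate
w = solve(assemble(points), values)
# The interpolant must reproduce the data at the support points
recon = [sum(wj * phi(abs(x - xj)) for wj, xj in zip(w, points)) for x in points]
```

Shrinking SHAPE towards zero drives the off-diagonal entries towards 1 and the system towards singularity, which is one way a PETSc-backed solve can stop converging.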

    help wanted petsc 
    opened by MakisH 36
  • Release v2.0.0


    How to work with this template

    • [x] assign a release manager, who takes care of the process @fsimonis
    • [x] assign each point below to a responsible person, before you continue. Use @member.

    Only the release manager should update this post (even tickboxes, due to race conditions in editing). Everybody else should comment on the PR with the progress.

    Step by step guide

    • [x] Look over CHANGELOG.md (@fsimonis )
      • Add things, if necessary
      • Extract summary
      • Fix wording
      • Sort the entries lexicographically
    • [x] Look over the Roadmap and update entries.
    • [x] Merge master to develop (No commits after the release on master)
    • [x] Check code base w.r.t code formatting (run precice/tools/formatting/check-format) and reformat if required (run precice/tools/formatting/format-all)
    • [x] Create branch release-N from develop. If needed, git rebase develop.
    • [x] Open PR from release-N to master (use this template)
    • [x] Do regression tests using the release branch (specific revision) list below :arrow_down: (all)
    • [x] Fix potential problems in develop (all)
    • [x] Run tools/releasing/bumpversion.sh MAJOR.MINOR.PATCH to bump the version @fsimonis
    • [x] Verify the version changes in: @fsimonis
    • [x] Draft message to mailing list @uekerman
    • [x] Update documentation (all)
      • [x] Update markdown configuration reference in wiki
    • [x] Approve the PR with at least two reviews (all)
    • [x] Merge PR to master @fsimonis
    • [x] Tag release on master and verify by running git describe --tags (on GitHub, make sure to select the release branch as target) @fsimonis
    • [x] Merge back to develop and verify by running git describe --tags @fsimonis
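The final tag-and-verify steps can be sketched in a throwaway repository (hypothetical tag name and commit; not the preCICE repository):

```shell
set -e
tmp=$(mktemp -d)              # scratch repo, discarded afterwards
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "release commit"
git tag v2.0.0
# Verify the tag on the release branch
git describe --tags
# Merge back into develop and verify the tag is reachable there too
git checkout -q -b develop
git describe --tags
```

Both `git describe --tags` calls should report the freshly created tag.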

    Regression Tests

    Run all these tests manually on your system. If you succeed, please write a comment with the revisions of the components that you used below. Example: https://github.com/precice/precice/pull/507#issuecomment-530432289

    • [x] SU2 / CalculiX flap_perp @BenjaminRueth
    • [x] OpenFOAM / OpenFOAM flow_over_plate @MakisH
    • [x] OpenFOAM / OpenFOAM - NP mapping in OpenFOAM flow_over_plate @MakisH
    • [x] OpenFOAM / CalculiX FSI flap perp @MakisH
    • [x] OpenFOAM / CalculiX FSI - NP mapping in CalculiX 3D_Tube @MakisH
    • [x] OpenFOAM / CalculiX / OpenFOAM CHT heat_exchanger @KyleDavisSA
    • [x] OpenFOAM / deal.II flap_perp_2D @MakisH
    • [x] OpenFOAM / FEniCS flap_perp @IshaanDesai
    • [x] OpenFOAM / FEniCS cylinderFlap, only run first few minutes @IshaanDesai
    • [x] OpenFOAM / FEniCS flow-over-plate @IshaanDesai
    • [x] OpenFOAM / Nutils flow-over-plate @uekerman
    • [x] FEniCS / FEniCS partitioned-heat @IshaanDesai
    • [x] MATLAB / MATLAB ODEs @BenjaminRueth
    • [x] ExaFSA: Ateles / FASTEST :warning: Adapters are not ready for 2.0 :warning: @atotoun
    • [x] Alya @uekerman
    • [x] 1D-ElasticTube C++ @fsimonis
    • [x] 1D-ElasticTube Python @fsimonis https://github.com/precice/elastictube1d/issues/29
    • [x] SuperMUC @atotoun
    • [x] Solverdummy C++ @fsimonis
    • [x] Solverdummy C @fsimonis
    • [x] Solverdummy Fortran @MakisH
    • [x] Solverdummy Fortran 2003
    • [x] Solverdummy Python @fsimonis https://github.com/precice/python-bindings/pull/32
    • [x] Solverdummy MATLAB @BenjaminRueth

    Post-release

    • [x] Generate packages
      • [x] Latest Ubuntu LTS
      • [x] Latest Ubuntu
      • [ ] Arch Linux AUR Package
    • [x] Update Spack recipe
    • [ ] Send email and do marketing
    • [ ] Tweet
    • [ ] Update the PR template @BenjaminRueth https://github.com/precice/precice/pull/659
    breaking change 
    opened by fsimonis 31
  • Release 2.1.0


    How to work with this template

    • [x] assign a release manager, who takes care of the process
    • [x] assign each point below to a responsible person, before you continue. Use @member.

    Only the release manager should update this post (even tickboxes, due to race conditions in editing). Everybody else should comment on the PR with the progress.

    Step by step guide

    • [x] Look over CHANGELOG.md (all)
      • Add things, if necessary
      • Extract summary
      • Fix wording
      • Sort the entries lexicographically
    • [x] Look over the Roadmap and update entries.
    • [x] Merge master to develop (No commits after the release on master)
    • [x] Check code base w.r.t code formatting (run precice/tools/formatting/check-format) and reformat if required (run precice/tools/formatting/format-all)
    • [x] Create branch release-N from develop. If needed, git rebase develop.
    • [x] Open PR from release-N to master (use this template)
    • [x] Run tools/releasing/bumpversion.sh MAJOR.MINOR.PATCH to bump the version
    • [x] Verify the version changes in:
    • [x] Commit the version bump
    • [x] Do regression tests using the release branch (specific revision) list below :arrow_down: (all)
    • [x] Fix potential problems in develop (all)
    • [x] Draft message to mailing list and write blog post on discourse
    • [x] Update documentation (all)
      • [x] Update markdown configuration reference in wiki
    • [x] Approve the PR with at least two reviews (all)
    • [x] Merge PR to master ( use git merge --no-ff release-N )
    • [x] Tag release on master vN and verify by running git describe --tags
    • [x] Merge back to develop and verify by running git describe --tags
    • [x] Push master and push the vN tag
    • [x] Draft a new release on GitHub
    • [x] Generate packages and upload to the draft release
      • [x] Latest Ubuntu LTS
      • [x] Latest Ubuntu
    • [x] Publish the GitHub release

    Regression Tests

    Run all these tests manually on your system. If you succeed, please write a comment with the revisions of the components that you used below. Example: https://github.com/precice/precice/pull/507#issuecomment-530432289 and update the table.

    | State | Success | Failure | Skipped |
    | --- | --- | --- | --- |
    | Write | :o: | :x: | :fast_forward: |
    | Read | :o: | :x: | :fast_forward: |

    | State | Tester | Test |
    | --- | --- | --- |
    | :o: | @MakisH | SU2 / CalculiX flap_perp |
    | :o: | @MakisH | OpenFOAM / OpenFOAM flow_over_plate |
    | :o: | @MakisH | OpenFOAM / OpenFOAM - NP mapping in OpenFOAM flow_over_plate |
    | :o: | @KyleDavisSA | OpenFOAM / CalculiX FSI flap perp |
    | :o: | @KyleDavisSA | OpenFOAM / CalculiX FSI - NP mapping in CalculiX 3D_Tube |
    | :o: | @KyleDavisSA | OpenFOAM / CalculiX / OpenFOAM CHT heat_exchanger |
    | :fast_forward: | @MakisH | OpenFOAM / deal.II FSI cylinderFlap |
    | :fast_forward: | @MakisH | OpenFOAM / deal.II FSI cylinderFlap_2D |
    | :o: | @MakisH | OpenFOAM / deal.II flap_perp_2D |
    | :o: | @MakisH | OpenFOAM / deal.II flap_perp |
    | :o: | @BenjaminRueth | OpenFOAM / FEniCS flap_perp |
    | :o: | @BenjaminRueth | OpenFOAM / FEniCS flow-over-plate |
    | :o: | @BenjaminRueth | OpenFOAM / FEniCS cylinderFlap, only run first few minutes |
    | :o: | @uekerman | OpenFOAM / Nutils flow-over-plate |
    | :o: | @BenjaminRueth | FEniCS / FEniCS partitioned-heat |
    | :o: | @IshaanDesai | MATLAB / MATLAB ODEs |
    | :o: | @uekerman | Alya |
    | :o: | @uekerman | 1D-ElasticTube C++ |
    | :o: | @BenjaminRueth | 1D-ElasticTube Python |
    | :o: | @atotoun | SuperMUC Ateles / Ateles |
    | :o: | @KyleDavisSA | Solverdummy Fortran 2003 |
    | :o: | @BenjaminRueth | Solverdummy Python |
    | :o: | @IshaanDesai | Solverdummy MATLAB |

    Post-release

    • [ ] Update Arch Linux AUR Package
    • [x] Update Spack recipe

    Release new version for bindings (to ensure compatibility with newest preCICE version)

    (only if breaking changes) Open PRs or issues develop -> master for all adapters

    (only if breaking changes) Open PRs or issues develop -> master for all other tools

    Marketing

    • [x] Send email to mailing list
    • [x] Tweet
    • [ ] Digest CFD-Online
    • [ ] NADigest

    Misc

    opened by fsimonis 26
  • Release v2.2.0


    How to work with this template

    • [x] assign a release manager, who takes care of the process
    • [x] assign each point below to a responsible person, before you continue. Use @member.

    Only the release manager should update this post (even tickboxes, due to race conditions in editing). Everybody else should comment on the PR with the progress.

    Step by step guide

    • [x] Look over CHANGELOG.md (all)
      • Add things, if necessary
      • Extract summary
      • Fix wording and tense
      • Sort the entries lexicographically
    • [x] Merge master to develop (No commits after the release on master)
    • [x] Check code base w.r.t code formatting (run precice/tools/formatting/check-format) and reformat if required (run precice/tools/formatting/format-all)
    • [x] Create branch release-N from develop. If needed, git rebase develop.
    • [x] Open PR from release-N to master (use this template)
    • [x] Do regression tests using the release branch (specific revision) list below :arrow_down: (all)
    • [x] Fix potential problems on release branch (all)
    • [x] Look over the Roadmap and update entries.
    • [x] Run tools/releasing/bumpversion.sh MAJOR.MINOR.PATCH to bump the version
    • [x] Verify the version changes in:
    • [x] Commit the version bump
    • [x] Draft message to mailing list @fsimonis
    • [x] Write a draft "blog post" on Discourse @MakisH
    • [x] Approve the PR with at least two reviews (all)
    • [x] Merge PR to master ( use git merge --no-ff release-N )
    • [x] Tag release on master vN and verify by running git describe --tags
    • [x] Merge back to develop and verify by running git describe --tags
    • [x] Push master and push the vN tag
    • [x] Draft a new release on GitHub
    • [x] Update documentation (all) @BenjaminRodenberg
    • [x] Generate packages and upload to the draft release
      • [x] Latest Ubuntu LTS
      • [x] Latest Ubuntu
    • [x] Publish the GitHub release @fsimonis

    Regression Tests

    Run all these tests manually on your system. If you succeed, please write a comment with the revisions of the components that you used below. Example: https://github.com/precice/precice/pull/507#issuecomment-530432289 and update the table.

    Branches to test:

    | State | Success | Failure | Skipped |
    | --- | --- | --- | --- |
    | Write | :o: | :x: | :fast_forward: |
    | Read | :o: | :x: | :fast_forward: |

    | State | Tester | Test |
    | --- | --- | --- |
    | :o: | @IshaanDesai | SU2 / CalculiX flap_perp |
    | :o: | @MakisH | OpenFOAM / OpenFOAM flow_over_plate serial + parallel |
    | :o: | @MakisH | OpenFOAM / OpenFOAM - NP mapping in OpenFOAM flow_over_plate |
    | :o: | @MakisH | OpenFOAM / CalculiX FSI flap perp |
    | :o: | @MakisH | OpenFOAM / CalculiX FSI - NP mapping in CalculiX 3D_Tube |
    | :o: | @MakisH | OpenFOAM / CalculiX / OpenFOAM CHT heat_exchanger |
    | :o: | @MakisH | OpenFOAM / deal.II flap_perp_2D (linear + non-linear, serial + parallel) |
    | :o: | @MakisH | OpenFOAM / deal.II flap_perp |
    | :x: | @MakisH | OpenFOAM / deal.II FSI cylinderFlap_2D |
    | :x: | @MakisH | OpenFOAM / deal.II FSI cylinderFlap |
    | :o: | @IshaanDesai | OpenFOAM / FEniCS flap_perp |
    | :o: | @IshaanDesai | OpenFOAM / FEniCS flow-over-plate |
    | :o: | @IshaanDesai | OpenFOAM / FEniCS cylinderFlap, only run first few minutes |
    | :o: #951 | @uekerman | OpenFOAM / Nutils flow-over-plate |
    | :o: | @BenjaminRodenberg | FEniCS / FEniCS partitioned-heat |
    | :o: | @IshaanDesai | SU2 / FEniCS flap_perp |
    | :o: | @BenjaminRodenberg | MATLAB / MATLAB ODEs |
    | :o: | @fsimonis | 1D-ElasticTube C++ |
    | :o: | @BenjaminRodenberg | 1D-ElasticTube Python |
    | :o: | @MakisH | Solverdummy Fortran module |
    | :o: | @BenjaminRodenberg | Solverdummy Python |
    | :o: | @BenjaminRodenberg | Solverdummy MATLAB |
    | :o: | @uekerman | Alya |
    | :skull_and_crossbones: | | ~~ExaFSA: Ateles / FASTEST~~ |
    | :o: | @atotoun | SuperMUC Build |
    | :o: | @atotoun | Ateles / Ateles (SuperMUC) |

    Post-release

    • [x] Update Spack recipe
    • [x] Update Arch Linux AUR Package

    Release new version for bindings (to ensure compatibility with newest preCICE version)

    Marketing

    • [x] Finalize post on Discourse @MakisH
    • [x] Write on Gitter @uekerman
    • [ ] Send announcement to the mailing list @fsimonis
    • [x] CFD-Online @uekerman
    • [x] NADigest @uekerman
    • [x] Post on Twitter (additionally to the automatic) @MakisH
    • [x] Post on ResearchGate @MakisH
    • [ ] Post on LinkedIn (individuals)
    • [ ] Submit a short article to the Quartl @MakisH

    Misc

    opened by fsimonis 25
  • BOOST and mingw (msys2)


    Hi, I am trying to compile preCICE with mingw (MSYS2), but I have a problem with the Boost library. I installed it with pacman -S mingw-w64-x86_64-boost; the installation works, but the library names have an -mt suffix.

    for example: libboost_log-mt.a libboost_log_setup-mt.a libboost_thread-mt.a

    When I compile preCICE, the compiler reports that the Boost libraries cannot be found. How can I fix this?
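One possible workaround (a sketch under the assumption that the build system searches for untagged names like libboost_log.a): give the -mt-tagged archives untagged aliases. Demonstrated on a scratch directory with empty stand-in files rather than the real /mingw64/lib:

```shell
set -e
libdir=$(mktemp -d)                            # stand-in for /mingw64/lib
cd "$libdir"
touch libboost_log-mt.a libboost_thread-mt.a   # fake the MSYS2 libraries
for lib in libboost_*-mt.a; do
  plain=$(printf '%s' "$lib" | sed 's/-mt//')  # libboost_log-mt.a -> libboost_log.a
  ln -sf "$lib" "$plain"
done
ls libboost_log.a libboost_thread.a            # untagged aliases now resolve
```

Whether symlinking is appropriate depends on your setup; pointing the build system at the tagged names directly, if it supports that, is the cleaner fix.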

    building 
    opened by 3rav 25
  • Filter mesh by mappings crashes in Mesh::computeState()


    I stumbled upon a crash while trying to test Nearest-Projection mapping with OpenFOAM (see https://github.com/precice/openfoam-adapter/pull/46) in parallel. Since we don't use Nearest-Projection so widely, maybe this is an old problem, originating from preCICE. In any case, no error message is given.

    Some details:

    • Only in parallel (at least not when Fluid is parallel and Solid is serial)
    • Both participants do a nearest-projection consistent read mapping.
    • System-dependent
      • Works on Ubuntu 18.04, Boost 1.65.1, OpenMPI 2.1.1
      • Solid crashes on Ubuntu 16.04, Boost 1.67.0, OpenMPI 1.10.2
    • preCICE v1.4.1
    • Relevant files:

    ReceivedPartition::compute():

    https://github.com/precice/precice/blob/143b6e5043c5bad984cc8578e7c95d2dd3590b77/src/partition/ReceivedPartition.cpp#L198-L207

    void Mesh:: computeState():

    https://github.com/precice/precice/blob/143b6e5043c5bad984cc8578e7c95d2dd3590b77/src/mesh/Mesh.cpp#L279

    bug 
    opened by MakisH 24
  • Restructure tools and bindings


    With v2.0.0 we want to clean up a few relics and better organize the (growing) bindings and tools.

    We decide on the following:

    • [x] Move Python bindings into a separate repository: precice/python-bindings. Repository name consistent with adapters' repositories.
    • [x] Move Matlab bindings into a separate repository: precice/matlab-bindings. Repository name consistent with adapters' repositories.
    • [x] Move C, Fortran90, and Fortran2003 bindings to extras/bindings.
    • [x] Move tools/ to extras/tools.
    • [x] Split the tools/ and move the solverdummies to extras/solverdummies.
    breaking change 
    opened by MakisH 23
  • Python Adapter for preCICE


    1. Currently requires a local build of the PySolverInterface library (work in progress to include it in scons)
    2. Does not support getMeshHandle, inquireClosestMesh and inquireVoxelPosition
    3. Instructions for use included in the readme
    opened by saumiJ 23
  • Hanging at initialize when both participants do mapping


    When I use preCICE and both participants do the data mapping (read consistent), the simulation often hangs here:

    Initialize preCICE
     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s
     | precice::impl::SolverInterfaceImpl::initialize()        | Coupling partner/s are connected
     | precice::geometry::CommunicatedGeometry::sendMesh()     | Gather mesh AcousticSurface_euler
     | precice::geometry::CommunicatedGeometry::sendMesh()     | Send global mesh AcousticSurface_euler
     | precice::geometry::CommunicatedGeometry::receiveMesh()  | Receive global mesh AcousticSurface_acoustic
     | precice::geometry::BroadcastFilterDecomposition::broadcast() | Broadcast mesh AcousticSurface_acoustic
     | precice::geometry::BroadcastFilterDecomposition::filter() | Filter mesh AcousticSurface_acoustic
     | precice::geometry::BroadcastFilterDecomposition::feedback() | Feedback mesh AcousticSurface_acoustic
     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up slaves communication to coupling partner/s
     | precice::impl::SolverInterfaceImpl::initialize()        | Slaves are connected
     | precice::impl::SolverInterfaceImpl::initialize()        | it 1 of 1 | dt# 1 of 200000000 | t 0 | dt 1e-05 | max dt 1e-05 | ongoing yes | dt complete no | write-initial-data |
    

    and the acoustic domain:

     Initialize preCICE
     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s
     | precice::impl::SolverInterfaceImpl::initialize()        | Coupling partner/s are connected
     | precice::geometry::CommunicatedGeometry::sendMesh()     | Gather mesh AcousticSurface_acoustic
     | precice::geometry::CommunicatedGeometry::sendMesh()     | Send global mesh AcousticSurface_acoustic
     | precice::geometry::CommunicatedGeometry::receiveMesh()  | Receive global mesh AcousticSurface_euler
     | precice::geometry::BroadcastFilterDecomposition::broadcast() | Broadcast mesh AcousticSurface_euler
     | precice::geometry::BroadcastFilterDecomposition::filter() | Filter mesh AcousticSurface_euler
     | precice::geometry::BroadcastFilterDecomposition::feedback() | Feedback mesh AcousticSurface_euler
     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up slaves communication to coupling partner/s
     | precice::impl::SolverInterfaceImpl::initialize()        | Slaves are connected
     | precice::impl::SolverInterfaceImpl::initialize()        | it 1 of 1 | dt# 1 of 200000000 | t 0 | dt 1e-05 | max dt 1e-05 | ongoing yes | dt complete no | write-initial-data |
    

    It is a rather small test case: 2D with 2×1500 points at the interfaces, matching grids.

    Using debug flags, this is the output where it stops:

    precice::com::SocketCommunication::acceptConnectionAsServer() | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
     | precice::com::SocketCommunication::getRemoteCommunicatorSize() | (72) Entering  (file:src/utils/Tracer.cpp,line:21)
     | precice::com::SocketCommunication::getRemoteCommunicatorSize() | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
     | precice::com::SocketCommunication::receive(int)         | (72) Entering rankSender=0 (file:src/utils/Tracer.cpp,line:21)
     | precice::com::SocketCommunication::receive(int)         | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
     | precice::m2n::PointToPointCommunication::acceptConnection() | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
     | precice::m2n::M2N::acceptSlavesConnection()             | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
     | precice::cplscheme::ParallelCouplingScheme::initialize() | (72) Entering startTime=0, startTimestep=1 (file:src/utils/Tracer.cpp,line:21)
     | precice::cplscheme::ParallelCouplingScheme::initialize() | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
     | precice::utils::Parallel::synchronizeProcesses()        | (72) Entering  (file:src/utils/Tracer.cpp,line:21)
    

    I figured out that jobs where both participants use fewer than 64 processes run fine. The strange part is that with the same executable, same solver input files, same preCICE config, and same job script, it sometimes runs.

    I worked a lot with Mohammed Shaheen (IBM support at LRZ) on this, but from the machine support side he could not find any problem.

    I am pretty sure the problem is due to the data mapping on one participant. When I change the read consistent mapping to a write conservative mapping on the other participant, I never have a problem at that point.

    This is quite urgent, since I want to run simulations :) and the data mapping on one participant is really slow (see issue 43).
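The symptom resembles a rendezvous deadlock: both sides block waiting to receive before either sends. A minimal Python sketch (illustrative only; preCICE's communication layer is far more involved) of two "participants" exchanging one message each:

```python
import queue
import threading

def participant(name, inbox, outbox, recv_first, log):
    # One simulated participant: either receive-then-send or send-then-receive
    if recv_first:
        try:
            msg = inbox.get(timeout=0.2)  # peer also receives first: times out
        except queue.Empty:
            log.append((name, "deadlock"))
            return
        log.append((name, "got", msg))
        outbox.put(name)
    else:
        outbox.put(name)                  # send first, then receive
        log.append((name, "got", inbox.get(timeout=0.2)))

def run(recv_first_a, recv_first_b):
    a_to_b, b_to_a, log = queue.Queue(), queue.Queue(), []
    threads = [
        threading.Thread(target=participant,
                         args=("A", b_to_a, a_to_b, recv_first_a, log)),
        threading.Thread(target=participant,
                         args=("B", a_to_b, b_to_a, recv_first_b, log)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log

both_wait = run(True, True)   # both receive first: both time out (the "hang")
ordered = run(False, True)    # one sends first: everything completes
```

The intermittent nature reported above fits this pattern: whether the hang occurs can depend on message ordering and buffering, not on the code alone.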

    opened by gk780 23
  • Release v2.4.0


    How to work with this template

    • [x] assign a release manager, who takes care of the process
    • [x] assign each point below to a responsible person, before you continue. Use @member.

    Only the release manager should update this post (even tickboxes, due to race conditions in editing). Everybody else should comment on the PR with the progress.

    Pre-PR steps

    • [x] Look over PRs and Issues without an assigned version. (all)
    • [x] Look over entries in docs/changelog (all)
      • Add missing entries, if necessary
      • Fix wording and tense
    • [x] Make sure you have the latest develop and master branches locally.
    • [x] Merge master to develop ( This should result in no commits )
    • [x] Check code base w.r.t code formatting (run precice/tools/formatting/check-format) and reformat if required (run precice/tools/formatting/format-all)
    • [x] Create branch release-vX.Y.Z from develop. If needed, git rebase develop.
    • [x] Run tools/releasing/bumpversion.sh MAJOR.MINOR.PATCH to bump the version
    • [x] Look over CHANGELOG.md (all)
      • Check for merged lines
      • Add things, if necessary
      • Fix wording and tense
      • Sort the entries lexicographically
      • Extract summary
    • [x] Verify the version changes in:
    • [x] Commit the version bump
    • [x] Push the release branch to the precice repository
    • Prepare independent releases

    Step by step guide

    • [x] Open PR from release-vX.Y.Z to master (use this template)
    • [x] Do regression tests using the release branch (specific revision) list below :arrow_down: (all)
    • [x] Fix potential problems in develop (all)
    • [x] Rebase the release branch on develop (version bump should be the latest commit)
    • [x] Draft message to mailing list
    • [x] Write a draft "blog post" on Discourse
    • [x] Approve the PR with at least two reviews (all)
    • [x] Merge PR to master ( use git merge --no-ff release-vX.Y.Z )
    • [x] Tag release on master vX.Y.Z and verify by running git describe --tags
    • [x] Merge back to develop and verify by running git describe --tags
    • [x] Triple-check that you haven't messed anything up. (You can always discard local changes.)
    • [x] Push master and push the vX.Y.Z tag
    • [x] Push develop
    • [x] Wait for the release pipeline
    • [x] Write release text
    • [x] Publish the GitHub release

    Regression Tests

    Use the following branches:

    • precice release-vX.Y.Z
    • pyprecice python-bindings-vX.Y.Z.1
    • matlab-bindings matlab-bindings-vX.Y.Z.1
    • rest master

    Run all these tests manually on your system. If you succeed, please write a comment with the revisions of the components that you used below. Example: https://github.com/precice/precice/pull/507#issuecomment-530432289 and update the table.

    | State | Success | Failure | Skipped |
    | --- | --- | --- | --- |
    | Write | :o: | :x: | :fast_forward: |
    | Read | :o: | :x: | :fast_forward: |

    | State | Tester | Test |
    | --- | --- | --- |
    | :o: :o: | @IshaanDesai @uekerman | perpendicular-flap fluid-nutils - solid-calculix |
    | :o: | @DavidSCN | perpendicular-flap fluid-openfoam - solid-dealii |
    | :o: | @IshaanDesai | perpendicular-flap fluid-su2 - solid-fenics |
    | :o: | @DavidSCN | multiple-perpendicular-flaps fluid-openfoam - solid-(left+right)-dealii |
    | :o: | @MakisH | flow-over-heated-plate fluid-openfoam - solid-openfoam serial + parallel |
    | :o: | @IshaanDesai | flow-over-heated-plate fluid-openfoam - solid-fenics serial + parallel |
    | :o: :o: | @IshaanDesai @uekerman | flow-over-heated-plate fluid-openfoam - solid-nutils |
    | :o: | @MakisH | flow-over-heated-plate-nearest-projection fluid-openfoam - solid-openfoam |
    | :o: | @IshaanDesai | flow-over-heated-plate-steady-state fluid-openfoam - solid-codeaster |
    | :x: :o: | @MakisH @fsimonis | heat-exchanger fluid-(inner+outer)-openfoam - solid-calculix |
    | :o: | @MakisH | partitioned-elastic-beam dirichlet-calculix - neumann-calculix |
    | :o: :o: | @IshaanDesai @uekerman | partitioned-heat-conduction fenics - nutils |
    | :o: :o: | @IshaanDesai @uekerman | partitioned-heat-conduction-complex fenics - fenics |
    | :o: | @MakisH | partitioned-pipe fluid1-openfoam-pimplefoam - fluid2-openfoam-sonicliquidfoam |
    | :o: | @MakisH | elastic-tube-1d fluid-cpp - solid-python |
    | :o: | @MakisH | elastic-tube-3d fluid-openfoam - solid-calculix |
    | :o: | @erikscheurer | MATLAB / MATLAB ODEs |
    | :o: | @MakisH | Solverdummy Fortran module |
    | :o: | @IshaanDesai | Solverdummy Python |
    | :o: | @erikscheurer | Solverdummy MATLAB |
    | :o: | @IshaanDesai | Solverdummy Julia |
    | :o: | @uekerman | Alya |
    | :o: | @KyleDavisSA | SuperMUC |

    Post-release

    • [x] Update documentation
      • [x] Install from source
      • [x] Baseline Ubuntu
      • [x] Master-tag to intra-comm
      • [x] Solverdummies (mesh arg)
    • [x] Flag Arch Linux AUR package and dependants as out-of-date.
    • [x] Update Spack recipe
    • [ ] Update pyprecice Spack
    • [x] Update Website:

    Release new version for bindings (to ensure compatibility with newest preCICE version)

    Marketing

    Misc

    To open a new PR with this template, use this PR template query

    opened by fsimonis 19
  • Rename MasterSlave to something more appropriate


    Please describe the problem you are trying to solve. In preCICE we use the terms master and slave to distinguish between multiple ranks running in parallel. These terms are archaic and have some baggage. Alternatives have been discussed at about every other project meeting and it is finally time to come to a conclusion and move on.

    Moving away from these terms will break a widespread convention, but if one always follows conventions, one can never improve.

    There are 3 main execution paths in preCICE:

    • a Participant runs on a single process (size = 1)
    • a Participant runs on multiple parallel processes (size > 1) and
      • the current rank is the "main rank" (aka master), which sometimes takes the leading role in coordination (rank = 0)
      • the current rank is one that sometimes needs to be coordinated (aka slave) by the aforementioned rank (rank > 0)

    The direct interface to this information is the MasterSlave utility class, which provides this information based on its configured size and rank.

    This terminology is also used for the communication between participants: preCICE first connects the master ranks of the participants and then establishes a connection between the slaves.

    Alternative name for the class MasterSlave

    We use communication in 2 cases:

    • inside a participant (MasterSlave) and
    • between participants (M2N).

    I propose to use another very common terminology from networking:

    • use intra-communication (as in IntraNet) for the communication between ranks of a single participant
    • use inter-communication (as in InterNet) for the communication between participants

    This allows us to rename utils::MasterSlave to utils::IntraComm or utils::IntraCommunication.

    Alternative names for the states master and slave

    The following is a table of alternatives I came across while researching this and that I picked up in discussions. I numbered them to simplify the discussion.

    Nr | master | slave | Opinion
    --- | --- | --- | ---
    1 | master | follower | :-1: :-1: :-1:
    2 | leader | follower | :-1: :-1: :-1:
    3 | primary | secondary | :+1: :+1: :+1:
    4 | main | secondary | :+1: :-1:
    5 | agency | agent | :-1: :-1:
    6 | coordinator | coordinated | :-1: :-1:
    7 | coordinator | node | :-1:
    8 | controller | worker | :+1: :-1: :-1:
    9 | supervisor | worker | :+1: :-1: :-1:
    10 | parent | child | :+1: :-1: :-1:

    My current favourite is (3) as it does not impose additional relations. However, it still isn't a perfect fit, as in the non-parallel case isPrimary() = false.

    opened by fsimonis 19
  • Remove unnecessary (?) read data in test

    Remove unnecessary (?) read data in test

    Main changes of this PR

    Removes unnecessary read data "DataTwo".

    Motivation and additional information

    DataTwo looks unnecessary to me, and it's hard to understand its purpose from looking at the configuration. Is it necessary for the watch integral to work?

    Additional issue: it looks like preCICE does not check for read data that never actually receives any data from any participant. Is there a use case for this setup? If not, I think we should add a check to avoid this faulty config.

    Author's checklist

    • [ ] I used the pre-commit hook to prevent dirty commits and used pre-commit run --all to format old commits.
    • [ ] I added a changelog file with make changelog if there are user-observable changes since the last release.
    • [ ] I added a test to cover the proposed changes in our test suite.
    • [ ] For breaking changes: I documented the changes in the appropriate porting guide.
    • [ ] I stuck to C++17 features.
    • [ ] I stuck to CMake version 3.16.3.
    • [ ] I squashed / am about to squash all commits that should be seen as one.

    Reviewers' checklist

    • [ ] Does the changelog entry make sense? Is it formatted correctly?
    • [ ] Do you understand the code changes?
    bug 
    opened by BenjaminRodenberg 3
  • Cleanup in data read/write functions in SolverInterface

    Cleanup in data read/write functions in SolverInterface

    I noticed minor issues here and there in the API. Creating this thread to discuss them (and add more as they come up).

    1. Dimensions and Vertex Count in writeScalarData(...)

    In writeScalarData(...), we first check that the data dimension of the writeContext is 1. Then we divide by 1 here:

    const auto vertexCount = values.size() / context.getDataDimensions();
    

    Since it's scalar, vertexCount would simply be values.size().

    2. values and value

    In most of the data read and write functions, values represents the input to the function, while value represents the internal data structure that the data is written to / read from. This naming is confusing; instead of value, something like valuesInternal would be a better name (as used in readBlockVectorDataImpl(...)).

    3. valueIndex checks

    We have

    PRECICE_CHECK(valueIndex >= -1,
    

    followed a few lines later by

    PRECICE_CHECK(0 <= valueIndex && valueIndex < vertexCount,
    

    Since valueIndex doesn't change anywhere between these checks, the first check looks redundant.

    opened by kanishkbh 0
  • WIP: write data when subcycling is used

    WIP: write data when subcycling is used

    Main changes of this PR

    Allows writing data to preCICE when subcycling is used. This results in either piecewise-constant or linear interpolation in time, and possibly in higher-order interpolation later.

    Motivation and additional information

    See #1171

    This is a rebased version of https://github.com/precice/precice/pull/1414, based on https://github.com/precice/precice/pull/1520.

    Author's checklist

    • [x] merge #1422
    • [x] merge #1445
    • [x] merge https://github.com/precice/precice/pull/1455
    • [x] merge https://github.com/precice/precice/pull/1456 ? (or directly do this here?)
    • [x] what to do with #1432 (Continue working on https://github.com/BenjaminRodenberg/precice/pull/13 until then)
    • [ ] merge https://github.com/precice/precice/pull/1507
    • [ ] merge #1504
    • [ ] merge #1520
    • [x] merge #1503
    • [ ] Add tests for MultiCoupling: a write mapping should work in the context of multi coupling, even if the meshes sent over time do not agree. To also make a read mapping work in this situation, we need to be able to refer to received times etc. individually (currently a coupling scheme only gives access to the received times globally).
    • [ ] I added a changelog file with make changelog if there are user-observable changes since the last release.
    • [ ] I added a test to cover the proposed changes in our test suite.
    • [ ] I ran make format to ensure everything is formatted correctly.
    • [ ] I stuck to C++14 features.
    • [ ] I stuck to CMake version 3.16.3.
    • [ ] I squashed / am about to squash all commits that should be seen as one.

    Reviewers' checklist

    • [ ] Does the changelog entry make sense? Is it formatted correctly?
    • [ ] Do you understand the code changes?
    enhancement 
    opened by BenjaminRodenberg 1
  • Use data structure similar to `time::Storage` in `mesh::Data::_values`

    Use data structure similar to `time::Storage` in `mesh::Data::_values`

    In #1504 I introduce time::Storage in CouplingData to be able to store multiple samples per time window. This will become even more important when we move to subcycling (see #1414).

    I'm currently restricting the use of time::Storage to cplscheme::CouplingData. This requires manually moving data from cplscheme::CouplingData to mesh::Data::_values and back again before and after mapping (or acceleration). This is not ideal, but in the current situation it is a reasonable solution with limited scope.

    Ideally we would use this data structure everywhere in preCICE where mesh::Data::_values is accessed (acceleration, mapping, communication, ...).

    maintainability 
    opened by BenjaminRodenberg 0
  • Perform extrapolation inside Storage

    Perform extrapolation inside Storage

    Main changes of this PR

    Removes extrapolation from cplscheme package. Storage now directly takes care of extrapolation.

    Motivation and additional information

    Removes a lot of bookkeeping logic, because both extrapolation and storage need to keep track of data at the beginning and at the end of a window.

    Author's checklist

    • [x] I used the pre-commit hook to prevent dirty commits and used pre-commit run --all to format old commits.
    • [ ] I added a changelog file with make changelog if there are user-observable changes since the last release.
    • [x] ~~I added a test to cover the proposed changes in our test suite.~~ Not needed. Refactoring.
    • [ ] For breaking changes: I documented the changes in the appropriate porting guide.
    • [x] I stuck to C++17 features.
    • [x] I stuck to CMake version 3.16.3.
    • [x] I squashed / am about to squash all commits that should be seen as one.
    • [ ] Merge #1504

    Reviewers' checklist

    • [ ] Does the changelog entry make sense? Is it formatted correctly?
    • [ ] Do you understand the code changes?
    maintainability 
    opened by BenjaminRodenberg 0
  • Stacktrace in testprecice for triggered PRECICE_ASSERT is not helpful for debugging anymore

    Stacktrace in testprecice for triggered PRECICE_ASSERT is not helpful for debugging anymore

    Describe your setup

    Operating system (e.g. Linux distribution and version): Ubuntu 22.04
    preCICE version: e3db3c1c7216d8d48564a6ffb2e62b1343bc13a5

    Describe the problem

    If testprecice runs into an assertion inside the core library, the stacktrace is not useful, because it only shows testprecice. I remember that this was different about 2-3 weeks ago, when a triggered assertion resulted in a stacktrace that showed the trace from testprecice down to the function that triggered the assertion.

    Steps to reproduce

    1. Introduce an assertion that gets triggered by test A. (I added PRECICE_ASSERT(false, "Assertion triggered in ParallelCouplingScheme::ParallelCouplingScheme") in the constructor of ParallelCouplingScheme.)
    2. Run testprecice for test A

    For my example:

    precice/build$ mpirun -np 4 ./testprecice -t CplSchemeTests/ParallelImplicitCouplingSchemeTests/Extrapolation/FirstOrderWithAcceleration
    ...
    testprecice: /home/benjamin/Programming/precice/src/cplscheme/ParallelCouplingScheme.cpp:26: precice::cplscheme::ParallelCouplingScheme::ParallelCouplingScheme(double, int, double, int, const string&, const string&, const string&, precice::m2n::PtrM2N, precice::cplscheme::constants::TimesteppingMethod, precice::cplscheme::BaseCouplingScheme::CouplingMode, int, int): Assertion `false' failed.
    ASSERTION FAILED
    Location:   precice::cplscheme::ParallelCouplingScheme::ParallelCouplingScheme(double, int, double, int, const string&, const string&, const string&, precice::m2n::PtrM2N, precice::cplscheme::constants::TimesteppingMethod, precice::cplscheme::BaseCouplingScheme::CouplingMode, int, int)
    File:       /home/benjamin/Programming/precice/src/cplscheme/ParallelCouplingScheme.cpp:26
    Expression: false
    Rank:       0
    Arguments:  
      0: "Assertion triggered in ParallelCouplingScheme::ParallelCouplingScheme" == Assertion triggered in ParallelCouplingScheme::ParallelCouplingScheme
    
    Stacktrace:
     0# 0x000055D21D0365C0 in ./testprecice
     1# 0x000055D21CD5C7B7 in ./testprecice
     2# 0x000055D21C517783 in ./testprecice
     3# 0x000055D21C5168AD in ./testprecice
     4# 0x000055D21C3A3331 in ./testprecice
     5# 0x00007F5E30C523F2 in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
     6# boost::execution_monitor::catch_signals(boost::function<int ()> const&) in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
     7# boost::execution_monitor::execute(boost::function<int ()> const&) in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
     8# boost::execution_monitor::vexecute(boost::function<void ()> const&) in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
     9# boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned long) in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    10# 0x00007F5E30C605A9 in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    11# 0x00007F5E30C608BB in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    12# 0x00007F5E30C608BB in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    13# 0x00007F5E30C608BB in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    14# 0x00007F5E30C608BB in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    15# boost::unit_test::framework::run(unsigned long, bool) in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    16# boost::unit_test::unit_test_main(bool (*)(), int, char**) in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    17# 0x000055D21C34E5B5 in ./testprecice
    18# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
    19# 0x000055D21C34E2DE in ./testprecice
    

    Expected behaviour

    The stacktrace should look like:

    ....
    File:       /home/benjamin/Programming/precice/src/cplscheme/ParallelCouplingScheme.cpp:26
    Expression: false
    Rank:       0
    Arguments:  
      0: "Assertion triggered in ParallelCouplingScheme::ParallelCouplingScheme" == Assertion triggered in ParallelCouplingScheme::ParallelCouplingScheme
    
    Stacktrace:
     0# 0x000055D21D0365C0 in ./ParallelCouplingScheme
     1# 0x000055D21CD5C7B7 in ./SomeThingMore
     2# 0x000055D21C517783 in ./SomeOtherThing
     3# 0x000055D21C5168AD in ./SomeThing
     4# 0x000055D21C3A3331 in ./testprecice
     5# 0x00007F5E30C523F2 in /lib/x86_64-linux-gnu/libboost_unit_test_framework.so.1.71.0
    

    Additional context

    I currently suspect bfb10564b73a5d43215233764fec76bd25ade27b, b8f96f9aefe3916835d23c2f05fc4ecdce8db01c, or b8fdde35054d1a287a3a430912ed4a32f103c53b to have introduced this change, but I'm not very deep into CMake, so I was not able to fix the problem.
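    One common cause of symbol-less stacktraces like the one above is that Boost.Stacktrace cannot resolve symbols in the executable itself unless the executable exports its symbols (e.g. is linked with -rdynamic) or a backend like libbacktrace is used. If the suspected commits changed the link options, a fix could be along these lines in CMake. This is an assumption, not a verified diagnosis; the target name testprecice is taken from this report:

```cmake
# Export symbols from the test executable so Boost.Stacktrace can resolve
# frames inside it (CMake adds -rdynamic-equivalent flags for executables).
set_target_properties(testprecice PROPERTIES ENABLE_EXPORTS ON)
# or, more directly on platforms where the flag applies:
target_link_options(testprecice PRIVATE -rdynamic)
```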

    bug usability 
    opened by BenjaminRodenberg 4
Releases (v2.5.0)
Owner
preCICE
A Coupling Library for Partitioned Multi-Physics Simulations on Massively Parallel Systems