Overview

KRATOS Multiphysics ("Kratos") is a framework for building parallel, multi-disciplinary simulation software, aiming at modularity, extensibility, and high performance. Kratos is written in C++ and has an extensive Python interface. More in the Overview.

Kratos is free under the BSD-4 license and can be used as-is even in commercial software. Many of its main applications are also free and BSD-4 licensed, but each derived application can have its own proprietary license.

Main Features

Kratos is multiplatform and available for Windows, Linux (several distros) and macOS.

Kratos is OpenMP- and MPI-parallel and scales up to thousands of cores.

Kratos provides a core that defines the common framework, plus several applications that work like plug-ins and can be extended to diverse fields.

Its main applications are:

Some main modules are:

Documentation

Here you can find the basic documentation of the project:

Getting Started

Tutorials

More documentation

Wiki

Examples of use

Kratos has been used to simulate many different problems in a wide variety of disciplines, ranging from wind flow over singular buildings to granular-domain dynamics. Some examples and validation benchmarks simulated with Kratos can be found here.

Barcelona Wind Simulation

Contributors

Organizations contributing to Kratos:



International Center for Numerical Methods in Engineering




Chair of Structural Analysis
Technical University of Munich


Altair Engineering


Deltares

Our Users

Some users of the technologies developed in Kratos are:

Airbus Defence and Space
Stress Methods & Optimisation Department

Siemens AG
Corporate Technology

ONERA, The French Aerospace Lab
Applied Aerodynamics Department

Looking forward to seeing your logo here!

Special Thanks To

In Kratos Core:

  • Boost for ublas
  • pybind11 for exposing C++ to Python
  • GidPost for providing output to GiD
  • JSON for Modern C++
  • AMGCL for its highly scalable multigrid solver
  • filesystem, a header-only single-file std::filesystem-compatible helper library based on the C++17 specs
  • ZLib, the compression library

In applications:

How to cite Kratos?

Please use the following references when citing Kratos in your work.

  • Dadvand, P., Rossi, R. & Oñate, E. An Object-oriented Environment for Developing Finite Element Codes for Multi-disciplinary Applications. Arch Computat Methods Eng 17, 253–297 (2010). https://doi.org/10.1007/s11831-010-9045-2
  • Dadvand, P., Rossi, R., Gil, M., Martorell, X., Cotela, J., Juanpere, E., Idelsohn, S., Oñate, E. (2013). Migration of a generic multi-physics framework to HPC environments. Computers & Fluids. 80. 301–309. https://doi.org/10.1016/j.compfluid.2012.02.004
  • Mataix Ferrándiz, V., Bucher, P., Rossi, R., Cotela, J., Carbonell, J. M., Zorrilla, R., … Tosi, R. (2020, November 27). KratosMultiphysics (Version 8.1). Zenodo. https://doi.org/10.5281/zenodo.3234644
Comments
  • [Core] Adding Subproperties

    Fixes #2414

    My only concern is that the only ModelPart method that does not do a conversion from PropertiesWithSubProperties to Properties in Python is GetProperties().

    We defined a new class called PropertiesWithSubProperties, which derives from Properties (we do this to avoid overloading the Properties object further).

    Enhancement Kratos Core Consensus 
    opened by loumalouomega 141
  • [Core][Not to merge right now] Proposal for solving strategies factories

    @frastellini and I are interested in implementing a proper adaptive NR strategy. To do this properly we need to take into account the recomputation of the processes. @RiccardoRossi told me to create factories for the solving strategies, similar to the linear solver ones, in order to implement this consistently with the design of the analysis.

    This PR adds factories to:

    • Convergence criteria
    • Builder and solver
    • Strategies -> This one uses the other factories
    • Schemes

    I used the parameter keys already used in the solvers in order to reduce conflicts with the current implementations. Further changes will be necessary anyway.

    Right now it only affects the core; the idea is to adapt the application counterparts once this is merged and approved.

    I am thinking of moving the factories to a different folder, in order to reduce the size of the includes folder (which is huge).

    This PR includes changes from:

    #3178 and #3179; these changes will disappear once merged into master

    I was thinking about how to implement the tests for these factories. Maybe using the info from the different classes and expecting a certain output depending on the parameters. What do you say @pooyan-dadvand?

    Enhancement Discussion Kratos Core Implementation Committee Consensus 
    opened by loumalouomega 104
  • SIMP element for topology optimization

    Hi everyone,

    I am busy with the reactivation of the Topology Optimization Application, but I am having trouble implementing the SIMP element based on the small-displacement solid element (small_displacement.cpp) of the Structural Mechanics Application. In the legacy TopOpt code, the SIMP element was based on the Solid Mechanics Application, and it seems that the implementation "philosophy" of these elements is different. I am specifically having a problem with the Calculate function.

    I am developing this in the following repository: MyRepository

    The test example that I am using: examples/01_Small_Cantilever_Hexahedra/

    I am not able to get to the root of this problem. Does anyone have an idea and can help me with this?

    Thanks in advance!

    opened by PhiHo-eng 95
  • Release 5.2

    The next release has been scheduled for the end of November, so at the end of this month we will branch the release from master, to be used for interface refinement and bug fixing.

    It is very important to add your application to the Linux and Windows binaries so that they are available and downloadable for the next 3 months for people who do not need to compile the code. (This is useful for collaborating with users who only work with Python.) The code at the moment of making the release branch should contain all features, and at release time it should be as stable as possible. I would like to emphasize that the effort of creating a release considerably increases the quality of an application and improves its maintainability, so I would strongly recommend application developers to make this effort.

    I have changed the previous release project to a new one, maintaining the same structure and also the applications which were included.

    I have also created milestones for better organization.

    Steps to take

    I would encourage all @KratosMultiphysics/team-maintainers and also developers to:

    1. Revise the project and add their application if they want to join this release.
    2. Revise their corresponding issues and assign them to the release 5.2 milestone.

    I would kindly ask for all your collaboration during this release period.

    Update: I have realized that an issue cannot be added to two milestones, so I have removed one milestone to keep the organization easier.

    Release 
    opened by pooyan-dadvand 95
  • Model v3 - third iteration of the model redesign

    This is the third iteration of the model. It is now neither registered in the Kernel nor a global object.

    I would say that the design is pretty clean now (there is nothing strange in the model object, in the sense that it is NOT a singleton any longer).

    The problem is that this change is backward-incompatible in that it hides the ModelPart: a ModelPart can now ONLY be created through the Model interface.

    As of now, I have ported all of the core, to the point where all the tests pass (both Python and C++). If we go for this design I will need help porting all of the applications.
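    The ownership pattern described above can be sketched in plain Python (a hypothetical illustration of the design, not actual Kratos code):

```python
# Hypothetical sketch of the proposed design: the Model owns all
# ModelParts, and a ModelPart cannot be constructed directly.

class ModelPart:
    def __init__(self, name, _token=None):
        # Only the Model holds the private token, so direct
        # construction is rejected.
        if _token is not Model._token:
            raise RuntimeError("ModelPart can only be created through Model")
        self.name = name

class Model:
    _token = object()  # private capability shared only with CreateModelPart

    def __init__(self):
        self._model_parts = {}

    def CreateModelPart(self, name):
        part = ModelPart(name, _token=Model._token)
        self._model_parts[name] = part
        return part

    def GetModelPart(self, name):
        return self._model_parts[name]

model = Model()
structure = model.CreateModelPart("Structure")
assert model.GetModelPart("Structure") is structure
```

    Direct instantiation of ModelPart raises, which mirrors the backward-incompatible part of the change.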

    API Breaker 
    opened by RiccardoRossi 92
  • [core] Reduce node and dof size stage 1

    This PR is the first stage in reducing the size of Dof and Node:

    • The sizeof(Dof<double>) has been reduced from 64 to 32 bytes
    • The sizeof(Node<3>) has been reduced from 240 to 224 bytes (for the record, the real occupation of an empty node with its allocations was about 290 bytes before these changes)

    The reduction has been made by:

    • Rearranging the Dof data to be more coherent and reducing the data sizes using C++ bit fields.
    • Dof no longer derives from IndexedObject, avoiding the virtual table pointer.
    • A new NodalData class has been created to hold all data stored in the Node, reducing the number of pointers.
    • Dof has a pointer to NodalData, which also holds the Id of the Node, so the copy of the Node Id is removed.
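    The bit-field part of the size reduction can be illustrated with a small sketch (ctypes mirrors the C layout; the field names and widths here are made up for illustration, not the actual Dof layout):

```python
import ctypes

# Three logical fields stored as three full 64-bit words...
class Loose(ctypes.Structure):
    _fields_ = [("variable_key", ctypes.c_uint64),
                ("reaction_key", ctypes.c_uint64),
                ("is_fixed",     ctypes.c_uint64)]

# ...versus the same fields packed into a single word with bit fields.
class Packed(ctypes.Structure):
    _fields_ = [("variable_key", ctypes.c_uint64, 31),
                ("reaction_key", ctypes.c_uint64, 31),
                ("is_fixed",     ctypes.c_uint64, 1)]

print(ctypes.sizeof(Loose), ctypes.sizeof(Packed))  # 24 8
```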

    This PR changes the API by not deriving from IndexedObject (so Node and Dof are no longer IndexedObjects), and Dof's SetId is removed. As far as I know this should not affect backward compatibility, as this relation was not exploited in the code; meanwhile the behaviour is the same. I would suggest that all @KratosMultiphysics/team-maintainers test their applications with this branch before merging it to master.

    Behaviour Change 
    opened by pooyan-dadvand 91
  • [Structural] Adding initial stress and strain capability

    In this PR I'll be adding to all the implemented CLs the capability of imposing an initial strain or stress. Only the linear elastic 3D law has been improved so far, so you can see how it works.

    In order to apply this initial strain/stress we have used the mdpa feature of

    Begin ElementalData INITIAL_STRESS_VECTOR
        1 [6] (0,0,1e6,0,0,0)
        2 [6] (0,0,1e6,0,0,0)
        3 [6] (0,0,1e6,0,0,0)
        4 [6] (0,0,1e6,0,0,0)
    End ElementalData

    Begin ElementalData INITIAL_STRAIN_VECTOR
        1 [6] (0.01,0.01,0.01,0,0,0)
        2 [6] (0.01,0.01,0.01,0,0,0)
        3 [6] (0.01,0.01,0.01,0,0,0)
        4 [6] (0.01,0.01,0.01,0,0,0)
    End ElementalData

    or by using the Python process inside the JSON:

    {
        "python_module" : "set_initial_state_process",
        "kratos_module" : "KratosMultiphysics",
        "process_name"  : "set_initial_state_process",
        "Parameters"    : {
            "mesh_id"         : 0,
            "model_part_name" : "Structure",
            "dimension"       : 2,
            "imposed_strain"  : [0.0, 0.0, 0.0],
            "imposed_stress"  : [1000000, 0, 0],
            "imposed_deformation_gradient" : [[1,0],[0,1]],
            "interval"        : [0.0, 1e30]
        }
    }


    The method (inside the linear elastic CL) checks whether the geometry has this initial stress/strain; otherwise it is a ZeroVector:

        /**
         * @brief Adds the initial stress vector if it is defined in the InitialState
         */
        void AddInitialStressVectorContribution(Vector& rStressVector, Parameters& rParameterValues)
        {
            const auto p_initial_state = pGetInitialState();
            if (p_initial_state) {
                noalias(rStressVector) += p_initial_state->GetInitialStressVector();
            }
        }
    

    Additionally, I can add the capability of retrieving the initial strains and stresses from the material properties.

    Enhancement Applications 
    opened by AlejandroCornejo 87
  • [Structural] adding prebuckling solver

    The prebuckling solver computes the critical load multiplier for a given load set at which the structure will buckle. It always refers to the initial, user-defined load. The implementation does not compute the "classical buckling eigenvalue problem", given as (K_mat + lambda*K_geo)*phi = 0, as usual, but relies only on the total stiffness matrix of two consecutive load steps: (K_current + lambda*K_dot)*phi = 0, with K_dot = (K_next(lambda + h) - K_current) / h, where h is a small change in the load factor. Therefore it is not required to compute K_mat and K_geo separately, which would require major changes in the current element implementations.

    To follow the entire prebuckling load path, the simulation is conducted iteratively while the applied loads are modified towards the computed buckling load. We differentiate between a "small" load step and a "big" load step. The underlying theory of the approximation of K_dot requires a small change in the load factor (the small step). After the small step we analyse the eigenvalue problem and find a new load factor. During the big step we push the loads closer to the actual buckling load, e.g. to half the value of the computed buckling load. Then another small step and eigenvalue analysis follow, and so on; this procedure is repeated until the load factors finally converge.

    In case one wants to compute the linear/theoretical buckling load, the simulation can be stopped after the first small load step. The solver only converges when the initial load is smaller than the actual buckling load, so in general it is recommended to apply a very small initial load.
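    The two-matrix idea can be checked with a small numpy sketch (my own illustration, assuming a linear dependence K(lambda) = K_mat + lambda*K_geo so that the classical buckling multiplier is known in closed form):

```python
import numpy as np

# Illustrative 2x2 stiffness with a known buckling multiplier.
K_mat = np.array([[4.0, -1.0], [-1.0, 3.0]])
K_geo = np.array([[-1.0, 0.0], [0.0, -0.5]])

def K(lam):
    """Total stiffness at load factor lam (linear model for the check)."""
    return K_mat + lam * K_geo

lam0, h = 0.1, 1e-6                      # current load factor, small step
K_current = K(lam0)
K_dot = (K(lam0 + h) - K_current) / h    # finite-difference stiffness rate

# (K_current + lam*K_dot)*phi = 0  ->  lam are the eigenvalues of
# (-K_dot)^-1 * K_current; the critical multiplier is lam0 + smallest lam.
lams = np.linalg.eigvals(np.linalg.solve(-K_dot, K_current))
multiplier = lam0 + min(l.real for l in lams if l.real > 0)

# Reference: classical (K_mat + lam*K_geo)*phi = 0.
ref = min(l.real for l in
          np.linalg.eigvals(np.linalg.solve(-K_geo, K_mat)) if l.real > 0)
print(abs(multiplier - ref) < 1e-4)  # True
```

    Only two consecutive total stiffness matrices are needed, which is the point made above: K_mat and K_geo never have to be assembled separately.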

    Applications Feature 
    opened by manuelmessmer 81
  • explicit mpcs slave-master relation

    Hi, I have the following setup: the master nodes are on the light blue line and two slave nodes are at the connection between the dark and the light blue line. A force is applied at the right lower node, and as you can see in the video above I can realize a sliding on the light blue line by coupling DISP_Y and DISP_Z between master and slave, plus searching for new neighbour nodes in each iteration (not completely correct sliding, but I will improve this...). This is using the implicit dynamic scheme.

    I now wanted to try the new explicit MPCs, and this is the result: one can see that the master line does not deform, although the constraints are properly set.

    I think the problem is that in the current implementation of the explicit MPCs the load is not transferred to the master line. My guess would be that we have to couple the residuals of the slave and the master DoFs in void ExplicitCentralDifferencesScheme::Update, because they are in a certain relation which is not considered at the moment.
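    The coupling guessed at above can be sketched with numpy (a hypothetical illustration, not the Kratos implementation): if the slave DoFs follow the masters through u_s = T u_m, then the forces accumulated on the slave DoFs must be carried back to the masters via the transpose relation, r_m += T^T r_s, before the explicit update, otherwise the load never reaches the master line.

```python
import numpy as np

# Hypothetical master-slave relation: each slave DoF interpolates
# two master DoFs (rows of T sum to 1).
T = np.array([[0.50, 0.50],
              [0.25, 0.75]])

r_master = np.zeros(2)
r_slave = np.array([10.0, 4.0])   # e.g. the externally applied force

# Transfer the slave residual to the masters; total force is conserved.
r_master += T.T @ r_slave
r_slave[:] = 0.0                  # slaves carry no independent residual

print(r_master)  # [6. 8.]
```

    Note that the transfer conserves the total force (10 + 4 = 6 + 8), which is what the transpose relation guarantees.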

    I would be happy about any suggestions.

    Help Wanted 
    opened by KlausBSautter 78
  • Defining the local coordinate system of elements (beams, shells)

    Hi all,

    With this post we open the discussion of the definition of the local coordinate system for elements, which is especially crucial for beam elements. Feel free to add other members of the team who might be interested.

    Here is our suggestion:

    The local x-axis is the vector spanning from the starting point to the end point of the beam. The local y-axis is then calculated with the help of the cross product globalZ(0,0,1) × local x-axis. Another cross product, local x-axis × local y-axis, yields the local z-axis. All local axes are normalized. One exception: in case the beam axis is parallel to the global Z-axis: nX = (0, 0, ±1); nY = (0, 1, 0); nZ = (∓1, 0, 0).
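    The suggested convention can be written down directly with numpy (my own sketch; for the vertical special case, nY is taken as (0, 1, 0) so that the frame stays orthonormal):

```python
import numpy as np

def local_axes(start, end, tol=1e-12):
    """Local beam frame: x from start to end, y = globalZ x localX,
    z = localX x localY, all normalized."""
    nx = np.asarray(end, float) - np.asarray(start, float)
    nx /= np.linalg.norm(nx)
    if abs(abs(nx[2]) - 1.0) < tol:        # beam parallel to global Z
        sign = 1.0 if nx[2] > 0 else -1.0
        return nx, np.array([0.0, 1.0, 0.0]), np.array([-sign, 0.0, 0.0])
    ny = np.cross([0.0, 0.0, 1.0], nx)
    ny /= np.linalg.norm(ny)
    nz = np.cross(nx, ny)                  # already unit length
    return nx, ny, nz

# A beam along the global X-axis recovers the global frame.
nx, ny, nz = local_axes([0, 0, 0], [2, 0, 0])
print(nx, ny, nz)  # [1. 0. 0.] [0. 1. 0.] [0. 0. 1.]
```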

    Looking forward to your comments.

    Andreas

    Discussion 
    opened by AndreasWinterstein 76
  • [Core] Transition #3185 with only explicit strategies

    Description: This is a transition PR for #3185, as @RiccardoRossi suggested. Only explicit strategies are included here (there are not many, so it is simpler). This way, how #3185 works can be appreciated more easily.

    Changelog

    • Added BaseFactory
    • Added factory for explicit builder
    • Added factory for explicit strategy
    • Added tests (cpp/python)
    • Added to Kratos components and registered
    Enhancement Kratos Core Transition 
    opened by loumalouomega 75
  • [Poro/Dam] Fix joint width calculation for interface element and cleanup

    📝 Description: This PR fixes an error in the calculation of the joint width for the interface element. It also includes a minor cleanup of various methods of the element.

    🆕 Changelog

    • The shape function matrix associated with the displacements at the interface has been modified to correct the joint width calculation (interface_element_utilities.hpp)
    • The CalculateJointWidth methods in U_Pw_small_strain_interface_element.cpp and small_displacement_interface_element.cpp have been updated
    • Some minor cleanup of other related methods has been performed
    Cleanup Bugfix 
    opened by ipouplana 0
  • [Core] Missing distance check in Line3D2 `IsInside`

    opened by loumalouomega 4
  • Line condition cannot be applied in a 3D space

    A line condition cannot be applied in a 3D space in the Geomechanics application. During the calculation, neighbour elements of the condition are searched for. In this procedure, the faces of the model elements are compared to the line condition elements. However, 3D elements do not contain line faces, thus neighbour elements cannot be found and the calculation does not continue.

    Bug GeoMechanics 
    opened by aronnoordam 8
  • Embedded modeler

    📝 Description: Just adding a handy tool for embedded simulations. I've added it to the core because it imports a core process.

    Maybe, in the future, modelers could implement an UpdateModelPart or UpdateGeometry. But that should be discussed in a separate issue.

    🆕 Changelog

    • Add embedded modeler
    opened by miguelmaso 1
  • Fix typos in `scripts/` and `.github/` subdirectories



    Description

    Fix various typos

    Changelog

    • Fixed typos in scripts/ subdirectory
    • Fixed typos in .github/ subdirectory
    opened by luzpaz 1
  • [Core] Consistent definition between InvertMatrix and GeneralizedInvertMatrix

    opened by loumalouomega 0
Releases(v9.2)
  • v9.2(Sep 16, 2022)

    Kratos now uses C++17 by default.

    You can get the latest version of Kratos from pip: $ pip install KratosMultiphysics-all

    Or this version: $ pip install KratosMultiphysics-all==9.2

    Source code(tar.gz)
    Source code(zip)
  • v9.1.4(Jul 28, 2022)

    Added developments for the Eflows4HPC European Project.

    You can get the latest version of Kratos from pip: $ pip install KratosMultiphysics-all

    Or this version: $ pip install KratosMultiphysics-all==9.1.4

    Source code(tar.gz)
    Source code(zip)
  • v9.1(Mar 2, 2022)

    • Added distributed sparse matrices
    • Fixed module errors in CoSimulationApplication
    • Added initial background support for multistage

    To obtain the code please: pip install KratosMultiphysics-all

    Source code(tar.gz)
    Source code(zip)
  • v9.0(Nov 22, 2021)

    Features

    • Distribution:

      • Kratos is now distributed and installed through python packages. Please refer to the wiki for more info.
      • Kratos now supports being installed alongside popular Python modules (numpy, scipy, etc.)
    • Features:

      • Core:

        • ParallelUtilities can now handle exceptions in parallel regions
        • Linear and Quadratic pyramid geometries are added
      • CoSimulationApplication:

        • Support for coupling to external solvers in MPI through CoSimIO
        • In MPI it is now possible to have solvers run with fewer MPI processes or in serial
      • MappingApplication:

        • Support for mapping non-historical variables was added
        • Searching was improved and is much faster now
      • MultilevelMonteCarloApplication

        • Hierarchical Monte Carlo methods support MPI parallelism
      • ExaquteSandboxApplication

        • Added initial condition process (it supports MPI parallelism)
        • Added wind generator process to generate steady-state and turbulent wind inlet (it supports MPI parallelism)
        • Added process to compute simultaneously drag force and base moment
    Source code(tar.gz)
    Source code(zip)
    Kratos-9.0.1-cp310-linux.zip(102.91 MB)
    Kratos-9.0.1-cp36-linux.zip(102.95 MB)
    Kratos-9.0.1-cp36-win.zip(35.93 MB)
    Kratos-9.0.1-cp37-linux.zip(102.95 MB)
    Kratos-9.0.1-cp37-win.zip(35.93 MB)
    Kratos-9.0.1-cp38-linux.zip(102.90 MB)
    Kratos-9.0.1-cp38-win.zip(35.99 MB)
    Kratos-9.0.1-cp39-linux.zip(102.90 MB)
    Kratos-9.0.1-cp39-win.zip(35.77 MB)
  • v8.1(Nov 25, 2020)

  • v8.0.1(May 28, 2020)

    Features

    • Added GlobalPointer
    • FEAST with MKL support was added to the EigenSolversApplication #6482

    Improvements

    • Neighbour and element nodes now use GlobalPointers
    • Applications are now imported as python modules.
    • PARTITION_INDEX is now an int (was double) #6771
    • GlobalNumberOf... functions added to Communicator #6747

    MultiLevelMonteCarlo

    • MultilevelMonteCarloApplication has been integrated with Hierarchical Monte Carlo library XMC.
    • MultilevelMonteCarloApplication is capable of running Monte Carlo, Multilevel Monte Carlo and Continuation Multilevel Monte Carlo algorithms in a distributed environment with optimal parallel efficiency.

    API Changes

    • GetValuesOnIntegrationPoints has been removed in favor of CalculateValuesOnIntegrationPoints
    • find_nodal_neighbours_process no longer accepts the number of expected results.
    • Access functions in the core are now const. Please refer to #2993, #4938 and #5290 for more info.
    • Variable copy constructor is now explicit and private.

    Deprecations

    • ExternalSolversApplication has been deprecated in favor of the EigenSolversApplication (which will become the LinearSolversApplication in the future).
    • NOT_FLAG flags are removed from Kratos. Temporal FLAG.AsFalse() has been added for compatibility.
    • MixedUPLinearSolver and DeflatedGMRESSolver have been removed.
    • The builder and solver now takes the main ModelPart as default (it was the ComputingModelPart previously).

    Notes

    • Please notice that this will be the last version with Python 2 support
    Source code(tar.gz)
    Source code(zip)
    kratos-8.0.1-linux-64.tgz(149.95 MB)
    kratos-8.0.1-win-64.zip(117.54 MB)
  • 7.1(Nov 29, 2019)

  • v7.0.2-Exaqute(May 30, 2019)

  • v7.0-Exaqute(May 29, 2019)

  • 7.0(Mar 20, 2019)

    Core Changes

    Features

    • New Model class has been added which stores all model parts #2417 #3211 #3730 #3835
    • Adding HasNodalSolutionStepVariable to ModelPart #2298
    • Exposing geometry of Elements/Conditions in python #2969
    • Adding merge to data value container #3134
    • Added new File and Stream Serializers #3233

    Improvements

    • Added a large number of tests in C++ and Python
    • AnalysisStage improvements #2135
    • Several improvements in geometries #2355 #2386 #3105 #3621 #3531 #3796
    • Enhancing constraints #2967 #2897 #2896
    • Fixing norm in residual criteria #2976
    • Improving error messages #2095
    • Improving nodal data checks #2091
    • Reducing use of boost classes #2162 #2189 #2676 #2675 #2987

    API Changes

    • ModelPart no longer has a default constructor and must be created through the model #2417 #3835
    • Many methods in the core are now marked as explicit #2542 #2527 #3602
    • Python-level MPI collectives are now divided into int and float variants #3051
    • Mapping Application has been rewritten #3108
    • pGetDof is now a const method that returns a const iterator #3122
    • AreaNormal function has been renamed to Normal #3123
    • AdjointFluid Application has been removed and its functionality has been ported to FluidDynamics #3153
    • DEM Application strategies now require a new Parameters argument to accommodate CUDA #3172
    • Many methods in the DEMApplication python api are now private #3832
    • mpi_communicator now uses the new Stream Serializer #3233
    • UpdateTimeInModelParts now has a new bool parameter to toggle the print #3298
    • FluidDynamic Application embedded solvers have been unified under a common API #3303
    • MPI flags behavior has been normalized and deprecated functions removed #3347
    • CustomResponse function now requires a ModelPart #3541

    Deprecations

    • line_2d.h #2076
    • Old-style methods of the constitutive law #3420
    • WorkingSpaceDimension method from conditions and elements #2997
    Source code(tar.gz)
    Source code(zip)
    kratos-7.0-linux-64.tgz(132.43 MB)
    kratos-7.0-win-64.zip(61.65 MB)
  • v7.0-Beta1(Mar 8, 2019)

  • v6.0(Jun 3, 2018)

    • Fixed problems in ConstitutiveModelsApplication (#2249)
    • Fixed problems in FluidDynamicsApplication (#2243, #2254)
    • Fixed problems in StructuralMechanicsApplication (#2235)
    • Fixed problems in SolidMechanicsApplication (#2227, #2249)
    • Fixed problems in ConvectionDiffusionApplication (#2224, #2250)
    • Fixed a missing include in the core (#2242)
    • Fixed problems in the application generator (#2228)
    Source code(tar.gz)
    Source code(zip)
    kratos-6.0-linux-64.tgz(125.20 MB)
    kratos-6.0-macOS-HighSierra.zip(91.42 MB)
    kratos-6.0-win-32.zip(46.60 MB)
    kratos-6.0-win-64.zip(55.22 MB)
  • v5.3(Mar 9, 2018)

    Core

    • Fixed node order while writing Line elements

    Structural Mechanics Application

    • Added process for Eigenvalue Postprocessing Visualization
    • Added force/Moment output for Beams and Trusses
    • Added initial support for beam-hinges
    • Added new constitutive law for J2 plasticity in 3D and Plane Strain 2D geometries
    • Added new small strain B-bar element for hexahedral and quadrilateral geometries

    Solid Mechanics Application

    • Old result files are now cleaned
    • Constitutive law corrections for shells
    • Corrections in beams

    Fluid Mechanics Application

    • Clean up and rewrite of the monolithic fluid element to unify the body-fitted and embedded formulations.
    • The new monolithic element now supports quadrilateral and tetrahedral geometries, as well as non-Newtonian constitutive relations through the use of a ConstitutiveLaw.
    • New utility to compute the forces over arbitrary objects (DragUtility) both for body-fitted and embedded problems.
    • Corrections in Ausas 2D condition tests
    • Corrections in body-fitted drag computation

    Poromechanics Application

    • The internal variable for the damage model is only updated at the end of the step
    • Small bug fixes
    Source code(tar.gz)
    Source code(zip)
    kratos-5.3.0-linux-64.tgz(122.18 MB)
    kratos-5.3.0-win-32.zip(43.77 MB)
    kratos-5.3.0-win-64.zip(53.35 MB)
  • v5.2.1(Dec 20, 2017)

  • v5.2.0(Dec 5, 2017)

    What's new:

    • Several bug fixes in core and applications:
      • Better MPI portability
      • More consistent support of the JSON configuration file
    • Structural Mechanics:
      • Linear and nonlinear beam element (for small and large displacement)
      • Linear and nonlinear truss element (for small and large displacement)
      • Initial support for beam orientation
      • Truss element now available
      • Corotational Shell Elements available with 3 and 4 Nodes, thick and thin
      • New Hyperelastic Law
      • Corrections in the Updated Lagrangian Element
      • Membrane of both 3 and 4 nodes now available
    • Fluid Dynamics:
      • Several improvements under the hood for the Embedded CFD solver
      • New monolithic version of the embedded solver based on Ausas shape functions
      • New slip model for embedded CFD
    • FSI:
      • Improved precision of the solvers
    • Dam application
      • New GUI
    Source code(tar.gz)
    Source code(zip)
    kratos-5.2.0-linux-64.tgz(111.55 MB)
    kratos-5.2.0-win-32.zip(41.56 MB)
    kratos-5.2.0-win-64.zip(50.70 MB)
  • v5.0-Simphony(Apr 26, 2017)

    This release contains all the code needed for the new versions of the Simphony project wrappers. Notice that only the following applications are provided in the compiled release:

    • Meshing Application
    • External Solvers Application
    • Fluid Dynamics Application
    • DEM Application
    • Swimming DEM Application
    Source code(tar.gz)
    Source code(zip)
    kratos-simphony.tgz(27.68 MB)