A transaction processor for a hypothetical, general-purpose, central bank digital currency

Overview

Introduction

OpenCBDC is a technical research project focused on answering open questions surrounding central bank digital currencies (CBDCs).

This repository includes the core transaction processor for a hypothetical, general-purpose CBDC. Initially, this work was derived from Project Hamilton, a collaboration between the MIT Digital Currency Initiative (DCI) and the Federal Reserve Bank of Boston (FRBB).

For higher-level conceptual explanations, as well as findings and conclusions related to this code, see our research paper.

Initially, we focused our work on achieving high transaction throughput, low latency, and resilience against multiple geographical datacenter outages without significant downtime or any data loss. The design decisions we made to achieve these goals will help inform policy makers around the world about the spectrum of tradeoffs and available options for CBDC design.

Important News

NOTE: When there are significant changes to the repository that might require manual intervention downstream (or other important updates), we will make a NEWS post.

Architecture

We explored two system architectures for transaction settlement, both based on an unspent transaction output (UTXO) data model and transaction format. Both architectures implement the same schema representing an unspent hash set (UHS) abstraction. One architecture provides linearizability of transactions, whereas the other only provides serializability. By relaxing the ordering constraint, the peak transaction throughput supported by the system scales horizontally with the number of nodes, but the transaction history is unavailable, making the system harder to audit retroactively. Both architectures handle multiple geo-distributed datacenter outages with a recovery time objective (RTO) of under ten seconds and a recovery point objective (RPO) of zero.
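The UHS abstraction described above can be sketched in a few lines. The sketch below is illustrative only (the real system is distributed C++, and all names here are hypothetical):

```python
# Minimal sketch of the unspent hash set (UHS) abstraction described
# above. Names are hypothetical; the real system is C++ and distributed.
import hashlib

def uhs_id(output: bytes) -> str:
    # A UHS ID is a cryptographic hash committing to an output.
    return hashlib.sha256(output).hexdigest()

class UHS:
    def __init__(self) -> None:
        self._unspent: set[str] = set()

    def mint(self, outputs: list[bytes]) -> None:
        # The central bank inserts brand-new outputs.
        self._unspent.update(uhs_id(o) for o in outputs)

    def apply(self, input_ids: list[str], output_ids: list[str]) -> bool:
        # Atomically settle a compact transaction: all inputs must be
        # unspent; then inputs are deleted and outputs are created.
        if any(i not in self._unspent for i in input_ids):
            return False  # double spend or unknown input
        self._unspent.difference_update(input_ids)
        self._unspent.update(output_ids)
        return True
```

Settling the same inputs twice fails, which is the core invariant the backend enforces; validating the full transaction happens in the sentinels before it is reduced to this compact form.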

  1. "Atomizer" architecture
    • Materializes a total ordering of all transactions settled by the system in a linear sequence of batches.
    • Requires vertical scaling as peak transaction throughput is limited by the performance of a single system component.
    • Maximum demonstrated throughput ~170K transactions per second.
    • Geo-replicated latency <2 seconds.
  2. "Two-phase commit" architecture
    • Transaction history is not materialized and only a relative ordering is assigned between directly related transactions.
    • Combines two-phase commit (2PC) and conservative two-phase locking (C2PL) to create a system without a single bottlenecked component where peak transaction throughput scales horizontally with the number of nodes.
    • Maximum demonstrated throughput ~1.7M transactions per second.
    • Geo-replicated latency <1 second.

Read the architecture guide for a detailed description of the system components and implementation of each architecture.
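As a rough illustration of how the two-phase commit architecture avoids a single bottlenecked component, the sketch below partitions the UHS across shards, conservatively locks a transaction's inputs, then commits. Everything here (names, partitioning by hex prefix, a single-process coordinator) is an assumption for illustration and ignores replication, networking, and failure handling:

```python
# Hypothetical single-process sketch of 2PC + conservative two-phase
# locking over a sharded UHS; the real system is distributed C++.
class Shard:
    def __init__(self) -> None:
        self.unspent: set[str] = set()
        self.locked: set[str] = set()

    def prepare(self, input_ids: list[str]) -> bool:
        # Phase 1: lock every input up front (conservative 2PL) or refuse.
        if any(i not in self.unspent or i in self.locked for i in input_ids):
            return False
        self.locked.update(input_ids)
        return True

    def abort(self, input_ids: list[str]) -> None:
        self.locked.difference_update(input_ids)

    def commit(self, input_ids: list[str]) -> None:
        # Phase 2: spend the locked inputs and release the locks.
        self.unspent.difference_update(input_ids)
        self.locked.difference_update(input_ids)

def shard_for(uhs_id: str, shards: list[Shard]) -> Shard:
    # Partition the UHS ID space, e.g. by the first hex digit.
    return shards[int(uhs_id[0], 16) % len(shards)]

def settle(input_ids: list[str], output_ids: list[str],
           shards: list[Shard]) -> bool:
    by_shard: dict[Shard, list[str]] = {}
    for i in input_ids:
        by_shard.setdefault(shard_for(i, shards), []).append(i)
    # Phase 1: the coordinator asks each input shard to lock its inputs.
    prepared: list[tuple[Shard, list[str]]] = []
    for shard, ids in by_shard.items():
        if not shard.prepare(ids):
            for s, locked_ids in prepared:
                s.abort(locked_ids)  # release locks taken so far
            return False
    # Phase 2: spend inputs, then create outputs on their home shards.
    for shard, ids in by_shard.items():
        shard.commit(ids)
    for o in output_ids:
        shard_for(o, shards).unspent.add(o)
    return True
```

Because unrelated transactions touch disjoint UHS IDs (and therefore, usually, different shards), adding shards adds throughput, which is the horizontal-scaling property described above.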

Contributing

You can sign up to receive updates from technical working groups and to learn more about our work. If you would like to join our technical discussions and help workshop proposals, you can join our Zulip chat.

For more information on how to contribute, please see our Contribution Guide!

Get the Code

  1. Install Git
  2. Clone the repository (including submodules)
    • git clone --recurse-submodules https://github.com/mit-dci/opencbdc-tx

Run the Code

The easiest way to compile the code and run the system locally is using Docker.

Setup Docker

Don't forget to run the docker daemon!

Launch the System

Note: You will need to both run the system and interact with it; you can either use two shells, or you can add the --detach flag when launching the system (note that it will then remain running until you stop it, e.g., with docker stop). Additionally, you can start the atomizer architecture by passing --file docker-compose-atomizer.yml instead.

The commands below will build a new image every time you run them. You can remove the --build flag after the image has been built to avoid rebuilding. To run the system with our pre-built image, proceed to the next section for the commands to run.

  1. Run the System
    # docker compose --file docker-compose-2pc.yml up --build
    
  2. Launch a container in which to run wallet commands (use --network atomizer-network instead of --network 2pc-network if using the atomizer architecture)
    # docker run --network 2pc-network -ti opencbdc-tx /bin/bash
    

Launch the System With a Pre-built Image

We publish new Docker images for all commits to trunk. You can find the images in the GitHub Container Registry.

Note: You must use docker compose (not docker-compose) for this approach to work, or you will need to pull the image manually with docker pull ghcr.io/mit-dci/opencbdc-tx. Additionally, you can start the atomizer architecture by passing --file docker-compose-atomizer.yml --file docker-compose-prebuilt-atomizer.yml instead.

  1. Run the system
    # docker compose --file docker-compose-2pc.yml --file docker-compose-prebuilt-2pc.yml up --no-build
    
  2. Launch a container in which to run wallet commands (use --network atomizer-network instead of --network 2pc-network if using the atomizer architecture)
    # docker run --network 2pc-network -ti ghcr.io/mit-dci/opencbdc-tx /bin/bash
    

Setup test wallets and test them

The following commands are all performed from within the second container we started in the previous step. In each of the below commands, you should pass atomizer-compose.cfg instead of 2pc-compose.cfg if you started the atomizer architecture.

  • Mint new coins (e.g., 10 new UTXOs each with a value of 5 atomic units of currency)

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat mint 10 5
    [2021-08-17 15:11:57.686] [WARN ] Existing wallet file not found
    [2021-08-17 15:11:57.686] [WARN ] Existing mempool not found
    4bc23da407c3a8110145c5b6c38199c8ec3b0e35ea66bbfd78f0ed65304ce6fa
    

    If using the atomizer architecture, you'll need to sync the wallet after:

    # ./build/src/uhs/client/client-cli atomizer-compose.cfg mempool0.dat wallet0.dat sync
    
  • Inspect the balance of a wallet

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat info
    Balance: $0.50, UTXOs: 10, pending TXs: 0
    
  • Make a new wallet

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat newaddress
    [2021-08-17 15:13:16.148] [WARN ] Existing wallet file not found
    [2021-08-17 15:13:16.148] [WARN ] Existing mempool not found
    usd1qrw038lx5n4wxx3yvuwdndpr7gnm347d6pn37uywgudzq90w7fsuk52kd5u
    
  • Send currency from one wallet to another (e.g., 30 atomic units of currency)

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat send 30 usd1qrw038lx5n4wxx3yvuwdndpr7gnm347d6pn37uywgudzq90w7fsuk52kd5u
    tx_id:
    cc1f7dc708be5b07e23e125cf0674002ff8546a9342928114bc97031d8b96e75
    Data for recipient importinput:
    cc1f7dc708be5b07e23e125cf0674002ff8546a9342928114bc97031d8b96e750000000000000000d0e4f689b550f623e9370edae235de50417860be0f2f8e924eca9f402fcefeaa1e00000000000000
    Sentinel responded: Confirmed
    

    If using the atomizer architecture, you'll need to sync the sending wallet after:

    # ./build/src/uhs/client/client-cli atomizer-compose.cfg mempool0.dat wallet0.dat sync
    
  • Check that the currency is no longer available in the sending wallet

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat info
    Balance: $0.20, UTXOs: 4, pending TXs: 0
    
  • Import coins to the receiving wallet

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat importinput cc1f7dc708be5b07e23e125cf0674002ff8546a9342928114bc97031d8b96e750000000000000000d0e4f689b550f623e9370edae235de50417860be0f2f8e924eca9f402fcefeaa1e00000000000000
    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat sync
    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat info
    Balance: $0.30, UTXOs: 1, pending TXs: 0
    

Testing

Running Unit & Integration Tests

  1. Build the container
    # docker build . -t opencbdc-tx
    
  2. Run Unit & Integration Tests
    # docker run -ti opencbdc-tx ./scripts/test.sh
    
Issues
  • [ERROR] Failed to connect to any atomizers at Setup test wallets and test them


    Affected Branch

    main @ https://github.com/mit-dci/opencbdc-tx.git

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [ ] The issue is reproducible in docker

    Description

    While running "Setup test wallets and test them":

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat mint 10 5
    [2022-02-09 19:08:44.231] [WARN ] Existing wallet file not found
    [2022-02-09 19:08:44.232] [WARN ] Existing client file not found

    Following this, we see an error:

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat info
    Balance: $0.50, UTXOs: 10, pending TXs: 0
    # ./build/src/uhs/client/client-cli atomizer-compose.cfg mempool0.dat wallet0.dat sync
    [2022-02-09 19:10:34.623] [ERROR] Failed to connect to any atomizers
    terminate called after throwing an instance of 'std::system_error'
      what(): Invalid argument

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    fix/bug status/on-hold 
    opened by UrsaEli 21
  • Extend test script


    This pull request addresses Issue 122, 'Extend test.sh'. test.sh is the script that runs unit tests and integration tests and measures coverage. This pull request contains commits that extend it in the following ways:

    • Users can now choose via command-line arguments which tests to run (e.g. unit tests, integration tests, or both) and whether to measure coverage.
    • Users can now define the build folder via a command-line argument.
    • Users can now run the script from any folder.
    • Users can now read about what the script does and how to use it by calling it with -h or --help flags.

    In addition, test.sh now adheres to the 80-character line limit followed in the repo's C++ code. It also now encloses variable references in double quotes, as recommended in the Advanced Bash-Scripting Guide.

    The new test.sh in this pull request is completely backward compatible: running it with no command-line arguments does exactly what the previous version of the script does. As a result, documentation that references the script and code that invokes it should not need to change. Also, users don't need to change how they use the script if they don't want to.

    opened by mszulcz-mitre 11
  • adjust scripts, Cmake for OSX. Updates README


    Signed-off-by: Dave Bryson

    Changes:

    • Removed brew installs from configure.sh and added them to the build information in the README. Why? Calling sudo ./configure.sh fails because brew can't be run with sudo
    • Moved the macOS-specific CMAKE_FLAGS to CMakeLists.txt
    • Added build information to README

    References: https://github.com/mit-dci/opencbdc-tx/issues/94

    opened by davebryson 11
  • Fix tests build on MacOS


    This PR simplifies the Google Test dependency finding in CMake, and removes the gmock dependency, which isn't used. This fixes building the tests for me on macOS, where CMake insisted on using the shared libraries for Google Test, even when only static libraries were installed.

    opened by metalicjames 9
  • C++ errors failed to build


    Affected Branch

    Scanning dependencies of target crypto
    [  4%] Building CXX object 3rdparty/crypto/CMakeFiles/crypto.dir/sha256_avx2.o
    c++: error: unrecognized command line option '-mavx'
    c++: error: unrecognized command line option '-mavx2'
    make[2]: *** [3rdparty/crypto/CMakeFiles/crypto.dir/build.make:63: 3rdparty/crypto/CMakeFiles/crypto.dir/sha256_avx2.o] Error 1
    make[1]: *** [CMakeFiles/Makefile2:771: 3rdparty/crypto/CMakeFiles/crypto.dir/all] Error 2
    make: *** [Makefile:84: all] Error 2
    The command '/bin/sh -c mkdir build && cd build && cmake -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} .. && make' returned a non-zero code: 2
    ERROR: Service 'shard0' failed to build : Build failed

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [ ] The issue is reproducible in docker

    Description

    In order to reproduce the issue, follow these steps:

    I'm having trouble with the build; I got the error above.

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    fix/bug closed/duplicate 
    opened by Pi-Runner 9
  • Fix #39: README issues w/ Docker Permissions


    • don't tell users to run as root
    • use compose if we're going to use compose
    • simplify commands for up/down/stop

    Would be great if we could have a default docker-compose file that wouldn't require specifying the file each time, but not necessary. Would also be nice to not use a ubuntu base image, they're rather bloated. Would be even better to separate runtime and building images for faster builds and smaller images overall. I'd also like to not make the interactive container manually, but simply define a runtime container as a service in compose, and users could run docker-compose run wallet, for example, and that'd dump them into a shell within the environment specified (and since it's in the same compose file, the network is automatically correct).

    If any/all of these additional changes are desired, that can be added to this or made as separate PRs too.

    closes #39

    opened by tarfeef101 7
  • Guard against malicious sentinels


    In the current UHS-based data model, sentinels validate the transaction-local invariants of a full transaction before converting the transaction to a compact representation, which is the data structure processed by the backend (2PC or Atomizer). Since the data required to validate the full transaction is lost after being converted to a compact transaction, it would be trivial for a compromised sentinel to submit an invalid transaction for processing. This would allow the adversary to spend any UHS ID, and mint arbitrary UHS IDs representing any value (that an honest sentinel would later accept as a valid output). This is possible because the upstream component from the sentinels (shards in the atomizer architecture, coordinators in 2PC) blindly accepts compact transactions (deleting input UHS IDs and creating output UHS IDs) and has no knowledge of the pre-image data. The current implementation does not restrict the origin of compact transactions either, so any entity with internal network access can send compact transactions directly to backend components, bypassing the sentinels.

    It would be desirable to protect against up to n sentinels being compromised. n should be configurable based on the threat model and number of nodes in the system. As long as <=n sentinels are malicious, it should be impossible to settle an invalid transaction. Furthermore, upstream components should only accept compact transactions for processing from known sentinels.

    topic/hardening-security topic/architecture enhancement/feature 
    opened by metalicjames 6
  • [benchmarking question]


    Affected Branch

    trunk

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [ ] The issue is reproducible in docker

    Description

    In order to reproduce the issue, follow these steps:

    1. I stress-tested the 2PC architecture on 8 real machines with 8 shards, each machine running one sentinel, coordinator, and locking_shardd.
    2. Running ./tools/bench/twophase-gen and computing throughput from tx_samples_x.txt, I find the throughput is not stable, as shown below (the first column is the count, the second column is the timestamp). Why?
       42 1644890386
       43724 1644890393
       119916 1644890394
       30255 1644890395
       13765 1644890396
       2840 1644890397
       2 1644891076
       3 1644891077
       1 1644891101
       8 1644891107
       15 1644891170
       13 1644891172
       12 1644891173
       1 1644891459
       2 1644891473
       1 1644891474
       13 1644891529
       18 1644891540
       3 1644891541
       1 1644891637
       7 1644891638
       4 1644891661
       42 1644891684
       2 1644891826
       3 1644891828
       51904 1644892939
       39527 1644892940
       13126 1644893569
       23061 1644893570
       3549 1644893571
    3. The CPU usage of the core where the coordinator process resides and the core where the locking_shardd process resides reaches 100%, but the other cores are idle. Is CPU binding used?
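A per-second throughput series like the one above can be computed from raw per-transaction completion timestamps with a few lines. This sketch assumes a hypothetical samples format (one completion timestamp in seconds per transaction), which may not match the tools/bench output exactly:

```python
# Aggregate per-transaction completion timestamps (in seconds) into a
# per-second throughput series; the input format is an assumption.
from collections import Counter

def throughput_per_second(timestamps: list[float]) -> list[tuple[int, int]]:
    # Bucket each completion into its whole second and count.
    counts = Counter(int(t) for t in timestamps)
    return sorted(counts.items())
```

For example, `throughput_per_second([1644890386.1, 1644890386.7, 1644890393.0])` yields `[(1644890386, 2), (1644890393, 1)]`.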

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    fix/bug 
    opened by tanqiwen 6
  • Add unit test to cover non-error cases of cbdc::nuraft_serializer::read()


    This pull request addresses https://github.com/mit-dci/opencbdc-tx/issues/117#issue-1257715247

    Added a unit test to cover the non-error case for cbdc::nuraft_serializer::read()

    Closes https://github.com/mit-dci/opencbdc-tx/issues/117

    opened by ykaravas 5
  • Sentinel compact transaction attestations


    This PR implements a potential solution for #84.

    Each sentinel is assigned a unique public/private key, defined in the system configuration. When sentinels validate a transaction, they sign the resulting compact transaction, if valid. There is a configuration parameter attestation_threshold which defines the number of distinct sentinels required to sign a compact transaction before upstream components will consider it valid. The sentinel that received the transaction first will distribute the transaction to additional sentinels which will also validate and return signatures. The originating sentinel gathers all of the signatures until attestation_threshold are available. Finally, the sentinel forwards the compact transaction and associated signatures to the upstream component for settlement. The upstream components (shards, atomizer cluster, coordinator cluster) check the signatures on the compact transaction, ensuring that attestation_threshold distinct, known sentinels have signed it. This solution guards against up to attestation_threshold - 1 malicious sentinels and prevents upstream components from accepting compact transactions from unknown sentinels.
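The threshold rule described above can be sketched as follows, using HMACs as stand-in signatures; the actual implementation uses public-key signatures, and all names here are illustrative:

```python
# Sketch of checking sentinel attestations on a compact transaction.
# HMAC is a stand-in for the real public-key signature scheme.
import hashlib
import hmac

# Known sentinels and their keys (illustrative values).
SENTINEL_KEYS = {"sentinel0": b"key0", "sentinel1": b"key1",
                 "sentinel2": b"key2"}
ATTESTATION_THRESHOLD = 2

def attest(sentinel_id: str, key: bytes, compact_tx: bytes):
    # A sentinel signs the compact transaction it validated.
    return sentinel_id, hmac.new(key, compact_tx, hashlib.sha256).digest()

def upstream_accepts(compact_tx: bytes, attestations) -> bool:
    # Count valid signatures from *distinct, known* sentinels only.
    valid: set[str] = set()
    for sentinel_id, sig in attestations:
        key = SENTINEL_KEYS.get(sentinel_id)
        if key is None:
            continue  # unknown sentinel: ignored
        expected = hmac.new(key, compact_tx, hashlib.sha256).digest()
        if hmac.compare_digest(sig, expected):
            valid.add(sentinel_id)
    return len(valid) >= ATTESTATION_THRESHOLD
```

With a threshold of 2, a single compromised sentinel cannot get an invalid compact transaction accepted on its own, matching the attestation_threshold - 1 guarantee described above.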

    opened by metalicjames 5
  • Adding a Github Actions Workflow to publish Docker images to Github C…


    Adding a GitHub Actions workflow to publish Docker images to the GitHub Container Registry. Images will be published tagged with the branch name; the latest and commit-SHA tags are applied only on the default branch. In this case, every time a branch is merged into trunk there will be a new Docker image published with the tags sha-<commit>, latest, and trunk.

    This docker image can be pulled using the following:

    docker pull ghcr.io/mit-dci/opencbdc-tx:latest
    

    It can also be used in a downstream Docker image like this:

    FROM ghcr.io/mit-dci/opencbdc-tx:latest
    
    • I've added a new file called docker-compose-prebuilt-2pc.yml that only contains the image and build args necessary to make sure that docker compose will use a prebuilt image if this file is provided. Documentation has been updated with instructions.
      • The benefit of this change is that it will be significantly faster to get up and running with the system if you are not making any code changes.

    Re-opening #76 Signed-off-by: Kyle Crawshaw [email protected]

    opened by kylecrawshaw 5
  • Fix doc and double save in client


    This PR contains 2 commits:

    1. The 1st commit fixes an inaccurate method description

    Here's the description of 'client::init': https://github.com/mit-dci/opencbdc-tx/blob/2b21cc9f857cabaf129a0a413a3ed8825dea2f36/src/uhs/client/client.hpp#L37-L42

    The implementation shows that init does not attempt to create data files if they don't exist: https://github.com/mit-dci/opencbdc-tx/blob/2b21cc9f857cabaf129a0a413a3ed8825dea2f36/src/uhs/client/client.cpp#L33-L48

    The creation of a wallet data file occurs in transaction::wallet::save: https://github.com/mit-dci/opencbdc-tx/blob/2b21cc9f857cabaf129a0a413a3ed8825dea2f36/src/uhs/transaction/wallet.cpp#L307-L327

    The creation of a client data file occurs in client::save_client_state: https://github.com/mit-dci/opencbdc-tx/blob/2b21cc9f857cabaf129a0a413a3ed8825dea2f36/src/uhs/client/client.cpp#L287-L298

    2. The 2nd commit removes a redundant call

    client::mint appears to call save twice. As a result, the files written in the 1st call are cleared and the same data is then re-written to them. The first call to save occurs when mint calls import_transaction: https://github.com/mit-dci/opencbdc-tx/blob/2b21cc9f857cabaf129a0a413a3ed8825dea2f36/src/uhs/client/client.cpp#L57-L70 https://github.com/mit-dci/opencbdc-tx/blob/2b21cc9f857cabaf129a0a413a3ed8825dea2f36/src/uhs/client/client.cpp#L198-L201 The 2nd call occurs in mint itself (line 67). This commit deletes the 2nd call.

    opened by mszulcz-mitre 3
  • Check preconditions for init methods for locking shard and coordinator controllers


    This pull request addresses Issue #153. It contains the following:

    • commits that check preconditions of the init methods for cbdc::controller::controller and cbdc::locking_shard::controller
    • unit tests for the init methods
    • an extension to the test script, test.sh, that allows unit tests to use configuration files
    opened by mszulcz-mitre 1
  • Accessing non-existent elements in locking_shard::controller and coordinator::controller


    Affected Branch

    trunk

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [X] The issue is reproducible in docker

    Description

    Problem

    This issue is similar to Issue #140.

    There are no checks in the constructor or init method for locking_shard::controller that the values of the parameters shard_id and node_id are valid. This can lead to accessing non-existent elements in vectors. For example, here is the constructor: https://github.com/mit-dci/opencbdc-tx/blob/a8b696b315c670f3b3f71ab353cc471c0d7025e8/src/uhs/twophase/locking_shard/controller.cpp#L17-L50

    On line 32, m_shard_id is used as an index into m_shard_ranges. On line 40, shard_id and node_id are used as indices into m_locking_shard_raft_endpoints. They are also used as indices into vectors in at least 2 places in init.

    The same problem occurs in coordinator::controller, but the variables used as indices are m_node_id and m_coordinator_id: https://github.com/mit-dci/opencbdc-tx/blob/a8b696b315c670f3b3f71ab353cc471c0d7025e8/src/uhs/twophase/coordinator/controller.cpp#L18-L55

    Solution

    There are a few ways to solve this problem.

    1. One way is to use the at method of std::vector for element access. Unlike the bracket operator [], it provides bounds checking and will raise an exception if the requested element is out of range.

    2. A second way is to move all code with potentially problematic vector accesses into the init methods, manually check whether the indices are in range, and, if not, log an error and return false. This pattern is already used to check other things in init. For example, here is init for locking_shard::controller: https://github.com/mit-dci/opencbdc-tx/blob/a8b696b315c670f3b3f71ab353cc471c0d7025e8/src/uhs/twophase/locking_shard/controller.cpp#L52-L88
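The second approach might look like the following sketch (in Python for brevity; the names loosely mirror the C++ controllers and are not the actual interface):

```python
# Sketch of validating index parameters in init() and failing cleanly
# instead of performing an out-of-range vector access.
class Controller:
    def __init__(self, shard_id: int, node_id: int,
                 shard_ranges: list, raft_endpoints: list) -> None:
        # Store parameters only; defer index use until init() validates them.
        self.shard_id = shard_id
        self.node_id = node_id
        self.shard_ranges = shard_ranges
        self.raft_endpoints = raft_endpoints

    def init(self) -> bool:
        if not 0 <= self.shard_id < len(self.shard_ranges):
            print("error: shard_id out of range")
            return False
        if not 0 <= self.node_id < len(self.raft_endpoints[self.shard_id]):
            print("error: node_id out of range")
            return False
        # Indices are now known to be in range; safe to use them.
        self.range = self.shard_ranges[self.shard_id]
        self.endpoint = self.raft_endpoints[self.shard_id][self.node_id]
        return True
```

Callers then treat a false return from init as a startup failure, which is the existing convention for the init methods, rather than handling an out-of-range exception.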

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    enhancement/refactor difficulty/01-good-first-issue fix/bug 
    opened by mszulcz-mitre 3
  • Sentinel 2pc controller bug fixes


    This PR addresses the bugs identified in Issue #140 and Issue #141. The bug fixes and associated unit tests are the same as those described in the issues.

    opened by mszulcz-mitre 0
  • config.sh accommodates brew's new install prefix on Silicon Mac


    Also, gmake is provided by brew install make, so the gnumake -> gmake link is no longer necessary.

    Fix #139 Signed-off-by: Alexander Jung [email protected]

    opened by AlexRamRam 0
  • Simplify copyright statement


    In the initial release, only two parties needed to be named for copyright. That's no longer the case due to several open-source contributors (thank you and welcome)!

    Rather than having to update the copyright notice across all the files that contain it, we should simplify this process by maintaining a single list:

    • [ ] Create an AUTHORS file
    • [ ] Migrate all known copyright holders to AUTHORS
    • [ ] Update all copyright notices to point to the AUTHORS file
    • [ ] Explore automation / update process to ask people to include their name in AUTHORS if they wish to have their copyright recorded
    enhancement/documentation 
    opened by HalosGhost 0
Owner
The MIT Digital Currency Initiative @ Media Lab