A transaction processor for a hypothetical, general-purpose, central bank digital currency

Introduction

OpenCBDC is a technical research project focused on answering open questions surrounding central bank digital currencies (CBDCs).

This repository includes the core transaction processor for a hypothetical, general-purpose CBDC. This work was initially derived from Project Hamilton, a collaboration between the MIT Digital Currency Initiative (DCI) and the Federal Reserve Bank of Boston (FRBB).

For higher-level conceptual explanations, as well as findings and conclusions related to this code, see our research paper.

Initially, we focused our work on achieving high transaction throughput, low latency, and resilience against multiple geographical datacenter outages without significant downtime or any data loss. The design decisions we made to achieve these goals will help inform policy makers around the world about the spectrum of tradeoffs and available options for CBDC design.

Important News

NOTE: When there are significant changes to the repository that might require manual intervention downstream (or other important updates), we will make a NEWS post.

Architecture

We explored two system architectures for transaction settlement, both based on an unspent transaction output (UTXO) data model and transaction format. Both architectures implement the same schema representing an unspent hash set (UHS) abstraction (see the sketch after the list below). One architecture provides linearizability of transactions, whereas the other provides only serializability. By relaxing the ordering constraint, the peak transaction throughput supported by the system scales horizontally with the number of nodes, but the transaction history is unavailable, making the system harder to audit retroactively. Both architectures handle multiple geo-distributed datacenter outages with a recovery time objective (RTO) of under ten seconds and a recovery point objective (RPO) of zero.

  1. "Atomizer" architecture
    • Materializes a total ordering of all transactions settled by the system in a linear sequence of batches.
    • Requires vertical scaling as peak transaction throughput is limited by the performance of a single system component.
    • Maximum demonstrated throughput ~170K transactions per second.
    • Geo-replicated latency <2 seconds.
  2. "Two-phase commit" architecture
    • Transaction history is not materialized and only a relative ordering is assigned between directly related transactions.
    • Combines two-phase commit (2PC) and conservative two-phase locking (C2PL) to create a system without a single bottlenecked component where peak transaction throughput scales horizontally with the number of nodes.
    • Maximum demonstrated throughput ~1.7M transactions per second.
    • Geo-replicated latency <1 second.
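
As a rough illustration of the UHS abstraction shared by both architectures, settling a compact transaction amounts to atomically swapping input hashes for output hashes in a single set. The sketch below uses invented names (uhs_id, compact_tx, apply) rather than the project's actual types:

    #include <string>
    #include <unordered_set>
    #include <vector>

    // Hypothetical stand-in for a 32-byte UHS ID.
    using uhs_id = std::string;

    // A compact transaction: opaque input/output hashes, no amounts or keys.
    struct compact_tx {
        std::vector<uhs_id> inputs;
        std::vector<uhs_id> outputs;
    };

    // Settle atomically against the unspent hash set: succeed only if every
    // input is currently unspent, then delete the inputs and create the outputs.
    auto apply(std::unordered_set<uhs_id>& uhs, const compact_tx& tx) -> bool {
        for(const auto& in : tx.inputs) {
            if(uhs.count(in) == 0) {
                return false; // double-spend or unknown funds
            }
        }
        for(const auto& in : tx.inputs) {
            uhs.erase(in);
        }
        for(const auto& out : tx.outputs) {
            uhs.insert(out);
        }
        return true;
    }

The two architectures differ in how they order these swaps (a single total order versus a relative order between related transactions), not in the swap itself.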

Read the architecture guide for a detailed description of the system components and implementation of each architecture.

Contributing

You can sign up to receive updates from technical working groups and to learn more about our work. If you would like to join our technical discussions and help workshop proposals, you can join our Zulip chat.

For more information on how to contribute, please see our Contribution Guide!

Get the Code

  1. Install Git
  2. Clone the repository (including submodules)
    • git clone --recurse-submodules https://github.com/mit-dci/opencbdc-tx

Run the Code

The easiest way to compile the code and run the system locally is using Docker.

Setup Docker

Don't forget to start the Docker daemon!

Launch the System

Note: You will need to both run the system and interact with it; you can either use two shells, or you can add the --detach flag when launching the system (note that it will then remain running until you stop it, e.g., with docker stop). Additionally, you can start the atomizer architecture by passing --file docker-compose-atomizer.yml instead.

The command below builds a new image each time it is run. You can remove the --build flag after the image has been built to avoid rebuilding. To run the system with our pre-built image, proceed to the next section.

  1. Run the System
    # docker compose --file docker-compose-2pc.yml up --build
    
  2. Launch a container in which to run wallet commands (use --network atomizer-network instead of --network 2pc-network if using the atomizer architecture)
    # docker run --network 2pc-network -ti opencbdc-tx /bin/bash
    

Launch the System With a Pre-built Image

We publish new Docker images for all commits to trunk. You can find the images in the GitHub Container Registry.

Note: You must use docker compose (not docker-compose) for this approach to work; otherwise, pull the image manually with docker pull ghcr.io/mit-dci/opencbdc-tx. Additionally, you can start the atomizer architecture by passing --file docker-compose-atomizer.yml --file docker-compose-prebuilt-atomizer.yml instead.

  1. Run the system
    # docker compose --file docker-compose-2pc.yml --file docker-compose-prebuilt-2pc.yml up --no-build
    
  2. Launch a container in which to run wallet commands (use --network atomizer-network instead of --network 2pc-network if using the atomizer architecture)
    # docker run --network 2pc-network -ti ghcr.io/mit-dci/opencbdc-tx /bin/bash
    

Setup test wallets and test them

The following commands are all performed from within the second container started in the previous step. In each of the commands below, pass atomizer-compose.cfg instead of 2pc-compose.cfg if you started the atomizer architecture.

  • Mint new coins (e.g., 10 new UTXOs each with a value of 5 atomic units of currency)

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat mint 10 5
    [2021-08-17 15:11:57.686] [WARN ] Existing wallet file not found
    [2021-08-17 15:11:57.686] [WARN ] Existing mempool not found
    4bc23da407c3a8110145c5b6c38199c8ec3b0e35ea66bbfd78f0ed65304ce6fa
    

If using the atomizer architecture, you'll need to sync the wallet afterward:

    # ./build/src/uhs/client/client-cli atomizer-compose.cfg mempool0.dat wallet0.dat sync
    
  • Inspect the balance of a wallet (the 50 atomic units minted above appear as $0.50, i.e., one atomic unit corresponds to one cent)

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat info
    Balance: $0.50, UTXOs: 10, pending TXs: 0
    
  • Make a new wallet

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat newaddress
    [2021-08-17 15:13:16.148] [WARN ] Existing wallet file not found
    [2021-08-17 15:13:16.148] [WARN ] Existing mempool not found
    usd1qrw038lx5n4wxx3yvuwdndpr7gnm347d6pn37uywgudzq90w7fsuk52kd5u
    
  • Send currency from one wallet to another (e.g., 30 atomic units of currency)

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat send 30 usd1qrw038lx5n4wxx3yvuwdndpr7gnm347d6pn37uywgudzq90w7fsuk52kd5u
    tx_id:
    cc1f7dc708be5b07e23e125cf0674002ff8546a9342928114bc97031d8b96e75
    Data for recipient importinput:
    cc1f7dc708be5b07e23e125cf0674002ff8546a9342928114bc97031d8b96e750000000000000000d0e4f689b550f623e9370edae235de50417860be0f2f8e924eca9f402fcefeaa1e00000000000000
    Sentinel responded: Confirmed
    

If using the atomizer architecture, you'll need to sync the sending wallet afterward:

    # ./build/src/uhs/client/client-cli atomizer-compose.cfg mempool0.dat wallet0.dat sync
    
  • Check that the currency is no longer available in the sending wallet

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat info
    Balance: $0.20, UTXOs: 4, pending TXs: 0
    
  • Import coins to the receiving wallet

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat importinput cc1f7dc708be5b07e23e125cf0674002ff8546a9342928114bc97031d8b96e750000000000000000d0e4f689b550f623e9370edae235de50417860be0f2f8e924eca9f402fcefeaa1e00000000000000
    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat sync
    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat info
    Balance: $0.30, UTXOs: 1, pending TXs: 0
    

Testing

Running Unit & Integration Tests

  1. Build the container
    # docker build . -t opencbdc-tx
    
  2. Run Unit & Integration Tests
    # docker run -ti opencbdc-tx ./scripts/test.sh
    
Issues and Pull Requests
  • [ERROR] Failed to connect to any atomizers at Setup test wallets and test them

    Affected Branch

    main @ https://github.com/mit-dci/opencbdc-tx.git

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [ ] The issue is reproducible in docker

    Description

    While running "Setup test wallets and test them":

      # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat mint 10 5
      [2022-02-09 19:08:44.231] [WARN ] Existing wallet file not found
      [2022-02-09 19:08:44.232] [WARN ] Existing client file not found

    following this, we see an error:

      # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat info
      Balance: $0.50, UTXOs: 10, pending TXs: 0
      root@88cc0c05fa6d:/opt/tx-processor# Balance: $0.50, UTXOs: 10, pending TXs: 0
      bash: Balance:: command not found
      root@88cc0c05fa6d:/opt/tx-processor# ./build/src/uhs/client/client-cli atomizer-compose.cfg mempool0.dat wallet0.dat sync
      [2022-02-09 19:10:34.623] [ERROR] Failed to connect to any atomizers
      terminate called after throwing an instance of 'std::system_error'
        what(): Invalid argument

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    fix/bug status/on-hold 
    opened by UrsaEli 21
  • Part of "Run the Code" section in README.md appears broken

    Affected Branch

    trunk

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [X] The issue is reproducible in docker

    Description

    In the "Run the Code" section of README.md, the following step hangs:

    • Send currency from the first wallet to the second wallet created in the previous step (e.g., 30 atomic units of currency)
      # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat send 30 <wallet address>
      

    where <wallet address> is the address returned from running the client-cli with the newaddress keyword.

    I followed the instructions in the sections "Launch the System" and "Setup test wallets and test them" for the 2PC architecture. The instructions are:

    1. Run the System
      # docker compose --file docker-compose-2pc.yml up --build
      
    2. Launch a container in which to run wallet commands (use --network atomizer-network instead of --network 2pc-network if using the atomizer architecture)
      # docker run --network 2pc-network -ti opencbdc-tx /bin/bash
      
    • Mint new coins (e.g., 10 new UTXOs each with a value of 5 atomic units of currency)

      # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat mint 10 5
      [2021-08-17 15:11:57.686] [WARN ] Existing wallet file not found
      [2021-08-17 15:11:57.686] [WARN ] Existing mempool not found
      4bc23da407c3a8110145c5b6c38199c8ec3b0e35ea66bbfd78f0ed65304ce6fa
      
    • Inspect the balance of a wallet

      # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat info
      Balance: $0.50, UTXOs: 10, pending TXs: 0
      
    • Make a new wallet

      # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool1.dat wallet1.dat newaddress
      [2021-08-17 15:13:16.148] [WARN ] Existing wallet file not found
      [2021-08-17 15:13:16.148] [WARN ] Existing mempool not found
      usd1qrw038lx5n4wxx3yvuwdndpr7gnm347d6pn37uywgudzq90w7fsuk52kd5u
      
    • Send currency from the first wallet to the second wallet created in the previous step (e.g., 30 atomic units of currency)

      # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat send 30 usd1qrw038lx5n4wxx3yvuwdndpr7gnm347d6pn37uywgudzq90w7fsuk52kd5u
      

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    fix/bug 
    opened by mszulcz-mitre 11
  • Extend test script

    This pull request addresses Issue 122, 'Extend test.sh'. test.sh is the script that runs unit tests and integration tests and measures coverage. This pull request contains commits that extend it in the following ways:

    • Users can now choose via command-line arguments which tests to run (e.g. unit tests, integration tests, or both) and whether to measure coverage.
    • Users can now define the build folder via a command-line argument.
    • Users can now run the script from any folder.
    • Users can now read about what the script does and how to use it by calling it with -h or --help flags.

    In addition, test.sh now adheres to the 80-character line limit followed in the repo's C++ code. It also now has variable references enclosed in double quotes, as recommended in the Advanced Bash-Scripting Guide.

    The new test.sh in this pull request is completely backward compatible: running it with no command-line arguments does exactly what the previous version of the script does. As a result, documentation that references the script and code that invokes it should not need to change. Also, users don't need to change how they use the script if they don't want to.

    opened by mszulcz-mitre 11
  • adjust scripts, Cmake for OSX. Updates README

    Signed-off-by: Dave Bryson

    Changes:

    • Removed brew installs from configure.sh and added them to the build information in the README. Why? Calling sudo ./configure.sh fails because brew can't be run with sudo.
    • Moved OSX-specific CMAKE_FLAGS to CMakeLists.txt
    • Added build information to README

    References: https://github.com/mit-dci/opencbdc-tx/issues/94

    opened by davebryson 11
  • Fix tests build on MacOS

    This PR simplifies the googletest dependency finding in CMake and removes the gmock dependency, which isn't used. This fixes building the tests for me on MacOS, where CMake insisted on using the shared libraries for googletest even when only static libraries were installed.

    opened by metalicjames 9
  • tcp_client::init bug fix

    GitHub Issue #131 describes a bug that makes it impossible to trigger the error "Failed to start coordinator client" in sentinel_2pc::controller::init (src/uhs/twophase/sentinel_2pc/controller.cpp, lines 38-41). This pull request contains a commit that fixes the bug using the method described in Issue #131. The fix causes the unit test tcp_rpc_test.send_fail_test to fail, so the next commit in this pull request fixes it by testing for behavior that's expected after the bug fix. The final commit in this pull request adds a new unit test that triggers the error that was previously impossible due to the bug.

    opened by mszulcz-mitre 9
  • C++ errors failed to build

    Affected Branch

      Scanning dependencies of target crypto
      [ 4%] Building CXX object 3rdparty/crypto/CMakeFiles/crypto.dir/sha256_avx2.o
      c++: error: unrecognized command line option '-mavx'
      c++: error: unrecognized command line option '-mavx2'
      make[2]: *** [3rdparty/crypto/CMakeFiles/crypto.dir/build.make:63: 3rdparty/crypto/CMakeFiles/crypto.dir/sha256_avx2.o] Error 1
      make[1]: *** [CMakeFiles/Makefile2:771: 3rdparty/crypto/CMakeFiles/crypto.dir/all] Error 2
      make: *** [Makefile:84: all] Error 2
      The command '/bin/sh -c mkdir build && cd build && cmake -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} .. && make' returned a non-zero code: 2
      ERROR: Service 'shard0' failed to build : Build failed

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [ ] The issue is reproducible in docker

    Description

    In order to reproduce the issue, follow these steps:

    Having trouble with the build got the error above.

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    fix/bug closed/duplicate 
    opened by Pi-Runner 9
  • Initial commit of Helm charts and e2e testing

    This PR aims to improve validation of changes to opencbdc by setting up a Kubernetes cluster with minikube in GitHub Actions and then installing the 2PC architecture. After installation into the minikube cluster, tests run to check whether pods/containers have started and are running. This is a very minimal test that currently checks nothing besides pod/container status. While we are using the Helm chart to run a test, this exact configuration can be used in any Kubernetes cluster/environment. Documentation should be improved in a future PR to make it clear how to use the Helm chart and customize configuration.

    The following has been added:

    • Helm Chart for 2PC architecture to simplify running 2PC in Kubernetes
    • Go tests using testify and Terratest to install the 2PC Helm Chart and check if containers are running

    Signed-off-by: Kyle Crawshaw [email protected]

    opened by kylecrawshaw 8
  • Arch linux snappy

    Per my last comment on #171: check_library_exists(snappy snappy_compress "" HAVE_SNAPPY) should be changed to find_library(SNAPPY_LIBRARY snappy REQUIRED), as this library is needed by LevelDB; if only the shared version exists on the system, it must be added to target linking.

    opened by pr4u4t 7
  • Fix #39: README issues w/ Docker Permissions

    • don't tell users to run as root
    • use compose if we're going to use compose
    • simplify commands for up/down/stop

    It would be great to have a default docker-compose file that wouldn't require specifying the file each time, but that's not necessary. It would also be nice not to use an Ubuntu base image; they're rather bloated. It would be even better to separate runtime and build images for faster builds and smaller images overall. I'd also like to avoid creating the interactive container manually, and instead define a runtime container as a service in compose: users could run docker-compose run wallet, for example, and that would drop them into a shell within the specified environment (and since it's in the same compose file, the network is automatically correct).

    If any/all of these additional changes are desired, that can be added to this or made as separate PRs too.

    closes #39

    opened by tarfeef101 7
  • Guard against malicious sentinels

    In the current UHS-based data model, sentinels validate the transaction-local invariants of a full transaction before converting the transaction to a compact representation, which is the data structure processed by the backend (2PC or Atomizer). Since the data required to validate the full transaction is lost after being converted to a compact transaction, it would be trivial for a compromised sentinel to submit an invalid transaction for processing. This would allow the adversary to spend any UHS ID and mint arbitrary UHS IDs representing any value (that an honest sentinel would later accept as a valid output). This is possible because the upstream component from the sentinels (shards in the atomizer architecture, coordinators in 2PC) blindly accepts compact transactions (deleting input UHS IDs and creating output UHS IDs) and has no knowledge of the pre-image data. The current implementation does not restrict the origin of compact transactions either, so any entity with internal network access can send compact transactions directly to backend components, bypassing the sentinels.

    It would be desirable to protect against up to n sentinels being compromised, where n is configurable based on the threat model and the number of nodes in the system. As long as at most n sentinels are malicious, it should be impossible to settle an invalid transaction. Furthermore, upstream components should only accept compact transactions for processing from known sentinels.
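
    A minimal sketch of the kind of threshold check this implies; the types and names here (sentinel_id, attestation, accept_compact_tx) are hypothetical, not the project's actual API:

    #include <cstddef>
    #include <set>
    #include <vector>

    // Hypothetical identifiers for illustration only.
    using sentinel_id = std::size_t;

    struct attestation {
        sentinel_id signer;
        bool signature_valid; // assume checked against a known sentinel key
    };

    // Accept a compact transaction only if more than n distinct, known
    // sentinels have attested to it: with at most n compromised sentinels,
    // at least one honest sentinel must then have validated the full
    // transaction.
    auto accept_compact_tx(const std::vector<attestation>& atts,
                           const std::set<sentinel_id>& known_sentinels,
                           std::size_t n) -> bool {
        std::set<sentinel_id> distinct;
        for(const auto& att : atts) {
            if(att.signature_valid && known_sentinels.count(att.signer) > 0) {
                distinct.insert(att.signer);
            }
        }
        return distinct.size() >= n + 1;
    }

    Restricting accepted signers to known_sentinels also addresses the issue's second concern, that backend components currently accept compact transactions from any origin.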

    topic/hardening-security topic/architecture enhancement/feature 
    opened by metalicjames 6
  • Errors in running "E2E Testing with Kubernetes"

    Hi, I tried to run "E2E Testing with Kubernetes", but all test cases failed with the following error:

      Events:
        Type     Reason             Age                     From               Message
        Normal   Scheduled          7m17s                   default-scheduler  Successfully assigned opencbdc-tx-qg7ep1/shard-0-1-0 to opencbdc
        Warning  Failed             5m17s (x12 over 7m15s)  kubelet            Error: ErrImageNeverPull
        Warning  ErrImageNeverPull  2m6s (x27 over 7m15s)   kubelet            Container image "opencbdc-tx:latest" is not present with pull policy of Never

    Not sure where to get the correct image; would you please help when you have time? Thank you.

    File "testrun.log" attached. testrun.log Yes, I have run the ./scripts/build-docker.sh

    FYI: OS: Ubuntu 22.04

    I tried to run:

      E2E Testing with Kubernetes
      Requirements: Go (go test library used to run tests), Minikube, Helm, Kubectl
      Running tests: ./scripts/test-e2e-minikube.sh
      Review results and logs at testruns//

    FYI: ./scripts/build-docker.sh

      Sending build context to Docker daemon  106.7MB
      Step 1/10 : ARG IMAGE_VERSION="ubuntu:20.04"
      Step 2/10 : ARG BASE_IMAGE="ghcr.io/mit-dci/opencbdc-tx-base:latest"
      Step 3/10 : FROM $IMAGE_VERSION AS base
      20.04: Pulling from library/ubuntu
      eaead16dc43b: Pull complete
      Digest: sha256:450e066588f42ebe1551f3b1a535034b6aa46cd936fe7f2c6b0d72997ec61dbd
      Status: Downloaded newer image for ubuntu:20.04
       ---> 680e5dfb52c7
      Step 4/10 : ENV DEBIAN_FRONTEND noninteractive
      .......

    $ docker images -a --digests

      REPOSITORY                        TAG          DIGEST                                                                    IMAGE ID      CREATED         SIZE
      <none>                            <none>       <none>                                                                    f217f881db26  20 minutes ago  119MB
      <none>                            <none>       <none>                                                                    5c8c1273d9d3  20 minutes ago  94.7MB
      opencbdc-tx-twophase              latest       <none>                                                                    b08cde7306d1  20 minutes ago  119MB
      <none>                            <none>       <none>                                                                    87c3318f7a11  20 minutes ago  117MB
      <none>                            <none>       <none>                                                                    d10a8be8a553  20 minutes ago  116MB
      <none>                            <none>       <none>                                                                    f5bef2400852  20 minutes ago  117MB
      <none>                            <none>       <none>                                                                    92f60702b416  20 minutes ago  1.81GB
      <none>                            <none>       <none>                                                                    202277689b37  20 minutes ago  73.6MB
      <none>                            <none>       <none>                                                                    94790781d5dc  20 minutes ago  72.8MB
      <none>                            <none>       <none>                                                                    6b14885d4292  21 minutes ago  1.64GB
      opencbdc-tx-base                  latest       <none>                                                                    676b02da7baa  21 minutes ago  1.53GB
      <none>                            <none>       <none>                                                                    18a322dc3633  23 minutes ago  72.8MB
      <none>                            <none>       <none>                                                                    65fb0c173c7f  23 minutes ago  72.8MB
      <none>                            <none>       <none>                                                                    ac60d424e32b  23 minutes ago  72.8MB
      <none>                            <none>       <none>                                                                    1932112415d7  23 minutes ago  72.8MB
      <none>                            <none>       <none>                                                                    fea9d7cc6b92  23 minutes ago  72.8MB
      <none>                            <none>       <none>                                                                    1b2a63db5764  23 minutes ago  72.8MB
      ubuntu                            20.04        sha256:450e066588f42ebe1551f3b1a535034b6aa46cd936fe7f2c6b0d72997ec61dbd  680e5dfb52c7  8 days ago      72.8MB
      ghcr.io/mit-dci/opencbdc-tx-base  latest       sha256:2076bdd505b9a003acebc733c1981ec4ef1ed3d3f2cdea1ba8abb3e1d2250720  49fc6337a4eb  13 days ago     1.54GB
      ghcr.io/mit-dci/opencbdc-tx-base  sha-16bd61a  sha256:2076bdd505b9a003acebc733c1981ec4ef1ed3d3f2cdea1ba8abb3e1d2250720  49fc6337a4eb  13 days ago     1.54GB
      opencbdc-tx                       latest       <none>                                                                    929938b6244b  2 weeks ago     2.88GB
      gcr.io/k8s-minikube/kicbase       v0.0.35      sha256:e6f9b2700841634f3b94907f9cfa6785ca4409ef8e428a0322c1781e809d311b  7fb60d0ea30e  4 weeks ago     1.12GB

    opened by Cbdc2022 2
  • Performance fixes for sentinel attestations

    This PR adds performance improvements to the sentinel attestations:

    • Validation and attestation happen in parallel threads in the sentinel
    • Attestation requests are prioritized by sentinels such that the end-to-end latency of transactions is reduced under high(er) load: incoming transactions wait until existing transactions pending attestation are completed (see the sketch after this list).
    • Validation of attestations happens in parallel in the coordinator
    • Validation of attestations happens in parallel and outside of the state machine (before replication) in the shard.
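
    A rough sketch of the prioritization idea (hypothetical names, not the PR's actual implementation): keep two queues and always drain transactions already pending attestation before starting new ones, so in-flight work finishes first under load.

    #include <mutex>
    #include <optional>
    #include <queue>

    // Hypothetical handle for a transaction awaiting work.
    struct tx_request {};

    class attestation_scheduler {
      public:
        // New transactions enter the low-priority queue.
        void enqueue_new(tx_request req) {
            const std::lock_guard<std::mutex> l(m_mut);
            m_new.push(std::move(req));
        }

        // Transactions whose attestation is already in progress get priority.
        void enqueue_pending(tx_request req) {
            const std::lock_guard<std::mutex> l(m_mut);
            m_pending.push(std::move(req));
        }

        // Worker threads drain pending attestations before new arrivals.
        auto next() -> std::optional<tx_request> {
            const std::lock_guard<std::mutex> l(m_mut);
            auto& q = !m_pending.empty() ? m_pending : m_new;
            if(q.empty()) {
                return std::nullopt;
            }
            auto req = std::move(q.front());
            q.pop();
            return req;
        }

      private:
        std::mutex m_mut;
        std::queue<tx_request> m_pending;
        std::queue<tx_request> m_new;
    };

    Draining pending work first bounds how long a transaction sits half-attested, which is the end-to-end latency the PR targets.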
    opened by wadagso-gertjaap 0
  • Fix casting errors in client-cli

    client-cli can produce casting errors if the numbers given as command-line arguments are negative. With this commit, the code now checks if the arguments are negative, and if so, prints an error message and exits.

    Signed-off-by: Michael L. Szulczewski [email protected]

    opened by mszulcz-mitre 8
  • Casting errors in client-cli

    Affected Branch

    trunk

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [X] The issue is reproducible in docker

    Description

    The executable compiled from client-cli.cpp is used to interact with the transaction processor. In the "Launch the System" section of README.md, it's called to mint new coins, print the balance of a wallet, make a new wallet, and send coins between wallets. For example, to mint new coins, the command is:

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat mint 10 5
    

    When calling client-cli with the commands "mint", "send", or "fan", the code may exhibit a casting error. For example, if the mint command is accidentally called with a negative number, such as in

    # ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat mint -1 5
    

    the code casts -1 to 18446744073709551615 and creates 18446744073709551615 new UTXOs without warning. If the mint command is invoked with -18446744073709551615 outputs, it actually only makes one:

    root@102611d59e8f:/opt/tx-processor# ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat mint -18446744073709551615 5
    [2022-09-28 05:18:56.541] [WARN ] Existing wallet file not found
    [2022-09-28 05:18:56.541] [WARN ] Existing client file not found
    34162c6120b9ddb3d1dd6f69b4898ba2af4e4e6868e3b099d39316c133ab54ae
    root@102611d59e8f:/opt/tx-processor# ./build/src/uhs/client/client-cli 2pc-compose.cfg mempool0.dat wallet0.dat info
    Balance: $0.05, UTXOs: 1, pending TXs: 0
    

    This is caused by the use of std::stoull and std::stoul, which are used to convert strings to unsigned integers. For example, here's the function mint_command:

    auto mint_command(cbdc::client& client, const std::vector<std::string>& args)
        -> bool {
        static constexpr auto min_mint_arg_count = 7;
        if(args.size() < min_mint_arg_count) {
            std::cerr << "Mint requires args <n outputs> <output value>"
                      << std::endl;
            return false;
        }
    
        const auto n_outputs = std::stoull(args[5]);
        const auto output_val = std::stoul(args[6]);
    

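    The companion pull request above fixes this by checking whether an argument is negative before conversion; a minimal sketch of such a guard (hypothetical helper, not the actual patch):

    #include <cstdlib>
    #include <iostream>
    #include <string>

    // Reject negative numeric arguments before std::stoull/std::stoul
    // silently wrap them around to huge unsigned values.
    auto parse_unsigned_arg(const std::string& arg, const char* name)
        -> unsigned long long {
        if(!arg.empty() && arg[0] == '-') {
            std::cerr << name << " cannot be negative: " << arg << std::endl;
            std::exit(EXIT_FAILURE);
        }
        return std::stoull(arg);
    }

    With such a guard, mint -1 5 would exit with an error instead of minting 2^64 - 1 outputs.
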
    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    fix/bug 
    opened by mszulcz-mitre 0
  • Refactor mock_system class for clarity

    The mock_system class has a few parts that could cause confusion. For example, the words 'module' and 'component' are used interchangeably, it's not clear when modules will be mocked, and it's not clear what the return value of start_servers means (see Issue #183). This commit refactors the code to clarify these issues.

    Signed-off-by: Michael L. Szulczewski [email protected]

    opened by mszulcz-mitre 0
  • Potential improvements to `mock_system`.

    Affected Branch

    trunk

    Basic Diagnostics

    • [X] I've pulled the latest changes on the affected branch and the issue is still present.

    • [X] The issue is reproducible in docker

    Description

    The mock_system class “Establishes dummy listeners for each enabled system component” such as a watchtower, atomizer, or sentinel. It’s used in many of the integration tests. I think it’s an extremely useful class, but when using it for the first few times, I got confused about some things. I think the code could be modified a little to make the class clearer.

    Components or modules?

    Sometimes the things that are mocked are called components; sometimes they’re called modules. This might be confusing to a new user. For example, when looking at the signature of the expect method, the variables for_module and component_id appear, and it’s not obvious that component_id is an ID of the specified for_module: https://github.com/mit-dci/opencbdc-tx/blob/4d82040c58142aec263a6e293b2664ec140141f6/tests/integration/mock_system.hpp#L79-L82

    I can't see any value in using both terms, so it seems reasonable to just use the word “module” since it’s already used more frequently in variable names in the class.

    When’s a module disabled?

    It’s not always obvious what modules will be mocked. By default, the class mocks all modules in the m_modules set and requires the user to specify what modules not to mock by passing them to the constructor in the disabled_modules set: https://github.com/mit-dci/opencbdc-tx/blob/4d82040c58142aec263a6e293b2664ec140141f6/tests/integration/mock_system.hpp#L50-L52

    However, even if a module is enabled, it still won’t get mocked if its endpoints aren’t specified in the options struct. This is understandable, but makes code difficult to debug because it happens without warning. For example, according to the disabled modules set, the coordinator module should be mocked in the following tests, but isn’t because endpoints aren’t given: watchtower_integration_test, sentinel_integration_test, replicated_atomizer_integration_tests, replicated_shard_integration_tests, watchtower_integration_test, atomizer_raft_integration_test. In the sentinel_2pc_integration_test, the sentinel module itself is the only module in the disabled_modules set, but 4 of the remaining 5 possible modules do not get mocked because their endpoints aren’t given.

    What does the return value of start_servers mean?

    The method start_servers attempts to start a listening server for a given module and returns a boolean: https://github.com/mit-dci/opencbdc-tx/blob/4d82040c58142aec263a6e293b2664ec140141f6/tests/integration/mock_system.hpp#L154-L156 As written, it returns true for both of these conditions:

    • A server is started for the given module.
    • A server is not started for the given module because it’s in the disabled_modules set or because no endpoints for the module were given.

    Having the method return true even if servers aren’t started is a little confusing.
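
    One illustrative way to disambiguate the result, purely as a sketch of the critique and not a proposed patch, is an explicit status instead of a boolean:

    // Hypothetical return type distinguishing the three outcomes that
    // start_servers currently collapses into a single `true`.
    enum class start_result {
        started,      // a listening server was launched for the module
        disabled,     // the module was in the disabled_modules set
        no_endpoints, // module enabled, but no endpoints in the options
    };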

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    fix/bug 
    opened by mszulcz-mitre 1
Owner
The MIT Digital Currency Initiative @ Media Lab