FoundationDB - the open source, distributed, transactional key-value store

Overview

FoundationDB is a distributed database designed to handle large volumes of structured data across clusters of commodity servers. It organizes data as an ordered key-value store and employs ACID transactions for all operations. It is especially well-suited for read/write workloads but also has excellent performance for write-intensive workloads. Users interact with the database through language-specific API bindings.
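
For example, with the Python binding, a read-write sequence runs as a single ACID transaction (a minimal sketch; the API version and key layout here are illustrative):

    import fdb

    fdb.api_version(710)  # pick the API version your installed client supports
    db = fdb.open()       # connects using the default cluster file

    @fdb.transactional
    def store_user(tr, user_id, name):
        # The tuple layer builds ordered, typed keys on the byte-ordered store.
        tr[fdb.tuple.pack(('user', user_id))] = name

    @fdb.transactional
    def load_user(tr, user_id):
        value = tr[fdb.tuple.pack(('user', user_id))]
        return value if value.present() else None

    store_user(db, 42, b'alice')
    print(load_user(db, 42))  # b'alice'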

To learn more about FoundationDB, visit foundationdb.org

Documentation

Documentation can be found online at https://apple.github.io/foundationdb/. The documentation covers details of API usage, background information on design philosophy, and extensive usage examples. Docs are built from the source in this repo.

Forums

The FoundationDB Forums are the home for most of the discussion and communication about the FoundationDB project. We welcome your participation! We want FoundationDB to be a great project to be a part of and, as part of that, have adopted a Code of Conduct that establishes what constitutes permissible modes of interaction.

Contributing

Contributing to FoundationDB can take the form of contributions to the code base, sharing your experience and insights in the community on the Forums, or contributing to projects that make use of FoundationDB. Please see the contributing guide for more specifics.

Getting Started

Binary downloads

Developers interested in using FoundationDB can get started by downloading and installing a binary package. Please see the downloads page for a list of available packages.

Compiling from source

Developers on an OS for which there is no binary package, or who would like to start hacking on the code, can get started by compiling from source.

The official docker image for building is foundationdb/build, which has all dependencies installed. The Docker image definitions used by FoundationDB team members can be found in the dedicated repository.

To build outside the official docker image you'll need at least these dependencies:

  1. Install CMake (version 3.13 or higher)
  2. Install Mono
  3. Install Ninja (optional, but recommended)

If compiling for local development, please set -DUSE_WERROR=ON in cmake. Our CI compiles with -Werror, so this way you'll find out earlier about compiler warnings that would break the build.

Once you have your dependencies, you can run cmake and then build:

  1. Check out this repository.
  2. Create a build directory (you can have the build directory anywhere you like). There is currently a directory in the source tree called build, but you should not use it. See #3098
  3. cd <PATH_TO_BUILD_DIRECTORY>
  4. cmake -G Ninja <PATH_TO_FOUNDATIONDB_SOURCE>
  5. ninja # If this crashes it probably ran out of memory. Try ninja -j1

Language Bindings

Each language binding supported by cmake has a README.md file in the corresponding bindings/lang directory.

Generally, cmake will build all language bindings for which it can find all necessary dependencies. After each successful cmake run, cmake will tell you which language bindings it is going to build.

Generating compile_commands.json

CMake can build a compilation database for you. However, the default generated one is not too useful as it operates on the generated files. When running make, the build system will create another compile_commands.json file in the source directory. This can then be used with tools like CCLS, CQuery, etc., giving you code completion and code navigation in flow files. It is not yet perfect (it will show a few errors), but we are constantly working on improving the development experience.

CMake will not produce a compile_commands.json unless you pass -DCMAKE_EXPORT_COMPILE_COMMANDS=ON. This also enables the target processed_compile_commands, which rewrites compile_commands.json to describe the actor compiler source files rather than the post-processed output files, and places the resulting file in the source directory. It should then be picked up automatically by any tooling.

Note that if building inside the foundationdb/build docker image, the resulting paths will still be incorrect and require manual fixing. You will want to re-run cmake with -DCMAKE_EXPORT_COMPILE_COMMANDS=OFF to prevent it from reverting the manual changes.

Using IDEs

CMake has built-in support for a number of popular IDEs. However, because flow files are precompiled by the actor compiler, an IDE is not very useful out of the box: it would only present the generated code, which is not what you want to edit and get IDE features for.

The good news is that it is possible to generate project files for editing flow with a supported IDE. The CMake option OPEN_FOR_IDE generates a project that can be opened in an IDE for editing. You won't be able to build this project, but you will be able to edit the files and get most of the edit and navigation features your IDE supports.

For example, if you want to use Xcode to make changes to FoundationDB, you can create an Xcode project with the following command:

cmake -G Xcode -DOPEN_FOR_IDE=ON <FDB_SOURCE_DIRECTORY>

You should create a second build directory which you will use for building and debugging.

FreeBSD

  1. Check out this repo on your server.

  2. Install compile-time dependencies from ports.

  3. (Optional) Use tmpfs & ccache for significantly faster repeat builds

  4. (Optional) Install a JDK for Java Bindings. FoundationDB currently builds with Java 8.

  5. Navigate to the directory where you checked out the foundationdb repo.

  6. Build from source.

    sudo pkg install -r FreeBSD \
        shells/bash devel/cmake devel/ninja devel/ccache  \
        lang/mono lang/python3 \
        devel/boost-libs devel/libeio \
        security/openssl
    mkdir .build && cd .build
    cmake -G Ninja \
        -DUSE_CCACHE=on \
        -DDISABLE_TLS=off \
        -DUSE_DTRACE=off \
        ..
    ninja -j 10
    # run fast tests
    ctest -L fast
    # run all tests
    ctest --output-on-failure -v

Linux

There are no special requirements for Linux. A docker image can be pulled from foundationdb/build that has all of FoundationDB's dependencies pre-installed, and is what the CI uses to build and test PRs.

cmake -G Ninja <PATH_TO_FOUNDATIONDB_SOURCE>
ninja
cpack -G DEB

For RPM simply replace DEB with RPM.

macOS

The build on macOS works the same way as on Linux. To get Boost and Ninja you can use Homebrew.

cmake -G Ninja <PATH_TO_FOUNDATIONDB_SOURCE>

To generate an installable package, run:

ninja
$SRCDIR/packaging/osx/buildpkg.sh . $SRCDIR

Windows

Under Windows, the build instructions are very similar, with the main difference that Visual Studio is used to compile.

  1. Install Visual Studio 2017 (Community Edition is tested)
  2. Install CMake (version 3.12 or higher)
  3. Download version 1.72 of Boost
  4. Unpack Boost (you don't need to compile it)
  5. Install Mono
  6. (Optional) Install a JDK. FoundationDB currently builds with Java 8
  7. Set JAVA_HOME to the unpacked location and JAVA_COMPILE to $JAVA_HOME/bin/javac.
  8. Install Python if it is not already installed by Visual Studio
  9. (Optional) Install WIX. Without it Visual Studio won't build the Windows installer
  10. Create a build directory (you can have the build directory anywhere you like): mkdir build
  11. cd build
  12. cmake -G "Visual Studio 15 2017 Win64" -DBOOST_ROOT=<PATH_TO_BOOST> <PATH_TO_FOUNDATIONDB_SOURCE>
  13. This should succeed, in which case you can build using msbuild: msbuild /p:Configuration=Release foundationdb.sln. You can also open the resulting solution in Visual Studio and compile from there. However, be aware that using Visual Studio for development is currently not supported, as Visual Studio will only know about the generated files. For Visual Studio 15, msbuild is located at c:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe.

If you installed WIX before running cmake you should find the FDBInstaller.msi in your build directory under packaging/msi.

TODO: Re-add instructions for TLS support #3022

Comments
  • Rebase to main

    opened by yao-xiao-github 159
  • Use DDSketch for sample data

    Updated version of https://github.com/apple/foundationdb/pull/4088.
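
    For context, DDSketch (Masson et al., VLDB 2019) is a quantile sketch with a bounded relative-error guarantee, which makes it a good fit for latency samples. A minimal, illustrative Python sketch of the core idea only, not the implementation in this PR:

        import math

        class DDSketch:
            # Answers quantile queries with relative error <= alpha.
            def __init__(self, alpha=0.01):
                self.gamma = (1 + alpha) / (1 - alpha)
                self.log_gamma = math.log(self.gamma)
                self.buckets = {}  # bucket index -> count
                self.count = 0

            def add(self, value):
                # Positive values within a factor of gamma share a bucket, so
                # bucket width (and thus error) scales with the value itself.
                i = math.ceil(math.log(value) / self.log_gamma)
                self.buckets[i] = self.buckets.get(i, 0) + 1
                self.count += 1

            def quantile(self, q):
                # Walk buckets in order until the cumulative count reaches the
                # requested rank, then return that bucket's representative value.
                rank = q * (self.count - 1)
                seen = 0
                for i in sorted(self.buckets):
                    seen += self.buckets[i]
                    if seen > rank:
                        return 2 * self.gamma ** i / (self.gamma + 1)

        sketch = DDSketch(alpha=0.01)
        for latency_ms in (1.2, 3.4, 0.9, 150.0, 2.2):
            sketch.add(latency_ms)
        print(sketch.quantile(0.99))  # ~150, within 1% relative error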

    opened by sfc-gh-sgwydir 152
  • Shard based move

    1. Assign a UID for each shard; the new format is protected by SHARD_ENCODE_LOCATION_METADATA

      1. Introduced two new UID fields in keyServers/, which represent the source and dest shard IDs respectively.
      2. The shardId is also recorded in the serverKeys/ keyspace: serverKeys/[serverId]/[key]: [shardId]
    2. Introduced a new shard-based data-move pipeline. This is necessary preparation for sharded-rocks and physical shard moves: each data move has a target physical shard, and each data move's resources are all allocated under the corresponding data move ID.

      1. Each data move is assigned a unique ID, and a DataMoveMetaData is persisted for the data move
      2. Data moves are scheduled and executed based on DataMoveMetaData

    opened by liquid-helium 151
  • Add physical shard DD core

    This PR introduces the PhysicalShard concept to Data Distribution. The feature is protected by ENABLE_DD_PHYSICAL_SHARD, which relies on SHARD_ENCODE_LOCATION_METADATA; please make sure SHARD_ENCODE_LOCATION_METADATA is set when setting ENABLE_DD_PHYSICAL_SHARD.

    The core data structure is PhysicalShardCollection, which is responsible for the creation and maintenance of physical shards in data distribution (A physical shard contains one or multiple key ranges, aka shards). PhysicalShardCollection has three functional parts:

    1. Creating physical shards. Once a physical shard is created for a team, the physical shard will not change its team. In a one-DC setting, a physical shard always belongs to a particular primary team. In a two-DC setting, a physical shard always belongs to a particular primary team and a particular remote team.
    2. Updating physical shard metrics. A physical shard's metrics are updated by the trackers of the shards (key ranges) that belong to the physical shard.
    3. Transition. If a DD without the PhysicalShard concept restarts with the PhysicalShard concept, all keyRanges start in the anonymousShard. We gradually move the keyRanges out of the anonymousShard until no anonymousShard remains in the system.

    PhysicalShardCollection is initialized when loading iShard in resumeFromShards. PhysicalShardCollection is updated when selecting dest teams in dataDistributionRelocator. When a dest team is decided, PhysicalShardCollection picks a physical shard from the team. If the team has no physical shard, PhysicalShardCollection creates a physical shard for the team.

    If the cluster has multiple DCs, PhysicalShardCollection uses two steps to select dest teams and dest physical shard.

    • Step 1: Select a primary team by getTeam().
    • Step 2: Randomly pick a physical shard of the primary team.

    Once the physical shard is selected, a remote team is automatically decided. Note that a remote team selected in this way may be unhealthy, heavy, or overloaded. If this is the case, PhysicalShardCollection "re"-selects the remote team by getTeam.

    Note that (1) the current design of PhysicalShardCollection assumes that there exist at most two teamCollections (one primary team and one remote team); (2) When ENABLE_DD_PHYSICAL_SHARD is set, the optimization of saving traffic for data move between DCs is disabled.

    This PR fixes a ddstuck issue triggered by restoring data moves. For a restored data move, the destination team does not change; as a result, the restored data move may repeatedly move data to a busy destination team. To solve this issue, this PR simply cancels the data move, as in the case when ddstuckCount > 50. Currently, this fix is protected by the feature flag. Further discussion on safety is required.

    Correctness test: ENABLE_DD_PHYSICAL_SHARD off and SHARD_ENCODE_LOCATION_METADATA off: 20220819-181102-zhewang-72cd218a8272e01f compressed=True data_size=36942829 duration=4025970 ended=100000 fail_fast=10 max_runs=100000 pass=100000 priority=100 remaining=0 runtime=0:29:46 sanity=False started=100495 stopped=20220819-184048 submitted=20220819-181102 timeout=5400 username=zhewang

    ENABLE_DD_PHYSICAL_SHARD off and SHARD_ENCODE_LOCATION_METADATA on (according to addr2line, the four failures are the same crash, which seems related to get range operation of sharded rocksdb): 20220819-161333-zhewang-d4d373cfc7c413df compressed=True data_size=36942836 duration=5234785 ended=99544 fail=4 fail_fast=10 max_runs=100000 pass=99540 priority=100 remaining=0:00:09 runtime=0:33:50 sanity=False started=100447 submitted=20220819-161333 timeout=5400 username=zhewang

    fff6 0x7f9985aab190
    ?? ??:0
    (anonymous namespace)::ShardedRocksDBKeyValueStore::Reader::action((anonymous namespace)::ShardedRocksDBKeyValueStore::Reader::ReadRangeAction&) at /root/src/foundationdb/fdbserver/KeyValueStoreShardedRocksDB.actor.cpp:2110
    ~ReadRangeAction at /root/src/foundationdb/fdbserver/KeyValueStoreShardedRocksDB.actor.cpp:2049
     (inlined by) operator() at /root/src/foundationdb/flow/include/flow/IThreadPool.h:77
    yield(TaskPriority) at /root/src/foundationdb/flow/include/flow/flow.h:1362
     (inlined by) WorkPool<Coroutine, ThreadUnsafeSpinLock, true>::Worker::run() at /root/src/foundationdb/fdbserver/coroimpl/CoroFlowCoro.actor.cpp:148
    Coroutine::wrapRun() at /root/src/foundationdb/fdbserver/coroimpl/CoroFlowCoro.actor.cpp:85
     (inlined by) Coroutine::entry(void*) at /root/src/foundationdb/fdbserver/coroimpl/CoroFlowCoro.actor.cpp:89
    Coro_StartWithArg at /root/src/foundationdb/fdbrpc/libcoroutine/Coro.c:250
    ?? ??:0
    

    ENABLE_DD_PHYSICAL_SHARD on and SHARD_ENCODE_LOCATION_METADATA on (the failure is likely related to get range operation of sharded rocksdb, as shown previously): 20220819-185046-zhewang-805d47f4e4a78ccf compressed=True data_size=36943136 duration=4612888 ended=100000 fail=1 fail_fast=10 max_runs=100000 pass=99999 priority=100 remaining=0 runtime=0:25:34 sanity=False started=100208 stopped=20220819-191620 submitted=20220819-185046 timeout=5400 username=zhewang

    opened by kakaiu 147
  • Authorization

    This PR adds authorization to FDB. When a client connects with TLS (as opposed to mTLS), FDB will now accept the connection, but it will expect a valid JWT token for each request. This token can be set through a transaction option.
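
    For illustration, a minimal sketch of how a client might attach the token from the Python binding, assuming the binding exposes the option as set_authorization_token (the exact option name and required API version depend on the release):

        import fdb

        fdb.api_version(720)  # hypothetical; use the version matching your client
        db = fdb.open()

        def read_with_token(db, key, jwt_token):
            tr = db.create_transaction()
            # Attach the JWT so the cluster authorizes this request over a
            # TLS (non-mTLS) connection.
            tr.options.set_authorization_token(jwt_token)
            return tr[key].wait()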

    opened by sfc-gh-mpilman 138
  • Correctness fixes for snowflake/release-71.2

    Before approving this PR, we should make sure the PRs below (which are all closed in lieu of this PR) have been approved (same strategy as #6958).

    • [x] Checkbox means merged to main branch
    • [x] #7903
    • [x] #7914
    • [x] #7925
    • [x] #7905
    • [x] #7912
    • [x] #7924
    • [x] #7940
    • [x] #7946
    • [x] #7954
    • [x] #7959
    • [x] #7960
    • [x] #7963
    • [x] #7966
    • [x] #7970
    • [x] #7971
    • [x] #7972
    • [x] #7975
    • [x] #7983
    • [x] #7990
    • [x] #7991
    • [x] #7996
    • [x] #7997
    • [x] #8010
    • [x] #8011
    • [x] #8005
    • [x] #8026
    • [x] #8027
    • [x] #8029
    • [x] #8030
    • [x] #8031
    • [x] #8032
    • [x] #8042
    • [x] #8036
    • [x] #8060
    • [x] #8064
    • [ ] #8065
    • [x] #8056
    • [x] #8077
    • [x] #8080
    • [ ] #8090
    • [x] #8085
    • [x] #8083
    • [x] #8091
    • [x] #8092
    • [x] #8119
    • [x] #8129
    • [x] #8132
    opened by sfc-gh-jslocum 130
  • Fix/make all python code pythonic

    I am going to make the Python files more Pythonic and clean up the code. Folders TODO:

    • [x] bindings - 32 files
    • [x] contrib - 18 files
    • [x] documentation - 4 files
    • [x] fdbrpc - 1 file
    • [x] layers - 18 files
    • [x] packaging - 3 files
    • [x] recipes - 8 files
    • [x] tests - 13 files

    opened by LukasMoll 130
  • Validate data consistency

    Added TriggerAuditRequest CC/DD APIs.

    It can start a storage audit process; currently, it picks two storage servers from each side of an HA configuration for a keyrange and compares the key/values.

    A TriggerAuditRequest is processed first by DD, which breaks the audit down into sub-tasks and selects a storage server to execute each sub-task. E.g., DD can select storage servers based on their keyrange assignment; each storage server will read the range locally, fetch the same range from a remote server in a different cluster, and compare the data.

    opened by liquid-helium 122
  • Enable machine attrition injection

    Fix all attrition injection bugs. This passes 100k correctness runs:

    20221114-183209-mpilman-027b9937a4dba4c7

    opened by sfc-gh-mpilman 110
  • Make the storage metrics functions tenant aware

    Make the storage metrics functions tenant aware.

    Also add a workload to test the getEstimatedRangeSizeBytes functionality.
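
    For reference, a minimal sketch of querying this estimate from the Python binding, which exposes get_estimated_range_size_bytes on transactions (the tenant handling in this PR is server-side, so the call shape is unchanged):

        import fdb

        fdb.api_version(710)
        db = fdb.open()

        @fdb.transactional
        def estimated_size(tr, begin=b'', end=b'\xff'):
            # Returns an estimate, in bytes, of the data in [begin, end).
            return tr.get_estimated_range_size_bytes(begin, end).wait()

        print(estimated_size(db))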

    opened by sfc-gh-akejriwal 109
  • Support for 3.x clusters to live migrate

    I remember @ajbeamon mentioning that there's something that needs to be changed in the 3.x code (IIRC?) that will allow us to ship a foundationdb-client package that can talk to both a 3.x cluster and a 5.x cluster. I can't seem to find that in the code; can someone point us in the right direction?

    opened by panghy 106
  • Fix compilation for compiling with clang15

    This change backports the contents of cmake/Jemalloc.cmake and cmake/awssdk.cmake from the main branch, and fixes some compilation errors in fdb_c.h. After this change we should now be able to compile 7.1 again.

    opened by sfc-gh-anoyes 6
  • Bump setuptools from 65.3.0 to 65.5.1 in /documentation/sphinx

    Bumps setuptools from 65.3.0 to 65.5.1.

    Changelog

    Sourced from setuptools's changelog.

    v65.5.1

    Misc

    • #3638: Drop a test dependency on the mock package, always use :external+python:py:mod:unittest.mock -- by :user:hroncok
    • #3659: Fixed REDoS vector in package_index.

    v65.5.0

    Changes

    • #3624: Fixed editable install for multi-module/no-package src-layout projects.
    • #3626: Minor refactorings to support distutils using stdlib logging module.

    Documentation changes

    • #3419: Updated the example version numbers to be compliant with PEP-440 on the "Specifying Your Project's Version" page of the user guide.

    Misc

    • #3569: Improved information about conflicting entries in the current working directory and editable install (in documentation and as an informational warning).
    • #3576: Updated version of validate_pyproject.

    v65.4.1

    v65.4.0


    dependencies python 
    opened by dependabot[bot] 7
  • Add fdbcli commands for checking the status of and clearing old idempotency ids

    Add an fdbcli command, idempotencyids, for checking the status of and clearing stored idempotency IDs older than a given age.

    The command is currently a hidden command since the automatic idempotency feature is currently not recommended for production usage. (It is not visible in the help text, but can still be used.)

    The PR also includes reformatting for the fdbcli_tests.py file (as per pre-commit hooks) in a separate git commit.

    opened by sfc-gh-akejriwal 7
  • Improved SHARD_ENCODE_LOCATION_METADATA migration.

    Minor improvement for migration to SHARD_ENCODE_LOCATION_METADATA.

    opened by liquid-helium 12
  • Fix incorrect JSON access of blob metadata partitions in REST KMS test

    A unit test was trying to access a JSON field at the wrong location, and this triggered assertions in debug builds.

    @sfc-gh-ahusain I'm not sure why this doesn't fail the test in non-debug builds, but you may want to investigate that.

    opened by sfc-gh-ajbeamon 7
  • Add Golang binding for FreeBSD

    The build of the Go language bindings for FoundationDB fails on FreeBSD. I added a new binding for FreeBSD, using the Darwin (macOS) binding as an example.

    opened by iClaus21 7