BaikalDB, A Distributed HTAP Database.



BaikalDB supports sequential and random real-time reads/writes of structured data at petabyte scale. BaikalDB is compatible with the MySQL protocol and supports a MySQL-style SQL dialect, so users can migrate their data storage from MySQL to BaikalDB seamlessly.

BaikalDB internally provides projection, filter (corresponding to the SQL WHERE and HAVING clauses), aggregation (corresponding to the GROUP BY clause), and sort (corresponding to the SQL ORDER BY clause) operators, with which users can fulfill complex and time-critical analytical and transactional requirements by writing SQL statements. In a typical scenario, hundreds of millions of rows can be scanned and aggregated in a few seconds.
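For illustration, a single query can exercise all of these operators at once; the following sketch assumes a hypothetical `orders` table (the schema is not from BaikalDB's documentation):

```sql
-- Hypothetical schema: orders(seller_id, amount, create_time).
SELECT seller_id,
       COUNT(*)    AS order_cnt,      -- aggregation
       SUM(amount) AS total_amount
FROM   orders
WHERE  create_time >= '2022-01-01'    -- filter (WHERE)
GROUP  BY seller_id                   -- aggregation (GROUP BY)
HAVING SUM(amount) > 10000            -- filter (HAVING)
ORDER  BY total_amount DESC;          -- sort (ORDER BY)
```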

BaikalDB also supports full-text search by building inverted indices after word segmentation. Users can harness the fuzzy-search feature simply by adding a FULLTEXT KEY type index when creating a table and then using a LIKE clause in their queries.
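As a sketch of that workflow (table and column names are hypothetical; check the wiki for the exact DDL BaikalDB accepts):

```sql
-- A FULLTEXT KEY index builds an inverted index over the column.
CREATE TABLE articles (
    id    BIGINT NOT NULL AUTO_INCREMENT,
    title VARCHAR(256)  NOT NULL DEFAULT '',
    body  VARCHAR(4096) NOT NULL DEFAULT '',
    PRIMARY KEY (id),
    FULLTEXT KEY ft_body (body)
);

-- The LIKE predicate is then served by the inverted index:
SELECT id, title FROM articles WHERE body LIKE '%distributed database%';
```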

See the GitHub wiki for more details.


baidu/BaikalDB is licensed under the Apache License 2.0


  • We are especially grateful to the teams of RocksDB, brpc and braft, who built powerful and stable libraries to support important features of BaikalDB.
  • We give special thanks to the TiDB team and the Impala team. We referred to their design schemes when designing and developing BaikalDB.
  • Thanks to our friends on the Baidu TafDB team, who provided the space-efficient snapshot scheme based on braft.
  • Last but not least, we give special thanks to the authors of all libraries that BaikalDB depends on, without whom BaikalDB could not have been developed and built so easily.
  • CMake build failure

    Build environment: CentOS 7.7.1908, gcc 8.3.1

    In file included from /home/work/workspace/BaikalDB/include/common/log.h:22,
                     from /home/work/workspace/BaikalDB/include/common/common.h:48,
                     from /home/work/workspace/BaikalDB/include/engine/rocks_wrapper.h:25,
                     from /home/work/workspace/BaikalDB/include/raft/log_entry_reader.h:17,
                     from /home/work/workspace/BaikalDB/src/raft/log_entry_reader.cpp:15:
    /usr/local/include/braft/storage.h: In function 'int braft::gc_dir(const string&)':
    /usr/local/include/braft/storage.h:116:9: error: 'COMPACT_GOOGLE_LOG_NOTICE' was not declared in this scope
             LOG(NOTICE) << "Target path not exist, so no need to gc, path: "
    /usr/local/include/braft/storage.h:116:9: note: suggested alternative: 'COMPACT_GOOGLE_LOG_FATAL'
    In file included from /home/work/workspace/BaikalDB/src/raft/log_entry_reader.cpp:16:
    /home/work/workspace/BaikalDB/include/raft/my_raft_log_storage.h: At global scope:
    /home/work/workspace/BaikalDB/include/raft/my_raft_log_storage.h:99:9: error: 'int baikaldb::MyRaftLogStorage::append_entries(const std::vector<braft::LogEntry*>&)' marked 'override', but does not override
         int append_entries(const std::vector<braft::LogEntry*>& entries
    make[2]: *** [CMakeFiles/baikaldb.dir/build.make:245: CMakeFiles/baikaldb.dir/src/raft/log_entry_reader.cpp.o] Error 1
    make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/baikaldb.dir/all] Error 2

    It looks like glog and braft were not found. I have already installed both libraries under /usr/local/include and /usr/local/lib. How should I handle this?

    Also, on line 15 of CMakeLists.txt, shouldn't find_library(RAPIDJSON_LIB NAMES glog) be changed to find_library(RAPIDJSON_LIB NAMES rapidjson)?

    opened by GOGOYAO 16
  • {"errcode":"NOT_LEADER","errmsg":"not leader","leader":""}

    Deploying baikal-all-v2.0.1-centos-7.tgz by following the steps in Ansible-for-BaikalDB fails with {"errcode":"NOT_LEADER","errmsg":"not leader","leader":""}. What is the cause, and how can it be resolved?

    opened by SunBeau 9
  • How many regions do you run in production?



    With a relatively large number of regions (5 nodes, 5k-10k regions) and 3 replicas, each node holds 3k-6k regions on average, of which 1k-2k are leaders.

    1. If the network jitters and do_leader_stop fires on a large scale, a pre_vote avalanche can easily follow.
    2. Heartbeats are also plentiful: with a 1s election timeout, each leader region sends 2 heartbeats per second, so node-wide heartbeat QPS exceeds 10k, which is a non-trivial overhead.


    opened by Yriuns 9
  • Build error in Ubuntu 18

    hi Admin, I compiled BaikalDB from source:

    git clone --recurse
    cd BaikalDB
    mkdir -p _build && cd _build
    cmake -DWITH_BAIKAL_CLIENT=ON -DWITH_SYSTEM_LIBS=OFF -DCMAKE_BUILD_TYPE:STRING=Release ../
    make all -j 4

    I see these errors:

    /mnt/data/temp/baidu/BaikalDB/_build/third-party/rocksdb/src/extern_rocksdb/./util/compression.h:115: undefined reference to `ZSTD_freeDCtx'
    /mnt/data/temp/baidu/BaikalDB/_build/third-party/rocksdb/src/extern_rocksdb/./util/compression.h:367: undefined reference to `ZSTD_freeCCtx'
    /mnt/data/temp/baidu/BaikalDB/_build/third-party/rocksdb/src/extern_rocksdb/./util/compression.h:191: undefined reference to `ZSTD_freeCDict'
    collect2: error: ld returned 1 exit status
    CMakeFiles/baikalMeta.dir/build.make:419: recipe for target 'baikalMeta' failed
    make[2]: *** [baikalMeta] Error 1
    CMakeFiles/Makefile2:131: recipe for target 'CMakeFiles/baikalMeta.dir/all' failed
    make[1]: *** [CMakeFiles/baikalMeta.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    opened by linuxpham 8
  • MySQL Compatibility ?

    Hi admin,

    I tried to test:


    mysql --host= --port=5000 -u root -p
    CREATE DATABASE menagerie
    show databases; => empty


    How can I use BaikalDB from my source code? Must I use the "baikal-client" library? Is there any documentation to guide usage? I cannot understand the namespace, database, and table concepts.

    opened by linuxpham 7
  • A question about BaikalDB snapshot installation


    In BaikalDB, each Region does not use braft's built-in periodic snapshot feature; instead, a dedicated thread allows only a fixed number of Regions to take snapshots at any one time. In other words, snapshots are not real-time, although I recall a document saying they were.

    A Region also does not snapshot at will; a snapshot is only taken once the elapsed time and the log-count gap reach certain thresholds.

    Then why, during install snapshot, when a node fetches the snapshot from the leader, is the RocksDB get_snapshot interface used to take the current snapshot directly? Each SnapshotContext is only constructed at install-snapshot time.

    Is it because _applied_index is saved in on_apply, so duplicated data can be filtered out via if (iter.index() <= _applied_index) { continue; }?

    That does filter the duplicates, but then the snapshot the node installs is not the snapshot the leader saved in on_snapshot_save.

    opened by ghost 7
  • Is this build error related to protobuf?

    ERROR: /home/happen/mycode/BaikalDB/BUILD:506:1: Linking of rule '//:baikalMeta' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -o bazel-out/k8-fastbuild/bin/baikalMeta -pthread '-fuse-ld=gold' -Wl,-no-as-needed -Wl,-z,relro,-z,now -B/usr/bin -pass-exit-codes -Wl,-S ... (remaining 1 argument(s) skipped)

    Use --sandbox_debug to see verbose messages from the sandbox bazel-out/k8-fastbuild/bin/external/com_github_brpc_brpc/_objs/brpc/gzip_compress.pic.o:gzip_compress.cpp:function brpc::policy::GzipCompress(google::protobuf::Message const&, butil::IOBuf*): error: undefined reference to 'google::protobuf::io::GzipOutputStream::Options::Options()'

    opened by HappenLee 7
  • A question about transaction correctness: what happens if BaikalDB crashes before the primary region has received any request?

    Suppose a DML statement involves three regions, 1, 2, and 3, and region 1 is chosen as the primary region. Regions 2 and 3 receive and execute the DML, but BaikalDB crashes before region 1 receives it. According to the documentation:

    If a secondary region has not received a commit/rollback within a time threshold, it queries the primary region to determine whether the transaction committed or rolled back. It first checks whether the transaction still exists; if so, it does nothing. If not, it reads RocksDB to check for a rollback record of the transaction: if one exists, it rolls back; otherwise it commits.

    In that case regions 2 and 3 would commit, producing a partial commit?

    opened by Yriuns 6
  • Hinting the braft library so raft logs are not replayed on restart

    Problem description: it is mentioned here that baikaldb implements a scheme to speed up restarts, and here that entries with index < applied_index are not applied to the state machine, the so-called null operation. However, under heavy write load there are still many un-snapshotted logs to replay after a restart, and the resulting heavy IO noticeably affects other co-located processes. From a quick read, baikaldb seems to persist the data together with applied_index (or is that something we should do?), which would realize the idea @ipconfigme mentioned of hinting the raft library to skip logs and avoid the heavy IO. One possible approach: override SnapshotReader::load_meta (or provide a dedicated FileAdaptor for it and override read), and for the no-replay-on-restart scenario, use MetaWriter::read_applied_index to obtain the applied_index actually written in the base data and compare/substitute it, guiding raft past already-applied logs?

    opened by lyng-x 6
  • CMake build issue

    I have installed bison and flex, but CMake still reports the following error: CMake Error at CMakeLists.txt:278 (message): failed to generate expression, please install flex and bison first

    -- Configuring incomplete, errors occurred!

    opened by Cr1t-GYM 3
  • In UPDATE, WHERE columnName=null matches rows instead of evaluating to NULL

    When using BaikalDB, business code sometimes mistakenly writes =null instead of IS NULL. MySQL's behavior: comparing NULL with any other value (even another NULL) always returns NULL, i.e. NULL = NULL returns NULL. BaikalDB's behavior: a SELECT returns no rows, but an UPDATE matches all rows. Reproduction:

    mysql> CREATE TABLE `test`(
           `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
           `name` varchar(100) NOT NULL DEFAULT '',
           `age` bigint(20) NOT NULL DEFAULT '0',
           PRIMARY KEY (`id`),
           KEY `ix_name_age` (`name`,`age`)
           );
    mysql> insert into test values(1,"aaa",2),(2,"bbb",2);
    Query OK, 2 rows affected (0.01 sec)
    mysql> select * from test;
    | id | name | age |
    |  1 | aaa  |   2 |
    |  2 | bbb  |   2 |
    2 rows in set (0.01 sec)
    mysql> select * from test where name=null;
    Empty set (0.00 sec)
    mysql> update test set age=0 where name=null;
    Query OK, 2 rows affected (0.00 sec)
    mysql> select * from test;
    | id | name | age |
    |  1 | aaa  |   0 |
    |  2 | bbb  |   0 |
    2 rows in set (0.00 sec)
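    For reference, MySQL's intended spellings for NULL tests are IS NULL and the null-safe operator <=> (standard MySQL semantics; whether BaikalDB supports <=> is not stated here):

```sql
-- Matches rows whose name is NULL (none here, since name is NOT NULL):
UPDATE test SET age = 0 WHERE name IS NULL;

-- Plain equality with NULL yields NULL; the null-safe operator yields 1:
SELECT NULL = NULL, NULL <=> NULL;   -- NULL, 1 in MySQL
```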
    opened by hualiyang 2
  • v2.1.0(May 5, 2022)


    Rolling upgrade order: BaikalMeta => BaikalStore => BaikalDB (this upgrade must strictly follow this order, otherwise compatibility problems will occur). Rollback proceeds in the reverse order: BaikalDB => BaikalStore => BaikalMeta. Unless otherwise stated, subsequent upgrades should follow the same order.

    New Features:

    • Upgraded RocksDB to 6.26.0 (its bundled protobuf, v3.11.2, is fairly stable; contributors are welcome to upgrade it for better performance)
    • comb multi in predicate by @wy1433 #160
    • Unified the local index DDL and global index DDL flows
    • Added handle/show SQL commands to replace scripts
    • Improved the bypass probing mechanism
    • datetime values in WHERE conditions now compare with numbers such as 20201212, compatible with MySQL
    • Support for a primary machine-room concept: a primary room can be configured so the leader is elected within it
    • Virtual indexes collect the impact surface of arbitrary SQL through meta
    • Support substring_index
    • Support load balancing by network segment
    • Region split proactively adds a peer, preventing single-replica operation
    • Partitioned tables are compatible with MySQL syntax
    • Support online TTL
    • DELETE and UPDATE support subqueries
    • Support REGEXP
    • drop index force goes straight to the deletion flow
    • New functions: timestamp, lpad, rpad, time_format, convert_tz, isnull, database, cast, convert, default @wy1433
    • Compatible with the JDBC 8.0 driver @wy1433
    • Support table comments, including modifying them via ALTER TABLE, and the content inside COMMENT @wy1433
    • drop index ignores the case of index names @wy1433
    • Support single-line SQL comments @wy1433
    • Added manual_split_region for manually splitting a region
    • Added a memory-limit feature
    • Unique indexes in CREATE TABLE now default to global; specify unique index local to make one local, or set -unique_index_default_global=false to default back to local
    • Ordinary indexes still default to local; specify index global to make one global, or set -normal_index_default_global=true to default to global
    • -open_nonboolean_sql_forbid=true forbids non-boolean expressions in and/or computation; default false
    • -open_non_where_sql_forbid=true forbids UPDATE/DELETE without a WHERE condition; default false
    • -limit_unappropriate_sql=true limits manual SQL (executed fewer than 3 times per db) to a concurrency of 1 per store, preventing stores from being overwhelmed; default false
    • Support fast import via ingest SST (not yet adapted to the open-source build; to be opened later)

    Bug Fixes:

    • Fixed a bug where constants in subqueries could not substitute for '?'
    • Fixed INSERT SELECT not updating the auto-increment primary key
    • Fixed a tdigest aggregation issue
    • Fixed various transaction issues
    • Fixed a hang when 1PC and 2PC were mixed
    • Fixed 3-replica inconsistency with UPDATE SET now(); rand() is no longer treated as const
    • Fixed concurrent writes to primary and secondary with global indexes
    • Fixed a baikaldb infinite loop and core dump
    • Fixed a core dump when a subquery joins a globally indexed table
    • Fixed fake binlog not querying back the primary region
    • Fixed ingest failure when delayed deletion is triggered during add peer
    • Fixed a LIKE-predicate bug involving slots
    • Fixed a current_timestamp issue
    • Fixed a scalar-subquery core dump and a multi-column issue
    • Fixed global index + row_ttl
    • Fixed evaluation of ORDER BY with more than one expression
    • Fixed write-binlog error codes
    • Fixed a subquery kill issue
    • Fixed a datediff issue
    • Fixed match against type inference
    • Fixed the values function not taking effect with global index + INSERT ... ON DUPLICATE KEY UPDATE

    Performance Improvements:

    • Regions and QoS use double buffering to reduce lock contention
    • Equal values scanned from the JOIN driving table are deduplicated before being used as conditions on the driven table, reducing data volume
    • userinfo uses double buffering for better performance
    • QoS optimizations
    • The meta CF no longer uses ingest
    • Optimized tdigest storage
    • CompactionFilter only runs on the bottom 2 levels
    • Region split optimization shortens the write-forbidden window
    • Meta performance and lock optimizations
    • limit pushdown @wy1433 #174
    • Comment-parsing performance optimization
    • MemRowDescriptor caching reduces overhead with many columns
    • Local-index memory optimization
    • Index-key parsing optimization
    • db-to-store calls are now asynchronous, reducing the bthread count under high concurrency
    Source code(tar.gz)
    Source code(zip)
  • v2.0.1(Jul 30, 2021)

    Bug Fixes:

    Fixed possible data loss when a snapshot transfer is retried; a verification step was added to install snapshot.

    New Features:

    • binlog metadata tables support TTL
    • Compatibility with .NET Core's differing treatment of string and binary: supports the _binary EXPR usage and returning blob types; with field_charsetnr_set_by_client=true the returned encoding is the client's, otherwise 0
    • Added the function last_insert_id(expr), which saves an expression's result at session level
    • Case-insensitive database names, table names, and aliases, enabled with schema_ignore_case=true
    • Disambiguation when multiple queried tables share a column name and the query omits the table name, enabled with disambiguate_select_name=true
    • peer_load_balance gains a balance_add_peer_num parameter (default: migrate 10 per table), dynamically adjustable without restarting meta to tune migration speed
    • leader_load_balance relaxes the candidate-peer selection restrictions for transfer leader

    Source code(tar.gz)
    Source code(zip)
    baikal-all-v2.0.1-centos-7.tgz(162.42 MB)
    baikal-all-v2.0.1-centos-7.tgz.sha256sum(108 bytes)
    baikal-all-v2.0.1-centos-8.tgz(168.77 MB)
    baikal-all-v2.0.1-centos-8.tgz.sha256sum(108 bytes)
    baikal-all-v2.0.1-ubuntu-16.04.tgz(140.44 MB)
    baikal-all-v2.0.1-ubuntu-16.04.tgz.sha256sum(112 bytes)
    baikal-all-v2.0.1-ubuntu-18.04.tgz(145.64 MB)
    baikal-all-v2.0.1-ubuntu-18.04.tgz.sha256sum(112 bytes)
    baikal-all-v2.0.1-ubuntu-20.04.tgz(176.34 MB)
    baikal-all-v2.0.1-ubuntu-20.04.tgz.sha256sum(112 bytes)
  • v2.0.0(May 20, 2021)


    Rolling upgrade order: BaikalMeta => BaikalStore => BaikalDB. Unless otherwise stated, subsequent upgrades should follow the same order.

    New Features:

    • Support building with CMake
    • Added bvar monitoring
    • Fixed some statistics issues
    • Support the b'11', 0b11, x'AA', 0xAA literal forms
    • Support subqueries
    • Support self-join: table t1 join table t2
    • Support binlog
    • Added online DDL for global indexes: alter table xxx add index global idx(filed);
    • Cost estimates are combined with rules, improving index-selection accuracy
    • Temporarily use histogram + internal distinct count for equality estimation; cmsketch accuracy is low on large tables
    • Support the information_schema flow; to add new information_schema tables, see src/common/information_schema.cpp
    • Adjusted the statistics and rule-based index selection to choose more accurately
    • Added virtual indexes: alter table id_name_c add VIRTUAL index idx_name (name); inspect affected SQL with show virtual index; used to assess impact before creating a real index
    • Use docker-compose to build a minimal three-node cluster
    • Added roaringbitmap for exact UV-style counting; usage is similar to hll
    • Added TDIGEST for quantile estimation
    • Added db-side probing of stores so the system recovers quickly when a single store hangs, making the cluster more stable
    • Before addpeer, check whether RocksDB is stalling and reject early, reducing the risk of a write-forbidden state
    • Added dataindex; during install snapshot, it is checked whether the follower's data is already up to date
    • Default compression changed from snappy to lz4
    • Added zstd compression at the bottommost level, enabled with -enable_bottommost_compression
    • add funcs: date_add, date_sub, utc_timestamp, last_insert_id
    • Store adds token-bucket throttling (QoS) to cap concurrency per SQL class, reducing the impact of traffic bursts on normal SQL; off by default
    • Per-request memory limits, controlled via db_sql_memory_bytes_limit (8G) and store_sql_memory_bytes_limit (16G)
    • Periodically call the tcmalloc interface to reclaim memory, see src/common/memory_profile.cpp
    • Support the LOAD DATA statement; the loaded file currently must reside on the server side (path relative to baikaldb)
    • A table spanning multiple machine rooms can designate a primary room (main_logical_room); the leader always stays in it
    • Added the concept of masked indexes: a masked index is never selected. Online index operations were adjusted accordingly:
      • drop index puts the index into the masked state; the deletion flow starts automatically after 2 days (table_tombstone_gc_time_s), so a mistakenly dropped index can be quickly restored to stop losses
      • After the add index flow completes, the index is in the masked state and requires a second restore confirmation, preventing a just-completed index from silently affecting old workloads
      • Convert a masked index back to a normal index: alter table xxx restore index idx
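    Putting the masked-index lifecycle above into concrete statements (table and index names are illustrative; the statement forms follow the bullets):

```sql
-- Build a global index online; after the flow completes it is masked:
alter table t_user add index global idx_name (name);

-- Second confirmation: turn the masked index into a normal one:
alter table t_user restore index idx_name;

-- Dropping an index first masks it; physical deletion starts after
-- table_tombstone_gc_time_s (2 days by default), during which it can
-- still be brought back with restore index:
alter table t_user drop index idx_name;
```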

    Bug Fixes:

    • Fixed a meta double-buffer issue
    • Fixed table inconsistency after restore following repeated drop/create of the same table name
    • Fixed a bug where a single-statement transaction could stall raft
    • Fixed the previously inaccurate GetApproximateSizes
    • Fixed some possible memory leaks
    • Fixed a read-only transaction bug and optimized some flows
    • prepare insert current time default value
    • fix count(*) return NULL expected 0 when filter expr is always false
    • ignore non-json format comment for sql string
    • update last_insert_id when client set the value

    Performance Improvements:

    • Seek optimization for primary tables, covering indexes, and global indexes; removed the table_record to mem_row conversion
    • Optimized SELECT prepared-statement performance through plan reuse
    • RocksDB transaction locks changed to bthread locks, which no longer block pthreads
    • Improved cstore scan performance
    • Reduced the overhead of QoS and memlimit
    • Added a recommended conf configuration
    Source code(tar.gz)
    Source code(zip)
    baikal-all-v2.0.0-centos-7.tgz(159.89 MB)
    baikal-all-v2.0.0-centos-7.tgz.sha256sum(108 bytes)
    baikal-all-v2.0.0-centos-8.tgz(168.05 MB)
    baikal-all-v2.0.0-centos-8.tgz.sha256sum(108 bytes)
    baikal-all-v2.0.0-ubuntu-16.04.tgz(139.85 MB)
    baikal-all-v2.0.0-ubuntu-16.04.tgz.sha256sum(112 bytes)
    baikal-all-v2.0.0-ubuntu-18.04.tgz(144.99 MB)
    baikal-all-v2.0.0-ubuntu-18.04.tgz.sha256sum(112 bytes)
    baikal-all-v2.0.0-ubuntu-20.04.tgz(175.65 MB)
    baikal-all-v2.0.0-ubuntu-20.04.tgz.sha256sum(112 bytes)
  • v1.1.3(Jul 10, 2020)

    New Features

    • Added the rocks_use_partitioned_index_filters option to decide whether Partitioned Index Filters are used; default false. When enabled, configure memory usage via rocks_block_cache_size_mb; rocks_max_open_files no longer applies and all SSTs stay open
    • Index ranges can combine IN and BETWEEN
    • Replaced boost.regex with re2
    • Call DeleteFilesInRange before DeleteRange so fully covered files are deleted first, reducing compaction pressure

    Bug Fixes

    • Fixed a cstore scan parsing issue
    • Fixed primary parsing issues in some scenarios
    Source code(tar.gz)
    Source code(zip)
  • v1.1.2(Jun 3, 2020)


    Upgrading from v1.0.x to v1.1.x requires first updating to the compatibility version v1.1.0, otherwise data loss may occur. v1.1.1 is abandoned.


    1. Rolling-update stores to v1.1.0
    2. Rolling-update dbs to v1.1.0
    3. Rolling-update stores to v1.1.2
    4. Rolling-update dbs to v1.1.2. Rollback proceeds in the reverse order, and only down to v1.0.2



    • Added statistics collection
    • Added cost-based index selection driven by statistics
    • SST backup gains a stream interface, reducing memory usage and fixing backup failures for large regions
    • Support some time functions and common string functions
    • print_time_us applies to baikaldb's notice-log printing; set it to 0 to print everything
    • Aliases are MySQL-compatible; WHERE conditions no longer support aliases
    • Upgraded brpc to 0.9.7
    • Upgraded braft to v1.1.1
    • Upgraded RocksDB to 6.8.1
    • RocksDB uses format_version=4, so after upgrading you can only roll back to v1.0.2 (RocksDB 5.12.4 does not support it)
    • Partitioned Index Filters are used; RocksDB's main memory usage can be configured via rocks_block_cache_size_mb, default 8G; raising this parameter is recommended in production
    • Inverted-index posting lists can be stored using arrow

    Bug Fix:

    • Fixed memory not being released because the SST backup interface did not release its snapshot
    • Fixed table inconsistency after restore following repeated drop/create of the same table name


    • Seek optimization for primary tables, covering indexes, and global indexes; removed the table_record to mem_row conversion
    • Optimized SELECT prepared-statement performance through plan reuse
    Source code(tar.gz)
    Source code(zip)
  • v1.0.2(Jun 1, 2020)

  • v1.1.0(Jun 1, 2020)


    This release is a transaction-version compatibility optimization. A rolling update to any version after v1.1.0 must first roll through v1.1.0, otherwise data loss may occur.

    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Mar 13, 2020)

    Bug Fixes:

    • Fixed direct failure when the response packet exceeds -max_body_size #79
    • Fixed LIKE prefix-match routing #79
    • Added FLAGS_servitysinglelog to control whether glog logs are printed separately #78

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Mar 10, 2020)
