HStreamDB

The database built for IoT streaming data storage and real-time stream processing.

Main Features

  • Push real-time data to your apps

    By subscribing to streams in HStreamDB, your apps receive every update to a data stream as soon as it happens, making them more responsive.

    You can also replace message brokers with HStreamDB: everything you do with a message broker can be done better with HStreamDB.

  • Stream processing with familiar SQL

    HStreamDB provides built-in support for event-time based stream processing. You can use your familiar SQL to perform basic filtering and transformation operations, statistics and aggregation based on multiple kinds of time windows and even joining between multiple streams.

  • Easy integration with a variety of external systems

    With connectors provided, you can easily integrate HStreamDB with other external systems, such as MQTT Broker, MySQL, Redis and ElasticSearch. More connectors will be added.

  • Real-time queries based on live materialized views

    By maintaining materialized views incrementally, HStreamDB gives you ahead-of-the-curve data insights that let you respond to your business quickly.

  • Reliable persistent storage with low latency

    With an optimized storage design based on LogDevice, not only can HStreamDB provide reliable and persistent storage but also guarantee excellent performance despite large amounts of data written to it.

  • Seamless scaling and high availability

    With an architecture that separates compute from storage, both layers of HStreamDB can be scaled independently and seamlessly. A consensus algorithm based on optimized Paxos replicates data securely across multiple nodes, ensuring high availability of the system.
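
As an illustration of the SQL features above (the stream, columns, and view here are hypothetical examples, and the exact SQL dialect depends on your HStreamDB version):

```sql
-- Continuous query: push readings that match the filter to subscribers.
CREATE STREAM demo WITH (FORMAT = "JSON");
SELECT temperature, humidity FROM demo WHERE temperature > 30;

-- Materialized view: maintained incrementally as new records arrive.
CREATE VIEW demo_stats AS
  SELECT temperature, COUNT(*) AS readings
  FROM demo GROUP BY temperature;
```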

For more information, please visit HStreamDB homepage.

Quickstart

For detailed instructions, follow HStreamDB quickstart.

  1. Install HStreamDB.
  2. Start a local standalone HStream server.
  3. Start HStreamDB's interactive CLI and create your first stream.
  4. Run a continuous query.
  5. Start another interactive CLI, then insert some data into the stream and get query results.
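
The steps above can be sketched as the following CLI session (stream and column names are examples only; exact syntax may vary by version):

```sql
-- In the first CLI: create a stream, then run a continuous query.
CREATE STREAM demo WITH (FORMAT = "JSON");
SELECT * FROM demo WHERE humidity > 70;

-- In a second CLI: insert data; matching records appear in the first CLI.
INSERT INTO demo (temperature, humidity) VALUES (22, 80);
```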

Documentation

Check out the documentation.

Community, Discussion, Contribution and Support

You can reach the HStreamDB community and developers via Slack, Twitter, and YouTube.

Please submit any bugs, issues, and feature requests to hstreamdb/hstream.

How to build (for developers only)

Prerequisites

  1. Make sure you have Docker installed, and can run docker as a non-root user.
  2. You have python3 installed.
  3. You can clone the GitHub repository via SSH.

Get the source code

git clone --recursive [email protected]:hstreamdb/hstream.git
cd hstream/

Update images

script/dev-tools update-images

Start all required services

You must have all required services started before entering an interactive shell to do further development (especially for running tests).

script/dev-tools start-services

To see information about all started services, run

script/dev-tools info

A dev cluster is required to run tests. All data are stored under your-project-root/local-data/logdevice.

Enter an interactive shell

script/dev-tools shell

Build like other Haskell projects

Inside the interactive shell, you have all extra dependencies installed.

$ make
$ cabal build all

License

HStreamDB is under the BSD 3-Clause license. See the LICENSE file for details.

Acknowledgments

  • Thanks LogDevice for the powerful storage engine.
Issues
  • bug

    # hadmin server status --host 172.16.3.179 --port 6570
    +---------+---------+-------------------+
    | node_id | state   | address           |
    +---------+---------+-------------------+
    | 1       | Running | 172.16.3.179:6570 |
    | 2       | Running | 172.16.3.181:6570 |
    | 3       | Running | 172.16.3.182:6570 |
    +---------+---------+-------------------+

    # hadmin store status
    +----+---------+----------+-------+-----------+----------+---------+-------------+---------------+------------+---------------+
    | ID | NAME    | PACKAGE  | STATE | UPTIME    | LOCATION | SEQ.    | DATA HEALTH | STORAGE STATE | SHARD OP.  | HEALTH STATUS |
    +----+---------+----------+-------+-----------+----------+---------+-------------+---------------+------------+---------------+
    | 0  | store-0 | 99.99.99 | ALIVE | 8 min ago |          | ENABLED | HEALTHY(1)  | READ_WRITE(1) | ENABLED(1) | HEALTHY       |
    | 1  | store-1 | 99.99.99 | ALIVE | 8 min ago |          | ENABLED | HEALTHY(1)  | READ_WRITE(1) | ENABLED(1) | HEALTHY       |
    | 2  | store-2 | 99.99.99 | ALIVE | 8 min ago |          | ENABLED | HEALTHY(1)  | READ_WRITE(1) | ENABLED(1) | HEALTHY       |
    +----+---------+----------+-------+-----------+----------+---------+-------------+---------------+------------+---------------+

    > show streams;
    Succeeded. No results.
    > create stream demo;
    demo
    > INSERT INTO demo (temperature, humidity) VALUES (22, 80);
    Failed to get any available server.

    I deployed the cluster manually with Docker following the documentation, and it appears to be healthy, yet inserting data reports an error. Why?

    opened by 2779382063 7
  • [skip-ci] dev-deploy: add docker node-exporter

    PR Description

    Type of change

    • [x] New feature

    Summary of the change and which issue is fixed

    Main changes: dev-deploy: add docker node-exporter


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 6
  • [skip ci] dev-deploy: add cfg for memory and cpus

    PR Description

    Type of change

    • [x] New feature

    Main changes: add cfg for memory and cpus


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 6
  • write test

    I deployed with Docker manually according to the documentation. When I ran multi-record write tests through hstream-java, I found the write speed quite slow: fewer than 1,000 records per second. I don't know whether my test method is wrong or something else is the cause. Are there any instructions or examples for stress testing?

    Test program that writes 1 million records:

    package com.evoc.hstream;

    import java.util.ArrayList;
    import java.util.Date;
    import java.util.List;
    import java.util.Random;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutionException;

    import io.hstream.BufferedProducer;
    import io.hstream.HRecord;
    import io.hstream.HStreamClient;
    import io.hstream.Record;
    import io.hstream.RecordId;

    public class HstreamTestApplication {

        public static void main(String[] args) {
            try {
                final String serviceUrl = "172.16.3.179:6570,172.16.3.181:6570,172.16.3.182:6570";
                HStreamClient client = HStreamClient.builder().serviceUrl(serviceUrl).build();
                BufferedProducer producer = client.newBufferedProducer()
                        .stream("demo2")
                        .recordCountLimit(100)
                        .flushIntervalMs(10)
                        .maxBytesSize(819200)
                        .build();

                // Build 1,000,000 records up front.
                Random random = new Random();
                List<HRecord> list = new ArrayList<>();
                for (int i = 0; i < 1000000; i++) {
                    long time = new Date().getTime();
                    int id = random.nextInt(1000000);
                    HRecord hRecord = HRecord.newBuilder()
                            .put("id", id)
                            .put("tt", time)
                            .put("test", 10)
                            .build();
                    list.add(hRecord);
                }

                list.parallelStream().forEach(e -> {
                    // Note: the lock plus the blocking future.get() serialize the
                    // writes one by one, which defeats the buffered producer's batching.
                    synchronized (HstreamTestApplication.class) {
                        Record record = Record.newBuilder().hRecord(e).build();
                        CompletableFuture<RecordId> future = producer.write(record);
                        try {
                            future.get().getBatchId();
                        } catch (InterruptedException | ExecutionException e1) {
                            e1.printStackTrace();
                        }
                    }
                });

                producer.close();
                client.close();
            } catch (Exception e1) {
                e1.printStackTrace();
            }
        }
    }

    opened by 2779382063 5
  • feat: use https as submodule url

    Signed-off-by: Alex Chi [email protected]

    Pull Request Template

    Description

    Not everyone has SSH key added to their local machine. For most cases, developers simply run git clone with only HTTPS credentials or no credentials. Therefore, this PR changes submodule to HTTPS upstream instead of the SSH ones.

    Type of change

    Please delete options that are not relevant.

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [x] This change requires a documentation update

    How Has This Been Tested?

    ~/Work/hstream/external/gRPC-haskell
    $ git remote get-url origin
    https://github.com/hstreamdb/gRPC-haskell.git
    

    Checklist:

    Must:

    • [x] I have run format.sh under script
    • [x] I have performed a self-review of my own code
    • [x] I have commented my code, particularly in hard-to-understand areas
    • [x] New and existing unit tests pass locally with my changes

    Semi-Must

    • [x] I have added new tests that prove my fix is effective or that my feature works
    • [x] I have checked my code and corrected any misspellings

    Optional:

    • [x] My code follows the style guidelines of this project
    • [x] I have made corresponding changes to the documentation
    • [x] My changes generate no new warnings
    • [x] Any dependent changes have been merged and published in downstream modules
    opened by skyzh 5
  •  (ConnectionFailure Network.Socket.connect: <socket: 12>: does not exist (Connection refused))

    Creating a stream via ctl reports an error:

    CREATE STREAM demo WITH (FORMAT = "JSON");

    HttpExceptionRequest Request {
      host = "localhost"
      port = 6570
      secure = False
      requestHeaders = [("Content-Type","application/json; charset=utf-8")]
      path = "/create/query"
      queryString = ""
      method = "POST"
      proxy = Nothing
      rawBody = False
      redirectCount = 10
      responseTimeout = ResponseTimeoutDefault
      requestVersion = HTTP/1.1
      proxySecureMode = ProxySecureWithConnect
    }
    (ConnectionFailure Network.Socket.connect: <socket: 12>: does not exist (Connection refused))

    HStream-Server reports an error:

        hstream-server: LOGS_SECTION_MISSING {name:LOGS_SECTION_MISSING, description:LOGS_SECTION_MISSING: Configuration file misses logs section, callstack:CallStack (from HasCallStack): throwStreamErrorIfNotOK', called at ./HStream/Store/Exception.hs:324:28 in hstream-store-0.1.0.0-00fd332cb31e900c02128ac35e4d993df94b7e2480f7781981c70e854378ff8e:HStream.Store.Exception}

    opened by freezeding521716 5
  • [skip ci] dev-deploy: upload store conf

    PR Description

    Type of change

    • [x] New feature

    Summary of the change and which issue is fixed

    Main changes: upload store configuration to remote host


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 4
  • hstreamdb/hstream:v0.6.0 deploy failure

    CentOS7, kernel 5.4.163-1.el7.elrepo.x86_64, Docker 20.10

    Deploy a LogDevice cluster: a ld-admin-server and three logdeviced

    • zookeeper and ld-admin-server in 10.0.0.2
    • the 1st logdeviced in 10.0.0.3
    • the 2nd logdeviced in 10.0.0.4
    • the 3rd logdeviced in 10.0.0.5

    1. run a zookeeper in 10.0.0.2
        docker run
            -e 'ZOO_CONF_DIR=/conf'
            -e 'ZOO_DATA_DIR=/data'
            -e 'ZOO_DATA_LOG_DIR=/datalog'
            -e 'ZOO_LOG_DIR=/logs'
            -e 'ZOO_TICK_TIME=3000'
            -e 'ZOO_INIT_LIMIT=5'
            -e 'ZOO_SYNC_LIMIT=2'
            -e 'ZOO_AUTOPURGE_PURGEINTERVAL=0'
            -e 'ZOO_AUTOPURGE_SNAPRETAINCOUNT=3'
            -e 'ZOO_MAX_CLIENT_CNXNS=60'
            -e 'ZOO_STANDALONE_ENABLED=true'
            -e 'ZOO_ADMINSERVER_ENABLED=true'
            -v /etc/localtime:/etc/localtime:ro
            -p 2181:2181
            --restart always
            --name zookeeper
            -d zookeeper:3.5.6
    
    2. run ld-admin-server in 10.0.0.2
        mkdir -p $HOME/logdevice_conf
        docker run -d --network host --name logdevice_admin
            -v $HOME/logdevice_conf:/etc/logdevice/
            hstreamdb/hstream:v0.6.0 ld-admin-server
            --config-path /etc/logdevice/logdevice.conf
            --admin-port 6440
            --enable-maintenance-manager
            --enable-safety-check-periodic-metadata-update
            --maintenance-log-snapshotting
    

    3. run the 1st logdeviced in 10.0.0.3

        mkdir -p $HOME/logdeviced/{conf,store}
        mkdir -p $HOME/logdeviced/store/shard0
        echo 1 | tee $HOME/logdeviced/store/NSHARDS
        docker run -d --network host --name logdeviced
            -v $HOME/logdeviced/conf:/etc/logdevice/
            -v $HOME/logdeviced/store:/data/logdevice/
            hstreamdb/hstream:v0.6.0 logdeviced
            --config-path /etc/logdevice/logdevice.conf
            --name server-0
            --address 10.0.0.3
            --local-log-store-path /data/logdevice
            --roles storage,sequencer
            --port 4440
            --gossip-port 4441
            --admin-port 6440
            --num-shards 1
    

    4. run the 2nd logdeviced in 10.0.0.4

        mkdir -p $HOME/logdeviced/{conf,store}
        mkdir -p $HOME/logdeviced/store/shard0
        echo 1 | tee $HOME/logdeviced/store/NSHARDS
        docker run -d --network host --name logdeviced
            -v $HOME/logdeviced/conf:/etc/logdevice/
            -v $HOME/logdeviced/store:/data/logdevice/
            hstreamdb/hstream:v0.6.0 logdeviced
            --config-path /etc/logdevice/logdevice.conf
            --name server-1
            --address 10.0.0.4
            --local-log-store-path /data/logdevice
            --roles storage,sequencer
            --port 4440
            --gossip-port 4441
            --admin-port 6440
            --num-shards 1
    

    5. run the 3rd logdeviced in 10.0.0.5

        mkdir -p $HOME/logdeviced/{conf,store}
        mkdir -p $HOME/logdeviced/store/shard0
        echo 1 | tee $HOME/logdeviced/store/NSHARDS
        docker run -d --network host --name logdeviced
            -v $HOME/logdeviced/conf:/etc/logdevice/
            -v $HOME/logdeviced/store:/data/logdevice/
            hstreamdb/hstream:v0.6.0 logdeviced
            --config-path /etc/logdevice/logdevice.conf
            --name server-2
            --address 10.0.0.5
            --local-log-store-path /data/logdevice
            --roles storage,sequencer
            --port 4440
            --gossip-port 4441
            --admin-port 6440
            --num-shards 1
    

    6. logdevice.conf
    {
        "cluster": "logdevice-first",
        "server_settings": {
            "enable-node-self-registration": "true",
            "enable-nodes-configuration-manager": "true",
            "enable-cluster-maintenance-state-machine": "true",
            "use-nodes-configuration-manager-nodes-configuration": "true"
        },
        "client_settings": {
            "enable-nodes-configuration-manager": "true",
            "use-nodes-configuration-manager-nodes-configuration": "true",
            "admin-client-capabilities": "true"
        },
        "internal_logs": {
            "config_log_deltas": {
                "replicate_across": {
                    "node": 3
                }
            },
            "config_log_snapshots": {
                "replicate_across": {
                    "node": 3
                }
            },
            "event_log_deltas": {
                "replicate_across": {
                    "node": 3
                }
            },
            "event_log_snapshots": {
                "replicate_across": {
                    "node": 3
                }
            },
            "maintenance_log_deltas": {
                "replicate_across": {
                    "node": 3
                }
            },
            "maintenance_log_snapshots": {
                "replicate_across": {
                    "node": 3
                }
            }
        },
        "metadata_logs": {
            "nodeset": [
                0,1,2
            ],
            "replicate_across": {
              "node": 3
            }
        },
        "zookeeper": {
            "zookeeper_uri": "ip://10.0.0.2:2181",
            "timeout": "30s"
        }
    }
    

    7. bootstrap the LogDevice cluster
    docker run -it --rm \
        --name nodes-config \
        --network host \
        hstreamdb/hstream:v0.6.0 \
        hadmin --host 10.0.0.2 --port 6440 nodes-config bootstrap --metadata-replicate-across node:3 
    
    Successfully bootstrapped the cluster, new nodes configuration version: 7
    Took 0.019s
    

    Issues #664 and #663 use the same deployment method: the three logdeviced nodes cannot do node-self-registration and the sequencer cannot be found. Similar messages:

        Could not send gossip to WF0:N1:2 (10.0.0.4:4441): CONNFAILED: connection failed. Trying another node.
        No available sequencer node for log 4611686018427387899. All sequencer nodes are unavailable.
        Error during sequencer lookup for log 4611686018427387899 (NOSEQUENCER), reporting NOSEQUENCER.

    opened by DerekSun96 4
  • Done. No results.

    CentOS7, Docker 20.10.11

    Run command:

    docker run -v /etc/localtime:/etc/localtime:ro -p 2181:2181 --restart always --name zookeeper -d zookeeper
    docker run -td --rm --name some-hstream-store -v $HOME/dbdata:/data/store --network host hstreamdb/hstream:v0.6.2 ld-dev-cluster --root /data/store --use-tcp
    docker run -td --rm --name some-hstream-server -v $HOME/dbdata:/data/store --network host hstreamdb/hstream:v0.6.2 hstream-server --port 6570 --store-config /data/store/logdevice.conf --server-id 10
    docker run -it --rm --name some-hstream-cli -v $HOME/dbdata:/data/store --network host hstreamdb/hstream:v0.6.2 hstream-client --port 6570 --client-id 1
    docker exec -it some-hstream-cli hstream-client --port 6570 --client-id 2
    

    In hstream-client:

    > CREATE STREAM test;
    > INSERT INTO test (m1, m2) VALUES (3, 4);
    Done. No results.

    But it doesn't return the expected result: {"m1":3,"m2":4}
    
    opened by DerekSun96 3
  • hstream-server: error while loading shared libraries: libgrpc.so.14: cannot open shared object file: No such file or directory

    Describe the bug. Environment: Docker 20.10.8, macOS 11.3.1.

    Start HStream Storage with Docker by running:

        docker run -td --rm --name some-hstream-store -v /dbdata:/data/store --network host hstreamdb/hstream ld-dev-cluster --root /data/store --use-tcp

    But it reports an error:

        hstream-server: error while loading shared libraries: libgrpc.so.14: cannot open shared object file: No such file or directory

    Please help: why does this error occur?

    opened by SongOf 3
  • dev-deploy: add Grafana

    PR Description

    Type of change

    • [x] New feature

    Summary of the change and which issue is fixed

    Main changes: add Grafana to dev-deploy


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 2
  • [skip ci] dev-deploy: opts for `node_exporter`

    PR Description

    Type of change

    • [x] Bug fix

    Summary of the change and which issue is fixed

    Main changes: add options for `node_exporter`, as in:

    docker run -d \
      --net="host" \
      --pid="host" \
      -v "/:/host:ro,rslave" \
      quay.io/prometheus/node-exporter:latest \
      --path.rootfs=/host
    

    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 1
  • [skip ci] dev-deploy: add HStream http

    PR Description

    Type of change

    • [x] New feature
    • [x] Documentation updates required

    Summary of the change and which issue is fixed

    Main changes: start http server in dev deploy


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 0
  • HStream IO

    PR Description

    Type of change

    • [ ] Bug fix
    • [X] New feature
    • [ ] Breaking change
    • [ ] Documentation updates required

    Summary of the change and which issue is fixed

    Main changes: HStream IO


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by s12f 0
  • implement new shard model

    PR Description

    Type of change

    • [x] New feature

    Summary of the change and which issue is fixed

    Main changes:

    • Add a shardCount field to the stream proto
    • Create all shards when the server receives CreateStreamRPC
    • New getShard function that accepts streamName and orderingKey as parameters and returns the logId of the corresponding shard
    • Add readShardRPC to support reading records from a specific shard
    • Support creating a subscription from a fixed offset

    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by YangKian 0
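
As a rough illustration of the getShard mapping described in the last PR above (the hashing scheme, class, and method names are assumptions for this sketch, not HStreamDB's actual implementation), hashing the ordering key onto a fixed shard count keeps all records with the same key on the same shard:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ShardMapper {
    // Map an ordering key onto one of `shardCount` shards by hashing the key.
    // A given key always maps to the same shard, so per-key ordering is kept
    // while different keys spread across shards.
    public static int shardFor(String orderingKey, int shardCount) {
        CRC32 crc = new CRC32();
        crc.update(orderingKey.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % shardCount);
    }

    public static void main(String[] args) {
        // Same key, same shard; different keys may land on different shards.
        System.out.println(shardFor("device-42", 4) == shardFor("device-42", 4));
    }
}
```

The real getShard resolves a shard to a LogDevice logId rather than a plain index, but the key-to-shard stability shown here is the property that makes reading a key's records from one specific shard possible.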