The database built for IoT streaming data storage and real-time stream processing.

Overview

HStreamDB

Main Features

  • Push real-time data to your apps

    By subscribing to streams in HStreamDB, any update to a data stream is pushed to your apps in real time, making them more responsive.

    You can also use HStreamDB in place of a message broker; everything you do with a message broker can be done better with HStreamDB (see the client sketch after this list).

  • Stream processing with familiar SQL

    HStreamDB provides built-in support for event-time-based stream processing. You can use familiar SQL to perform basic filtering and transformations, statistics and aggregations over multiple kinds of time windows, and even joins between multiple streams.

  • Easy integration with a variety of external systems

    With the provided connectors, you can easily integrate HStreamDB with external systems such as MQTT brokers, MySQL, Redis, and Elasticsearch. More connectors will be added.

  • Real-time queries based on live materialized views

    By maintaining materialized views incrementally, HStreamDB lets you gain up-to-date data insights that respond to your business quickly.

  • Reliable persistent storage with low latency

    With an optimized storage design based on LogDevice, HStreamDB not only provides reliable, persistent storage but also maintains excellent performance even with large amounts of data written to it.

  • Seamless scaling and high availability

    With an architecture that separates compute from storage, the compute and storage layers of HStreamDB can each be scaled seamlessly and independently. A consensus algorithm based on optimized Paxos replicates data securely across multiple nodes, ensuring high availability.
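
As a rough illustration of the write path described above, here is a minimal Java sketch that uses the hstream-java client the same way the client code in the issue reports later on this page does (HStreamClient, Producer, HRecord, Record). The class name, service URL, and stream name are placeholders, and the stream is assumed to already exist (for example, created through the interactive CLI); consult the client documentation for the authoritative API.

import io.hstream.HRecord;
import io.hstream.HStreamClient;
import io.hstream.Producer;
import io.hstream.Record;

public class WriteExample {
    public static void main(String[] args) throws Exception {
        // Placeholder address: point this at a running HStream server.
        final String serviceUrl = "127.0.0.1:6570";
        HStreamClient client = HStreamClient.builder().serviceUrl(serviceUrl).build();

        // Write one JSON-style record to an existing stream named "demo"
        // (assumed to have been created already, e.g. via the interactive CLI).
        Producer producer = client.newProducer().stream("demo").build();
        HRecord hRecord = HRecord.newBuilder()
                .put("temperature", 22)
                .put("humidity", 80)
                .build();
        Record record = Record.newBuilder().hRecord(hRecord).build();
        producer.write(record).join();  // write() returns a CompletableFuture

        client.close();
    }
}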

For more information, please visit HStreamDB homepage.

Quickstart

For detailed instructions, follow HStreamDB quickstart.

  1. Install HStreamDB.
  2. Start a local standalone HStream server.
  3. Start HStreamDB's interactive CLI and create your first stream.
  4. Run a continuous query.
  5. Start another interactive CLI, then insert some data into the stream and get query results.

Documentation

Check out the documentation.

Community, Discussion, Contribution, and Support

You can reach the HStreamDB community and developers via the following channels:

Please submit any bugs, issues, and feature requests to hstreamdb/hstream.

How to build (for developers only)

Prerequisites

  1. Make sure you have Docker installed and can run docker as a non-root user.
  2. Make sure you have python3 installed.
  3. Make sure you can clone the GitHub repository via SSH key.

Get the source code

git clone --recursive [email protected]:hstreamdb/hstream.git
cd hstream/

Update images

script/dev-tools update-images

Start all required services

You must have all required services started before entering an interactive shell to do further development (especially for running tests).

script/dev-tools start-services

To see information about all started services, run

script/dev-tools info

A dev cluster is required for running tests. All data is stored under your-project-root/local-data/logdevice.

Enter an interactive shell

script/dev-tools shell

Build as other Haskell projects

Inside the interactive shell, all the extra dependencies are already installed.

I have no name!@<container-id>:~$ make
I have no name!@<container-id>:~$ cabal build all

License

HStreamDB is under the BSD 3-Clause license. See the LICENSE file for details.

Acknowledgments

  • Thanks to LogDevice for the powerful storage engine.

Comments
  • Unable to connect to JAVA client through JDK 11 -

    Exception in thread "main" java.lang.NoSuchMethodError: kotlinx.coroutines.AbstractCoroutine.<init>(Lkotlin/coroutines/CoroutineContext;ZZ)V
        at kotlinx.coroutines.future.CompletableFutureCoroutine.<init>(Future.kt:51)
        at kotlinx.coroutines.future.FutureKt.future(Future.kt:42)
        at kotlinx.coroutines.future.FutureKt.future$default(Future.kt:34)
        at io.hstream.impl.UtilsKt.futureForIO(Utils.kt:124)
        at io.hstream.impl.UtilsKt.futureForIO$default(Utils.kt:123)
        at io.hstream.impl.UtilsKt.unaryCallWithCurrentUrlsAsync(Utils.kt:115)
        at io.hstream.impl.UtilsKt.unaryCallWithCurrentUrls(Utils.kt:119)
        at io.hstream.impl.HStreamClientKtImpl.<init>(HStreamClientKtImpl.kt:53)
        at io.hstream.impl.HStreamClientBuilderImpl.build(HStreamClientBuilderImpl.java:74)
        at MainClass.main(MainClass.java:6)

    Process finished with exit code 1

    CODE:

    import io.hstream.*;

    public class MainClass {
        public static void main(String[] args) throws Exception {
            final String serviceUrl = "127.0.0.1:6570";
            HStreamClient client = HStreamClient.builder().serviceUrl(serviceUrl).build();
            System.out.println("Connected");
        }
    }

    # quick-start.yaml

    version: "3.5"

    services:
      hserver0:
        image: hstreamdb/hstream:v0.9.3
        depends_on:
          - zookeeper
          - hstore
        ports:
          - "127.0.0.1:6570:6570"
        expose:
          - 6570
        networks:
          - hstream-quickstart
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - /tmp:/tmp
          - data_store:/data/store
        command:
          - bash
          - "-c"
          - |
            set -e
            /usr/local/script/wait-for-storage.sh hstore 6440 zookeeper 2181 600 \
            /usr/local/bin/hstream-server \
            --host 0.0.0.0 --port 6570 \
            --internal-port 6571 \
            --server-id 100 \
            --seed-nodes "$$(hostname -I | awk '{print $$1}'):6571,hserver1:6573" \
            --address $$(hostname -I | awk '{print $$1}') \
            --zkuri zookeeper:2181 \
            --store-config /data/store/logdevice.conf \
            --store-admin-host hstore --store-admin-port 6440 \
            --io-tasks-path /tmp/io/tasks \
            --io-tasks-network hstream-quickstart

      hserver1:
        image: hstreamdb/hstream:v0.9.3
        depends_on:
          - zookeeper
          - hstore
        ports:
          - "127.0.0.1:6572:6572"
        expose:
          - 6572
        networks:
          - hstream-quickstart
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - /tmp:/tmp
          - data_store:/data/store
        command:
          - bash
          - "-c"
          - |
            set -e
            /usr/local/script/wait-for-storage.sh hstore 6440 zookeeper 2181 600 \
            /usr/local/bin/hstream-server \
            --host 0.0.0.0 --port 6572 \
            --internal-port 6573 \
            --server-id 101 \
            --seed-nodes "hserver0:6571,$$(hostname -I | awk '{print $$1}'):6573" \
            --address $$(hostname -I | awk '{print $$1}') \
            --zkuri zookeeper:2181 \
            --store-config /data/store/logdevice.conf \
            --store-admin-host hstore --store-admin-port 6440 \
            --io-tasks-path /tmp/io/tasks \
            --io-tasks-network hstream-quickstart

      hserver-init:
        image: hstreamdb/hstream:v0.9.3
        depends_on:
          - hserver0
          - hserver1
        networks:
          - hstream-quickstart
        command:
          - bash
          - "-c"
          - |
            timeout=60
            until (
              /usr/local/bin/hadmin server --host hserver0 --port 6570 status &&
              /usr/local/bin/hadmin server --host hserver1 --port 6572 status
            ) >/dev/null 2>&1; do
              >&2 echo 'Waiting for servers ...'
              sleep 1
              timeout=$$((timeout - 1))
              [ $$timeout -le 0 ] && echo 'Timeout!' && exit 1;
            done;
            /usr/local/bin/hadmin server --host hserver0 init

      hstore:
        image: hstreamdb/hstream:v0.9.3
        networks:
          - hstream-quickstart
        volumes:
          - data_store:/data/store
        command:
          - bash
          - "-c"
          - |
            set -ex
            /usr/local/bin/ld-dev-cluster --root /data/store \
            --use-tcp --tcp-host $$(hostname -I | awk '{print $$1}') \
            --user-admin-port 6440 \
            --no-interactive

      zookeeper:
        image: zookeeper
        expose:
          - 2181
        networks:
          - hstream-quickstart
        volumes:
          - data_zk_data:/data
          - data_zk_datalog:/datalog

    networks:
      hstream-quickstart:
        name: hstream-quickstart

    volumes:
      data_store:
        name: quickstart_data_store
      data_zk_data:
        name: quickstart_data_zk_data
      data_zk_datalog:
        name: quickstart_data_zk_datalog

    opened by sc-aniagr 11
  • hstream client: multi-line mode for sql

    PR Description

    Type of change

    • [x] New feature
    • [x] Documentation updates required

    Summary of the change and which issue is fixed

    Main changes: multi-line mode for sql, for edit and update

          __  _________________  _________    __  ___
         / / / / ___/_  __/ __ \/ ____/   |  /  |/  /
        / /_/ /\__ \ / / / /_/ / __/ / /| | / /|_/ /
       / __  /___/ // / / _, _/ /___/ ___ |/ /  / /
      /_/ /_//____//_/ /_/ |_/_____/_/  |_/_/  /_/
      
    
    Command
      :h                           To show these help info
      :q                           To exit command line interface
      :help [sql_operation]        To show full usage of sql statement
      \{                           To enter multi-line mode
    
    SQL STATEMENTS:
      To create a simplest stream:
        CREATE STREAM stream_name;
    
      To create a query select all fields from a stream:
        SELECT * FROM stream_name EMIT CHANGES;
    
      To insert values to a stream:
        INSERT INTO stream_name (field1, field2) VALUES (1, 2);
      
    > \{
    -- Entering multi-line mode. Press <Ctrl-D> to finish.
    \ select
    \ *
    \ from
    \ s
    \ emit
    \ changes
    \ ;
    \ 
    ^CTerminated
    > 
    

    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 8
  • bug

    $ hadmin server status --host 172.16.3.179 --port 6570
    +---------+---------+-------------------+
    | node_id | state   | address           |
    +---------+---------+-------------------+
    | 1       | Running | 172.16.3.179:6570 |
    | 2       | Running | 172.16.3.181:6570 |
    | 3       | Running | 172.16.3.182:6570 |
    +---------+---------+-------------------+

    $ hadmin store status
    +----+---------+----------+-------+-----------+----------+---------+-------------+---------------+------------+---------------+
    | ID | NAME    | PACKAGE  | STATE | UPTIME    | LOCATION | SEQ.    | DATA HEALTH | STORAGE STATE | SHARD OP.  | HEALTH STATUS |
    +----+---------+----------+-------+-----------+----------+---------+-------------+---------------+------------+---------------+
    | 0  | store-0 | 99.99.99 | ALIVE | 8 min ago |          | ENABLED | HEALTHY(1)  | READ_WRITE(1) | ENABLED(1) | HEALTHY       |
    | 1  | store-1 | 99.99.99 | ALIVE | 8 min ago |          | ENABLED | HEALTHY(1)  | READ_WRITE(1) | ENABLED(1) | HEALTHY       |
    | 2  | store-2 | 99.99.99 | ALIVE | 8 min ago |          | ENABLED | HEALTHY(1)  | READ_WRITE(1) | ENABLED(1) | HEALTHY       |
    +----+---------+----------+-------+-----------+----------+---------+-------------+---------------+------------+---------------+

    show streams;
    Succeeded. No results.
    create stream demo;
    demo
    INSERT INTO demo (temperature, humidity) VALUES (22, 80);
    Failed to get any available server.

    I deployed manually with Docker according to the manual, and the cluster seems to be normal, but an error is reported when inserting data. Why?

    opened by 2779382063 7
  • [skip-ci] dev-deploy: add docker node-exporter

    PR Description

    Type of change

    • [x] New feature

    Summary of the change and which issue is fixed

    Main changes: dev-deploy: add docker node-exporter


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 6
  • [skip ci] dev-deploy: add cfg for memory and cpus

    PR Description

    Type of change

    • [x] New feature

    Main changes: add cfg for memory and cpus


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 6
  • cli: enter multi-line mode when throwing EOF

    PR Description

    Type of change

    • [x] New feature

    Summary of the change and which issue is fixed

    Main changes:

    > select
    | *
    | from
    | s
    | emit
    | changes
    | ;
    ^CTerminated
    > show streams
    | ;
    +----------------------+---------+----------------+-------------+
    | Stream Name          | Replica | Retention Time | Shard Count |
    +----------------------+---------+----------------+-------------+
    | juqlTVOFCAvGfHHMIPqE | 1       | 0sec           | 1           |
    +----------------------+---------+----------------+-------------+
    | ZSHhwSTZAiACReNLwkfQ | 1       | 0sec           | 1           |
    +----------------------+---------+----------------+-------------+
    | aKyGZDCkdeUmFIBamDtW | 1       | 0sec           | 1           |
    +----------------------+---------+----------------+-------------+
    | TgDNyMLAxEsbkKVMEuGx | 1       | 0sec           | 1           |
    +----------------------+---------+----------------+-------------+
    | DPpClhNohpBXbYYTQYjZ | 1       | 0sec           | 1           |
    +----------------------+---------+----------------+-------------+
    | IuAYjTuvkjdwEURatdcj | 1       | 0sec           | 1           |
    +----------------------+---------+----------------+-------------+
    | WHvdvbOSNAYZNRGBqhSK | 1       | 0sec           | 1           |
    +----------------------+---------+----------------+-------------+
    | s                    | 3       | 0sec           | 1           |
    +----------------------+---------+----------------+-------------+
    > show stream
    Parse exception at <line 1,column 6>: syntax error before `STREAM'.
    > 
    

    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 5
  • write test

    I deployed HStreamDB manually with Docker according to the documentation. When I ran a write test through hstream-java, the write speed was quite slow, less than 1,000 records per second. I don't know whether my test method is wrong or what the reason is. Are there any instructions or examples for stress testing?

    Test program, writing 1 million records:

    package com.evoc.hstream;

    import java.util.ArrayList;
    import java.util.Date;
    import java.util.List;
    import java.util.Random;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutionException;

    import io.hstream.BufferedProducer;
    import io.hstream.HRecord;
    import io.hstream.HStreamClient;
    import io.hstream.Producer;
    import io.hstream.Record;
    import io.hstream.RecordId;

    //@SpringBootApplication
    public class HstreamTestApplication {

        public static void main(String[] args) {
            // SpringApplication.run(HstreamTestApplication.class, args);
            try {
                final String serviceUrl = "172.16.3.179:6570,172.16.3.181:6570,172.16.3.182:6570";
                // final String serviceUrl = "172.16.3.221:6570";
                HStreamClient client = HStreamClient.builder().serviceUrl(serviceUrl).build();
                // Producer producer = client.newProducer().stream("demo").build();
                BufferedProducer producer = client.newBufferedProducer()
                        .stream("demo2")
                        .recordCountLimit(100)
                        .flushIntervalMs(10)
                        .maxBytesSize(819200)
                        .build();
                Random random = new Random();
                List<HRecord> list = new ArrayList<>();
                for (int i = 0; i < 1000000; i++) {
                    long time = new Date().getTime();
                    int id = random.nextInt(1000000);
                    HRecord hRecord = HRecord.newBuilder()
                            .put("id", id)
                            .put("tt", time)
                            .put("test", 10)
                            .build();
                    list.add(hRecord);
                }
                list.parallelStream().forEach(e -> {
                    synchronized (HstreamTestApplication.class) {
                        Record record = Record.newBuilder().hRecord(e).build();
                        // producer.write(record);
                        CompletableFuture<RecordId> future = producer.write(record);
                        try {
                            future.get().getBatchId();
                        } catch (InterruptedException e1) {
                            // TODO Auto-generated catch block
                            e1.printStackTrace();
                        } catch (ExecutionException e1) {
                            // TODO Auto-generated catch block
                            e1.printStackTrace();
                        }
                    }
                });
                producer.close();
                client.close();
            } catch (Exception e1) {
                e1.printStackTrace();
            }
        }
    }
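
    A likely contributor to the low write rate above is that every write is wrapped in a synchronized block and then blocks on future.get(), so records are effectively sent one at a time and the BufferedProducer's batching never kicks in. Purely as an illustrative sketch (reusing the client, producer, and list built in the program above, with the same API calls the reporter uses), issuing all writes without per-record blocking and waiting once at the end would look roughly like this:

    // Illustrative only: replace the parallelStream/synchronized loop above with
    // non-blocking writes, then wait for all of them once before closing.
    List<CompletableFuture<RecordId>> futures = new ArrayList<>();
    for (HRecord e : list) {
        Record record = Record.newBuilder().hRecord(e).build();
        futures.add(producer.write(record)); // returns immediately; records are batched
    }
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
    producer.close();
    client.close();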

    opened by 2779382063 5
  • feat: use https as submodule url

    Signed-off-by: Alex Chi [email protected]

    Pull Request Template

    Description

    Not everyone has an SSH key added to their local machine. In most cases, developers simply run git clone with HTTPS credentials or no credentials at all. Therefore, this PR changes the submodules to HTTPS upstreams instead of SSH ones.

    Type of change

    Please delete options that are not relevant.

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [x] This change requires a documentation update

    How Has This Been Tested?

    ~/Work/hstream/external/gRPC-haskell
    $ git remote get-url origin
    https://github.com/hstreamdb/gRPC-haskell.git
    

    Checklist:

    Must:

    • [x] I have run format.sh under script
    • [x] I have performed a self-review of my own code
    • [x] I have commented my code, particularly in hard-to-understand areas
    • [x] New and existing unit tests pass locally with my changes

    Semi-Must

    • [x] I have added new tests that prove my fix is effective or that my feature works
    • [x] I have checked my code and corrected any misspellings

    Optional:

    • [x] My code follows the style guidelines of this project
    • [x] I have made corresponding changes to the documentation
    • [x] My changes generate no new warnings
    • [x] Any dependent changes have been merged and published in downstream modules
    opened by skyzh 5
  •  (ConnectionFailure Network.Socket.connect: <socket: 12>: does not exist (Connection refused))

    When creating a stream, ctl reports an error:

    CREATE STREAM demo WITH (FORMAT = "JSON");

    HttpExceptionRequest Request { host = "localhost" port = 6570 secure = False requestHeaders = [("Content-Type","application/json; charset=utf-8")] path = "/create/query" queryString = "" method = "POST" proxy = Nothing rawBody = False redirectCount = 10 responseTimeout = ResponseTimeoutDefault requestVersion = HTTP/1.1 proxySecureMode = ProxySecureWithConnect } (ConnectionFailure Network.Socket.connect: <socket: 12>: does not exist (Connection refused))

    The HStream server reports an error: hstream-server: LOGS_SECTION_MISSING {name:LOGS_SECTION_MISSING, description:LOGS_SECTION_MISSING: Configuration file misses logs section, callstack:CallStack (from HasCallStack): throwStreamErrorIfNotOK', called at ./HStream/Store/Exception.hs:324:28 in hstream-store-0.1.0.0-00fd332cb31e900c02128ac35e4d993df94b7e2480f7781981c70e854378ff8e:HStream.Store.Exception}

    opened by freezeding521716 5
  • [skip ci] dev-deploy: upload store conf

    PR Description

    Type of change

    • [x] New feature

    Summary of the change and which issue is fixed

    Main changes: upload store configuration to remote host


    Checklist

    • I have run format.sh under script
    • I have commented my code, particularly in hard-to-understand areas
    • New and existing unit tests pass locally with my changes
    opened by alissa-tung 4
  • hstreamdb/hstream:v0.6.0 deploy  failure

    CentOS7, kernel 5.4.163-1.el7.elrepo.x86_64, Docker 20.10

    Deploy a LogDevice cluster: an ld-admin-server and three logdeviced instances

    • zookeeper and ld-admin-server in 10.0.0.2
    • the 1st logdeviced in 10.0.0.3
    • the 2nd logdeviced in 10.0.0.4
    • the 3rd logdeviced in 10.0.0.5

    1. run a zookeeper in 10.0.0.2
        docker run \
            -e 'ZOO_CONF_DIR=/conf' \
            -e 'ZOO_DATA_DIR=/data' \
            -e 'ZOO_DATA_LOG_DIR=/datalog' \
            -e 'ZOO_LOG_DIR=/logs' \
            -e 'ZOO_TICK_TIME=3000' \
            -e 'ZOO_INIT_LIMIT=5' \
            -e 'ZOO_SYNC_LIMIT=2' \
            -e 'ZOO_AUTOPURGE_PURGEINTERVAL=0' \
            -e 'ZOO_AUTOPURGE_SNAPRETAINCOUNT=3' \
            -e 'ZOO_MAX_CLIENT_CNXNS=60' \
            -e 'ZOO_STANDALONE_ENABLED=true' \
            -e 'ZOO_ADMINSERVER_ENABLED=true' \
            -v /etc/localtime:/etc/localtime:ro \
            -p 2181:2181 \
            --restart always \
            --name zookeeper \
            -d zookeeper:3.5.6
    
    2. run ld-admin-server in 10.0.0.2
        mkdir -p $HOME/logdevice_conf
        docker run -d --network host --name logdevice_admin \
            -v $HOME/logdevice_conf:/etc/logdevice/ \
            hstreamdb/hstream:v0.6.0 ld-admin-server \
            --config-path /etc/logdevice/logdevice.conf \
            --admin-port 6440 \
            --enable-maintenance-manager \
            --enable-safety-check-periodic-metadata-update \
            --maintenance-log-snapshotting
    

    3. run the 1st logdeviced in 10.0.0.3

        mkdir -p $HOME/logdeviced/{conf,store}
        mkdir -p $HOME/logdeviced/store/shard0
        echo 1 | tee $HOME/logdeviced/store/NSHARDS
        docker run -d --network host --name logdeviced \
            -v $HOME/logdeviced/conf:/etc/logdevice/ \
            -v $HOME/logdeviced/store:/data/logdevice/ \
            hstreamdb/hstream:v0.6.0 logdeviced \
            --config-path /etc/logdevice/logdevice.conf \
            --name server-0 \
            --address 10.0.0.3 \
            --local-log-store-path /data/logdevice \
            --roles storage,sequencer \
            --port 4440 \
            --gossip-port 4441 \
            --admin-port 6440 \
            --num-shards 1
    

    4. run the 2nd logdeviced in 10.0.0.4

        mkdir -p $HOME/logdeviced/{conf,store}
        mkdir -p $HOME/logdeviced/store/shard0
        echo 1 | tee $HOME/logdeviced/store/NSHARDS
        docker run -d --network host --name logdeviced \
            -v $HOME/logdeviced/conf:/etc/logdevice/ \
            -v $HOME/logdeviced/store:/data/logdevice/ \
            hstreamdb/hstream:v0.6.0 logdeviced \
            --config-path /etc/logdevice/logdevice.conf \
            --name server-1 \
            --address 10.0.0.4 \
            --local-log-store-path /data/logdevice \
            --roles storage,sequencer \
            --port 4440 \
            --gossip-port 4441 \
            --admin-port 6440 \
            --num-shards 1
    

    5. run the 3rd logdeviced in 10.0.0.5

        mkdir -p $HOME/logdeviced/{conf,store}
        mkdir -p $HOME/logdeviced/store/shard0
        echo 1 | tee $HOME/logdeviced/store/NSHARDS
        docker run -d --network host --name logdeviced \
            -v $HOME/logdeviced/conf:/etc/logdevice/ \
            -v $HOME/logdeviced/store:/data/logdevice/ \
            hstreamdb/hstream:v0.6.0 logdeviced \
            --config-path /etc/logdevice/logdevice.conf \
            --name server-2 \
            --address 10.0.0.5 \
            --local-log-store-path /data/logdevice \
            --roles storage,sequencer \
            --port 4440 \
            --gossip-port 4441 \
            --admin-port 6440 \
            --num-shards 1
    

    6. logdevice.conf
    {
        "cluster": "logdevice-first",
        "server_settings": {
            "enable-node-self-registration": "true",
            "enable-nodes-configuration-manager": "true",
            "enable-cluster-maintenance-state-machine": "true",
            "use-nodes-configuration-manager-nodes-configuration": "true"
        },
        "client_settings": {
            "enable-nodes-configuration-manager": "true",
            "use-nodes-configuration-manager-nodes-configuration": "true",
            "admin-client-capabilities": "true"
        },
        "internal_logs": {
            "config_log_deltas": {
                "replicate_across": {
                    "node": 3
                }
            },
            "config_log_snapshots": {
                "replicate_across": {
                    "node": 3
                }
            },
            "event_log_deltas": {
                "replicate_across": {
                    "node": 3
                }
            },
            "event_log_snapshots": {
                "replicate_across": {
                    "node": 3
                }
            },
            "maintenance_log_deltas": {
                "replicate_across": {
                    "node": 3
                }
            },
            "maintenance_log_snapshots": {
                "replicate_across": {
                    "node": 3
                }
            }
        },
        "metadata_logs": {
            "nodeset": [
                0,1,2
            ],
            "replicate_across": {
              "node": 3
            }
        },
        "zookeeper": {
            "zookeeper_uri": "ip://10.0.0.2:2181",
            "timeout": "30s"
        }
    }
    

    7. bootstrap the LogDevice cluster
    docker run -it --rm \
        --name nodes-config \
        --network host \
        hstreamdb/hstream:v0.6.0 \
        hadmin --host 10.0.0.2 --port 6440 nodes-config bootstrap --metadata-replicate-across node:3 
    
    Successfully bootstrapped the cluster, new nodes configuration version: 7
    Took 0.019s
    

    Issues #664 and #663 use the same deployment method. The three logdeviced nodes cannot do node self-registration and the sequencer cannot be found. Similar messages: Could not send gossip to WF0:N1:2 (10.0.0.4:4441): CONNFAILED: connection failed. Trying another node. No available sequencer node for log 4611686018427387899. All sequencer nodes are unavailable. Error during sequencer lookup for log 4611686018427387899 (NOSEQUENCER), reporting NOSEQUENCER.

    opened by DerekSun96 4