GridDB is a next-generation open source database that makes time series IoT and big data fast and easy.

GridDB

Overview

GridDB is a database for IoT, with both a NoSQL interface and an SQL interface.

Please refer to the GridDB Features Reference for details on functionality.

This repository includes the server and the Java client. The jdbc repository includes the JDBC driver.

Quick start (Using source code)

We have confirmed operation on CentOS 7.6 (gcc 4.8.5), Ubuntu 18.04 (gcc 4.8.5), and openSUSE Leap 15.1 (gcc 4.8.5).

Note: Please install tcl in advance, e.g. with "yum install tcl.x86_64".

Build a server and client (Java)

$ ./bootstrap.sh
$ ./configure
$ make

Note: To build the Java client with Maven, run the following commands. The gridstore-X.X.X.jar file is then created under target/.

$ cd java_client
$ ./make_source_for_mvn.sh
$ mvn clean
$ mvn install

Start a server

$ export GS_HOME=$PWD
$ export GS_LOG=$PWD/log
$ export PATH=${PATH}:$GS_HOME/bin

$ bin/gs_passwd admin
  #input your_password
$ vi conf/gs_cluster.json
  #    "clusterName":"your_clustername" #<-- input your_clustername

$ bin/gs_startnode
$ bin/gs_joincluster -c your_clustername -u admin/your_password
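
For reference, the "clusterName" entry sits inside the cluster block of conf/gs_cluster.json. The excerpt below is a minimal illustration; apart from clusterName, the values simply mirror the default configuration quoted in the issue reports further down this page, and the other blocks (dataStore, sync, transaction, sql) are left as shipped.

{
    "cluster":{
        "clusterName":"your_clustername",
        "replicationNum":2,
        "notificationAddress":"239.0.0.1",
        "notificationPort":20000
    }
}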

Execute a sample program

$ export CLASSPATH=${CLASSPATH}:$GS_HOME/bin/gridstore.jar
$ mkdir gsSample
$ cp $GS_HOME/docs/sample/program/Sample1.java gsSample/.
$ javac gsSample/Sample1.java
$ java gsSample/Sample1 239.0.0.1 31999 your_clustername admin your_password
  --> Person:  name=name02 status=false count=2 lob=[65, 66, 67, 68, 69, 70, 71, 72, 73, 74]
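
Sample1 connects to the cluster over multicast, writes a Person row, and reads it back. The sketch below is pieced together from the client calls quoted in the issue reports later on this page; the Person field layout and the collection name "col01" are assumptions, so treat it as an illustration rather than the bundled Sample1 source.

import java.util.Properties;
import com.toshiba.mwcloud.gs.Collection;
import com.toshiba.mwcloud.gs.GridStore;
import com.toshiba.mwcloud.gs.GridStoreFactory;
import com.toshiba.mwcloud.gs.RowKey;

public class SampleSketch {
    // Row class mapped onto the collection (field layout assumed).
    static class Person {
        @RowKey String name;
        boolean status;
        long count;
        byte[] lob;
    }

    public static void main(String[] args) throws Exception {
        // Multicast connection settings, passed on the command line as in the
        // quick start: notificationAddress notificationPort clusterName user password
        Properties props = new Properties();
        props.setProperty("notificationAddress", args[0]); // e.g. 239.0.0.1
        props.setProperty("notificationPort", args[1]);    // e.g. 31999
        props.setProperty("clusterName", args[2]);
        props.setProperty("user", args[3]);
        props.setProperty("password", args[4]);
        GridStore store = GridStoreFactory.getInstance().getGridStore(props);

        // Create (or reuse) a collection bound to the Person class and put one row.
        Collection<String, Person> col = store.putCollection("col01", Person.class);
        Person person = new Person();
        person.name = "name02";
        person.status = false;
        person.count = 2;
        person.lob = new byte[] { 65, 66, 67, 68, 69, 70, 71, 72, 73, 74 };
        col.put(person);

        // Read the row back by its row key and print it.
        Person stored = col.get("name02");
        System.out.println("Person: name=" + stored.name
                + " status=" + stored.status + " count=" + stored.count);
        store.close();
    }
}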

Stop a server

$ bin/gs_stopcluster -u admin/your_password
$ bin/gs_stopnode -u admin/your_password

Quick start (Using RPM or DEB)

We have confirmed operation on CentOS 7.8/8.1, Ubuntu 18.04, and openSUSE Leap 15.1.

Note:

  • When you install this package, a gsadm OS user is created.
    Execute the operating commands as the gsadm user.
  • You don't need to set the environment variables GS_HOME and GS_LOG.
  • The Java client library (gridstore.jar) is located in /usr/share/java, and a sample is located in /usr/griddb-XXX/docs/sample/programs.
  • The packages don't include the trigger function.
  • Please install Python 2 in advance, except on CentOS 7.

Install

(CentOS)
$ sudo rpm -ivh griddb-X.X.X-linux.x86_64.rpm

(Ubuntu)
$ sudo dpkg -i griddb_X.X.X_amd64.deb

(openSUSE)
$ sudo rpm -ivh griddb-X.X.X-opensuse.x86_64.rpm

Note: X.X.X is the GridDB version.

Start a server

[gsadm]$ gs_passwd admin
  #input your_password
[gsadm]$ vi conf/gs_cluster.json
  #    "clusterName":"your_clustername" #<-- input your_clustername
[gsadm]$ gs_startnode
[gsadm]$ gs_joincluster -c your_clustername -u admin/your_password

Execute a sample program

$ export CLASSPATH=${CLASSPATH}:/usr/share/java/gridstore.jar
$ mkdir gsSample
$ cp /usr/griddb-X.X.X/docs/sample/program/Sample1.java gsSample/.
$ javac gsSample/Sample1.java
$ java gsSample/Sample1 239.0.0.1 31999 your_clustername admin your_password
  --> Person:  name=name02 status=false count=2 lob=[65, 66, 67, 68, 69, 70, 71, 72, 73, 74]

Stop a server

[gsadm]$ gs_stopcluster -u admin/your_password
[gsadm]$ gs_stopnode -u admin/your_password

If necessary, please refer to Installation Troubleshooting.

Document

Refer to the file below for more detailed information.

Client and Connector

There are other clients and APIs for GridDB.

(NoSQL Interface)

(SQL Interface)

(NoSQL & SQL Interface)

There are also connectors for other OSS products.

Packages

Community

  • Issues
    Use the GitHub issue function if you have any requests, questions, or bug reports.
  • PullRequest
    Use the GitHub pull request function if you want to contribute code. You'll need to agree to the GridDB Contributor License Agreement (CLA_rev1.1.pdf). By using the GitHub pull request function, you are deemed to have agreed to the GridDB Contributor License Agreement.

License

The server source code is licensed under the GNU Affero General Public License (AGPL), while the Java client library and the operational commands are licensed under the Apache License, version 2.0. See 3rd_party/3rd_party.md for the sources and licenses of third-party software.

Comments
  • gs_startnode -releaseUnusedFileBlocks

    Describe the bug

    I have been changing (lowering) the expiration time on a bunch of collections (copying the data over to new time series collections).

    This resulted in a much smaller data store, but the size of the checkpoint file did not decrease.

    I started the node with the -releaseUnusedFileBlocks option, which decreased the database size drastically. But over a day's operation the checkpoint file has now doubled in size compared to before. After issuing the release start option once again, the size is now four times the original size from when no data had been expired. Auto expire is enabled in the config.

    Can anyone explain how to correctly reduce the size of the database and keep it at a decent size? I fear that after a year of operation, disk consumption will be ridiculous... and as of now I have no way to shrink it other than starting fresh with an empty db...

    To Reproduce Steps to reproduce the behavior:

    1. Start with a database in normal operation
    2. Recreate the collections with a 1-day expiration time (new name) and copy the data from the old collections
    3. Drop the old collections
    4. Now, with collections that have only a 1-day expiration time, the store is pretty small.
    5. Start gs_node with the option -releaseUnusedFileBlocks
    6. After a few hours the checkpoint file is back at the same size or even larger than before, with still only 1 day of data in the store.

    Expected behavior A "non-increasing" checkpoint file size.

    Additional context During the day of normal operation when the file grew, there were no major changes to the database apart from roughly 100 values per minute inserted into the db.

    The store is currently about 100 MByte. The checkpoint file grew to about 2 GByte the second time I tried releasing unused blocks. That's 20 times larger than the data store.

    opened by nordlings 8
  • GridDB v4.5 on CentOS8 installation failure

    OS: CentOS Linux release 8.2.2004

    Installation of GridDB on CentOS 8 fails. The error message seems to say that /usr/bin/python is missing. Please advise.

    $ sudo rpm -ivh --test https://github.com/griddb/griddb/releases/download/v4.5.0/griddb-4.5.0-1.linux.x86_64.rpm
    Retrieving https://github.com/griddb/griddb/releases/download/v4.5.0/griddb-4.5.0-1.linux.x86_64.rpm
    error: Failed dependencies:
            /usr/bin/python is needed by griddb-4.5.0-1.linux.x86_64


    However, Python is installed, version 3.6.8:

    $ /usr/bin/python --version
    Python 3.6.8


    Thanks.

    opened by tsekino62 8
  • New Logo/Icon Proposal

    Good day, sir. I am a graphic designer and I am interested in designing a logo for your good project. I will do it as a gift, for free. I just need your permission before I begin my design. Hoping for your positive feedback. Thanks

    opened by mansya 8
  • GSConnectionException: [145029:JC_CONNECTION_TIMEOUT] Connection timed out on receiving

    I have installed version 5.0 on a server running Ubuntu 18.04. The firewall has been disabled. The following command works when executed on the server: java gsSample/Sample1 239.0.0.1 31999 myCluster admin admin. But when I try to connect from another computer, a connection timeout error occurs.

    Properties props = new Properties();
    props.setProperty("notificationAddress", "239.0.0.1");
    props.setProperty("notificationPort", "31999");
    props.setProperty("clusterName", "myCluster");
    props.setProperty("user", "admin");
    props.setProperty("password", "admin");
    GridStore store = GridStoreFactory.getInstance().getGridStore(props);
    Collection<String, Person> col = store.putCollection("col01", Person.class);
    System.out.println("success!");
    Person person = new Person();
    person.setName("name011212");
    person.setStatus(false);
    person.setCount(1);
    person.setLob(new byte[] { 65, 66, 67, 68, 69, 70, 71, 72, 73, 74 });
    boolean update = true;
    col.put(person);
    col.commit();
    System.out.println("success!");
    

    The bug occurred in

    Collection<String, Person> col = store.putCollection("col01", Person.class);
    

    gs_cluster.json:

    {
        "dataStore":{
            "partitionNum":128,
            "storeBlockSize":"64KB"
        },
        "cluster":{
            "clusterName":"myCluster",
            "replicationNum":2,
            "notificationAddress":"239.0.0.1",
            "notificationPort":20000,
            "notificationInterval":"5s",
            "heartbeatInterval":"5s",
            "loadbalanceCheckInterval":"180s"
        },
        "sync":{
            "timeoutInterval":"30s"
        },
        "transaction":{
            "notificationAddress":"239.0.0.1",
            "notificationPort":31999,
            "notificationInterval":"5s",
            "replicationMode":0,
            "replicationTimeoutInterval":"10s"
        },
        "sql":{
            "notificationAddress":"239.0.0.1",
            "notificationPort":41999,
            "notificationInterval":"5s"
        }
    }

    opened by tom055 7
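
    A possible direction, not taken from the issue thread: multicast traffic to 239.0.0.1 normally does not leave the server's own subnet, so a remote client often cannot reach the cluster through notificationAddress/notificationPort at all. If the cluster is reconfigured for fixed-list mode (a notificationMember list in gs_cluster.json, as in the FIXED_LIST issue below), the client can name the node address directly via the notificationMember property. A minimal sketch, using a placeholder address borrowed from logs elsewhere on this page:

    import java.util.Properties;
    import com.toshiba.mwcloud.gs.GridStore;
    import com.toshiba.mwcloud.gs.GridStoreFactory;

    public class FixedListConnect {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Transaction address:port of each node (placeholder value); this
            // requires the cluster itself to be configured with notificationMember
            // instead of the multicast notificationAddress settings.
            props.setProperty("notificationMember", "192.168.1.134:10001");
            props.setProperty("clusterName", "myCluster");
            props.setProperty("user", "admin");
            props.setProperty("password", "admin");
            GridStore store = GridStoreFactory.getInstance().getGridStore(props);
            System.out.println("connected");
            store.close();
        }
    }
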
  • Error when starting node

    I am trying a FIXED_LIST configuration, and every time I run gs_startnode I get the following error:

    2020-05-18T21:20:47.812Z ip-172-31-47-124.eu-west-1.compute.internal 1070 INFO SYSTEM_SERVICE [50900:SC_EVENT_LOG_STARTED] GridDB version 4.3.0 build 36424 Community Edition
    2020-05-18T21:20:47.812Z ip-172-31-47-124.eu-west-1.compute.internal 1070 WARNING CLUSTER_OPERATION [40904:CS_OPERATION] Recommended specify fixed list service address
    2020-05-18T21:20:47.819Z ip-172-31-47-124.eu-west-1.compute.internal 1070 ERROR MAIN [40048:CS_CONFIG_ERROR] Failed to check notification member. Duplicate member is included in this list <VVZn06FBL9ALCSrbplsh3hoQZJqhQXWfWxpi07oFK5pbFWPcrRI3ikNZaMvofHjADxxn97BMZMMPEGXc6AEu0BcMecatXV7AHgt826tKL9ALCSrEp0ZlkzgVf8G8SnPgHgt826tKO4kSF2PGoU5t2gEcIv+pQWDUHgtZ17wJKJMXEGTX9RwygFsiPoL4GzmJOCpV8YdhR/o8Jk/gmmBT7ls/a9ukSmWTDxYq0aBKYthbF2XGoUlo0BoNY92mD2zWFhtvwOYPRcYLFWPRqVtkkxYcZ9CtXSHaCFlj3KtDdNceHSrbpg912xIKKt6hXHWTGQAq57tKc/YDGm/CvEZu3VtXJdGkWnLHHgtVwa1dd9oYHCTRuF8hxRQQbpKLQ3TADxx44a1dd9oYHDCIoUFoxxIYZtuySin+Ghdr1a1dUtYPXyOSpEZv1kZKOITodDWDS00yiIt8XvA0N0z7j3BE4Sk2WO/oaWDaFxxukrxAIdATHGnZ6EFuxxIfY9GpW2jcFVln16VNZMFVWU7HuENo0BoNb5KlSmzRHgsq27sPaN0YFX/WrUsh2hVZftqhXCHfEgp+>
    

    My /var/lib/gridstore/conf/gs_cluster.json is:

    {
            "dataStore":{
                    "partitionNum":128,
                    "storeBlockSize":"64KB"
            },
            "cluster":{
                    "clusterName":"defaultCluster",
                    "replicationNum":2,
                    "notificationInterval":"5s",
                    "heartbeatInterval":"5s",
                    "loadbalanceCheckInterval":"180s",
                    "notificationMember": [
                            {
                                    "cluster": {"address":"x.x.x.x", "port":10010},
                                    "sync": {"address":"x.x.x.x", "port":10020},
                                    "system": {"address":"x.x.x.x", "port":10080},
                                    "transaction": {"address":"x.x.x.x", "port":10001}
                            }
                    ]
            },
            "sync":{
                    "timeoutInterval":"30s"
            }
    }
    
    opened by jack-burridge-tp 6
  • No Go Client

    It really needs one. I don't have the time for that, though. Also, gccgo probably inflicts a bit too much of a performance penalty for wrapping the C library to be really useful, but it might be a starting point.

    opened by sandrom 6
  • Dockerfiles and Jenkinsfile for Continuous Build.

    I made two Dockerfiles and a Jenkinsfile for the Continuous Build.

    First, build the two Docker images from the Dockerfiles:

    $ cd docker_for_dev/centos6
    $ docker build -t centos:6.8_dev .
    $ cd docker_for_dev/centos7
    $ docker build -t centos:7.2.1511_dev .
    

    Next, install Jenkins with the Pipeline plugin and the Docker Pipeline plugin, and create/configure a Jenkins Pipeline job that uses the git repository with this Jenkinsfile.

    Then start the Jenkins Pipeline job with any trigger.

    (screenshot: Jenkins Pipeline Stage View)

    opened by nobusugi246 6
  • Problem with gym surcharges

    Good morning. We have just noticed that for many products that carry surcharges, the gym account does not display them, and purchases made from gym accounts end up at the standard price without the various surcharges. This is the link to one of these products: https://www.fightclubstore.com/karategi-bushido-nero.html Urgent! Thank you

    opened by fcs2001 4
  • Grecimar Rodriguez, section 11, Java project

    import java.io.IOException;
    import java.util.Scanner;

    public class Main {

        public static void main(String[] args) throws IOException {
            Scanner sc = new Scanner(System.in);
            String texto;
            char caracter;
            int numeroDeVeces = 0;
            do {
                System.out.println("Introduce texto: ");
                texto = sc.nextLine();
            } while (texto.isEmpty());
            System.out.print("Introduce un carácter: ");
            caracter = (char) System.in.read();
            numeroDeVeces = contarCaracteres(texto, caracter);
            System.out.println("El caracter " + caracter + " aparece " + numeroDeVeces + " veces");
        }

        // count how many times a character appears in a String
        public static int contarCaracteres(String cadena, char caracter) {
            int posicion, contador = 0;
            // find the first occurrence
            posicion = cadena.indexOf(caracter);
            while (posicion != -1) { // while the character keeps being found
                contador++;          // count it
                // keep searching from the position after the one just found
                posicion = cadena.indexOf(caracter, posicion + 1);
            }
            return contador;
        }
    }

    opened by Roro31389324 3
  • WHERE conditions are not working

    Describe the bug The issue happens with queries having multiple ANDed WHERE conditions. If I execute the two queries "SELECT * WHERE A AND B" and "SELECT * WHERE B AND A", I do not get the same results. This happens under v4.5.1 for both the GridDB server and the c_client library. (It is the same for other versions > 4.2.1.) However, when I execute the same queries under GridDB 4.2.1 (+ v4.2.0 c_client library), I get the expected output.

    To Reproduce We can reproduce this issue using the program below.

    #include "gridstore.h"
    #include 
    #include 
    
    typedef struct {
            int c1;
    } Test;
    
    GS_STRUCT_BINDING(Test, GS_STRUCT_BINDING_KEY(c1, GS_TYPE_INTEGER));
    
    static void execute(GSCollection *col, char *tql)
    {
            GSQuery *query; GSRowSet *rs; GSResult ret; Test test;
            ret = gsQuery(col, tql, &query); if (!GS_SUCCEEDED(ret)) exit(-1);
            ret = gsFetch(query, GS_TRUE, &rs); if (!GS_SUCCEEDED(ret)) exit(-1);
            printf("query: %s\n", tql);
            while (gsHasNextRow(rs)) {
                    gsGetNextRow(rs, &test);
                    ret = gsUpdateCurrentRow(rs, &test); if (!GS_SUCCEEDED(ret)) exit(-1);
                    printf(" c1=%d\n", test.c1);
            }
    }
    
    void main(void)
    {
            GSGridStore *store; GSCollection *col; GSResult ret; Test test; int i;
            const GSPropertyEntry props[] = { { "notificationAddress", "239.0.0.1" },
                                              { "notificationPort", "31999" },
                                              { "clusterName", "griddbfdwTestCluster" },
                                              { "user", "admin" },
                                              { "password", "testadmin" } };
            const size_t propCount = sizeof(props) / sizeof(*props);
    
            gsGetGridStore(gsGetDefaultFactory(), props, propCount, &store);
            gsPutCollection(store, "test01", GS_GET_STRUCT_BINDING(Test), NULL, GS_FALSE, &col);
            gsSetAutoCommit(col, GS_FALSE);
            for (i=0; i <10; i++) {
              test.c1 = i; gsPutRow(col, NULL, &test, NULL);
            }
            gsCommit(col);
            execute(col, "SELECT * WHERE (c1 > 8) AND (c1 > 1)");
            execute(col, "SELECT * WHERE (c1 > 1) AND (c1 > 8)");
            gsCommit(col);
            gsCloseGridStore(&store, GS_TRUE);
    }
    

    In 4.5.1 I got the following results.

    query: SELECT * WHERE (c1 > 8) AND (c1 > 1)
     c1=2
     c1=3
     c1=4
     c1=5
     c1=6
     c1=7
     c1=8
     c1=9
    query: SELECT * WHERE (c1 > 1) AND (c1 > 8)
     c1=9
    

    Expected behavior The result should be below.

    query: SELECT * WHERE (c1 > 8) AND (c1 > 1)
     c1=9
    query: SELECT * WHERE (c1 > 1) AND (c1 > 8)
     c1=9
    
    opened by hrkuma 3
  • Visual Process List

    It doesn't scroll to show all the processes. It would also be good to make the rows a bit smaller so more can be seen. Nine are visible, and the items with the most have around 15 or 20. (image attached)

    opened by mbustaadmin 3
  • Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=10000)

    After a successful connection, queries stop working after a period of time. The program blocks on the query, and the log shows the following:

    2022-12-30 11:02:04.134  INFO 27220 --- [ctor-http-nio-7] o.j.c.g.s.s.ReactiveGridSearchService    : select * where (name='message-count') AND count > 0 AND  timestamp < TIMESTAMP('2022-12-30T11:02:01.567+0800') AND timestamp > TIMESTAMP('2022-12-29T11:02:01.567+0800') order by timestamp
    2022-12-30 11:02:05.266  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=10000)
    2022-12-30 11:02:15.270  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=20004)
    2022-12-30 11:02:25.272  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=30006)
    2022-12-30 11:02:35.275  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=40009)
    2022-12-30 11:02:45.278  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=50012)
    2022-12-30 11:02:55.280  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=60014)
    2022-12-30 11:03:05.282  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=70016)
    2022-12-30 11:03:15.304  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=80038)
    2022-12-30 11:03:25.307  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=90041)
    2022-12-30 11:03:35.309  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=100043)
    2022-12-30 11:03:45.312  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=110046)
    2022-12-30 11:03:55.314  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=120048)
    2022-12-30 11:04:05.316  INFO 27220 --- [ics-publisher-3] c.t.m.gs.GridStoreLogger.Heartbeat       : Sending heartbeat (statement=CREATE_SESSION, address=/192.168.1.134:10001, partition=7, statementId=15, elapsedMillis=130050)
    
    
    
    opened by tom055 1
  • How about the average calculation in the query interval

    I want to get the average value of the data in each interval. How do I write that query? How can TIME_SAMPLING and TIME_AVG be combined in one query? Thank you very much. (image attached)

    opened by tom055 2
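
    One way to approach the question above, sketched here rather than taken from the thread: run one TQL aggregation per time bucket and let TIME_AVG produce the (time-weighted) average of the rows in that bucket. The container name "lctime", the column names "timestamp" and "value", and the connection settings below are assumptions borrowed from other issues on this page.

    import java.time.Instant;
    import java.util.Properties;
    import com.toshiba.mwcloud.gs.AggregationResult;
    import com.toshiba.mwcloud.gs.GridStore;
    import com.toshiba.mwcloud.gs.GridStoreFactory;
    import com.toshiba.mwcloud.gs.Query;
    import com.toshiba.mwcloud.gs.Row;
    import com.toshiba.mwcloud.gs.RowSet;
    import com.toshiba.mwcloud.gs.TimeSeries;

    public class IntervalAverage {
        public static void main(String[] args) throws Exception {
            // Connection settings copied from the issues on this page (placeholders).
            Properties props = new Properties();
            props.setProperty("notificationAddress", "239.0.0.1");
            props.setProperty("notificationPort", "31999");
            props.setProperty("clusterName", "myCluster");
            props.setProperty("user", "admin");
            props.setProperty("password", "admin");
            GridStore store = GridStoreFactory.getInstance().getGridStore(props);

            // Assumed container: an existing time series "lctime" with a TIMESTAMP
            // "timestamp" row key and a FLOAT "value" column.
            TimeSeries<Row> ts = store.getTimeSeries("lctime");

            long intervalMs = 60L * 60L * 1000L;   // one-hour buckets
            long end = System.currentTimeMillis();
            long start = end - 24L * intervalMs;   // last 24 hours
            for (long t = start; t < end; t += intervalMs) {
                // TIME_AVG(value) aggregates the rows whose timestamps fall
                // inside the current bucket [t, t + interval).
                String tql = "SELECT TIME_AVG(value)"
                        + " WHERE timestamp >= TIMESTAMP('" + Instant.ofEpochMilli(t) + "')"
                        + " AND timestamp < TIMESTAMP('" + Instant.ofEpochMilli(t + intervalMs) + "')";
                Query<AggregationResult> query = ts.query(tql, AggregationResult.class);
                RowSet<AggregationResult> rs = query.fetch();
                if (rs.hasNext()) {
                    AggregationResult result = rs.next();
                    System.out.println(Instant.ofEpochMilli(t) + " avg=" + result.getDouble());
                }
            }
            store.close();
        }
    }
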
  • com.toshiba.mwcloud.gs.common.GSStatementException: [1007:CM_NOT_SUPPORTED] not support Row Expiration

    An error occurred when I tried to create a time series with row expiration:

        ContainerInfo containerInfo = new ContainerInfo();
        List<ColumnInfo> columnList = new ArrayList<ColumnInfo>();
        columnList.add(new ColumnInfo("timestamp", GSType.TIMESTAMP));
        columnList.add(new ColumnInfo("value", GSType.FLOAT));
        containerInfo.setColumnInfoList(columnList);
        containerInfo.setRowKeyAssigned(true);
    
        TimeSeriesProperties tsProp = new TimeSeriesProperties();
        tsProp.setRowExpiration(1, TimeUnit.MINUTE);
        tsProp.setExpirationDivisionCount(5);
        containerInfo.setTimeSeriesProperties(tsProp);
    
        Properties props = new Properties();
        props.setProperty("host", "192.168.1.134");
        props.setProperty("port", "10001");
        props.setProperty("clusterName", "myCluster");
        props.setProperty("user", "admin");
        props.setProperty("password", "admin");
        props.setProperty("database","public");
        props.setProperty("transactionTimeout","5000");
        props.setProperty("failoverTimeout","5000");
        GridStore store = GridStoreFactory.getInstance().getGridStore(props);
        TimeSeries<Row> lctime1 = store.putTimeSeries("lctime", containerInfo, false);
        System.out.println("create");
        store.close();
    

    Exception in thread "main" com.toshiba.mwcloud.gs.common.GSStatementException: [1007:CM_NOT_SUPPORTED] not support Row Expiration (address=192.168.1.134:10001, partitionId=10)

    Location where the error occurred

    TimeSeries<Row> lctime1 = store.putTimeSeries("lctime", containerInfo, false);
    

    How can I solve this problem? Thank you

    opened by tom055 1
  • Default value for gs_joincluster clusterName is defaultCluster whereas the sample app and running as a service use myCluster

    https://github.com/griddb/griddb/blob/5ca6c4015f8aab3cd604ab3e24822321d36e43e4/bin/gs_joincluster#L22-L31

    Issue When running GridDB manually using gs_startnode, gs_joincluster, etc., I kept having issues with incorrect usage of the cluster name. I fixed gs_cluster.json in both /var/lib/gridstore/conf and /usr/griddb/conf/, but the issue persisted until it was found that the default cluster name used by gs_joincluster is not the common cluster name (myCluster) but defaultCluster.

    Proposed solution The default clusterName should be unified to either myCluster or defaultCluster to avoid similar issues for other users.

    opened by AnggaSuherman 0
  • Document upgrade and downgrade processes

    Hi.

    After reading the documentation it is not clear to me how upgrade and downgrade processes should be performed on the database. Could you please describe somewhere in the documentation:

    • How to upgrade the database to the newer version (hopefully with no downtime)
    • How to downgrade the database to the older version (hopefully with no downtime)
    • Maybe some compatibility policies and notes between versions

    Thanks in advance!

    opened by zamazan4ik 2
  • Document supported hardware architectures

    Hi!

    Could you please add information about supported hardware architectures to the documentation? E.g. the supported architectures for different operating systems, and any specific instruction-set requirements, if there are any (e.g. maybe AVX is required; I do not know).

    This kind of information is important for the end-users.

    Thanks in advance!

    opened by zamazan4ik 1