An Embedded NoSQL, Transactional Database Engine

Overview

UnQLite - Transactional Embedded Database Engine https://unqlite.org


Release 1.1.9 (April 2018): Fixed a memory leak in unqlite_commit() that caused data loss under some circumstances.

As of January 2018, Symisc Systems has decided to revive the UnQLite project. All known data-corruption bugs have been fixed, and expect to see new features (LZ compression), performance improvements, etc., pushed here. For your production builds, you should rely on the amalgamation file and its header file, available here or for download directly from https://unqlite.org/downloads.html

UnQLite is an in-process software library that implements a self-contained, serverless, zero-configuration, transactional NoSQL database engine. UnQLite is a document-store database similar to MongoDB, Redis, CouchDB, etc., as well as a standard key/value store similar to BerkeleyDB, LevelDB, etc.

UnQLite is an embedded NoSQL (key/value store and document store) database engine. Unlike most other NoSQL databases, UnQLite does not have a separate server process. UnQLite reads and writes directly to ordinary disk files. A complete database with multiple collections is contained in a single disk file. The database file format is cross-platform: you can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures. UnQLite features include:

Serverless NoSQL database engine.
Transactional (ACID) database.
Zero configuration.
Single database file; no temporary files are used.
Cross-platform file format.
Self-contained C library without dependencies.
Standard key/value store.
Document store (JSON) database via Jx9.
Cursor support for linear record traversal.
Pluggable, run-time interchangeable storage engines.
Support for on-disk as well as in-memory databases.
Built with a powerful disk storage engine that supports O(1) lookup.
Thread-safe and fully reentrant.
Simple, clean, and easy-to-use API.
Support for terabyte-sized databases.
BSD-licensed product.
Amalgamation: all C source code for UnQLite and Jx9 is combined into a single source file.

UnQLite is a self-contained C library without dependencies. It requires minimal support from external libraries or from the operating system. This makes it well suited for use in embedded devices that lack the support infrastructure of a desktop computer. It also makes UnQLite appropriate for use within applications that need to run without modification on a wide variety of computers of varying configurations.

UnQLite is written in ANSI C, is thread-safe and fully reentrant, compiles unmodified, and should run on most platforms, including restricted embedded devices with a C compiler. UnQLite is extensively tested on Windows and UNIX systems, especially Linux, FreeBSD, Oracle Solaris, and Mac OS X.


Comments
  • Encounter problem with memory copy for record delete.

    I ported UnQLite to an ARM M4 chip. I am able to use a Jx9 script to add new records and fetch them by ID, but I can't drop a record: the firmware crashes every time. It crashes when it tries to copy some data with the macro below in the syBlobAppend function.

    Here is the header of the macro: #define SX_MACRO_FAST_MEMCPY(SRC, DST, SIZ) {...}

    UnQLite works well on Ubuntu Linux, so I am not sure what I may have done wrong in porting it to the ARM chip. I set the page size to 512 bytes instead of 4096 bytes.

    I printed the pointer addresses and the data size at the point where the firmware crashed:

    pData: 3140        (3140,       0xc44)      // internal RAM
    zBlob: -1877127508 (2417839788, 0x901d4eac) // external SDRAM
    nSize: -4          (4294967292, 0xfffffffc) // ??? This is not good!

    This is the only clue I have. I suspect some sort of calculation error, but it is not always -4 when it crashes.

    Thank you for any help you can provide, Michael

    opened by mightylastingcode 27
  • retrieving all keys in db with a cursor causes subsequent cursors to not see some keys

    When I run the following, the second printf displays 483 rather than 500. Removing the first while() loop that retrieves all of the keys causes the second printf to display 500. Is this expected?

    #include "unqlite.h"
    #include <stdio.h>
    
    #define N       500
    #define KEYLEN  8
    #define DATALEN 100
    
    int main(void) {
    
      unqlite *db;
      unqlite_kv_cursor *cursor;
      char key[KEYLEN];
      char data[DATALEN];
      char key_array[N][KEYLEN];
      int ikey;
      unqlite_int64 idata;
      int i, j, rc;
    
      unqlite_open(&db, "test.db", UNQLITE_OPEN_CREATE);
    
      for (i=0; i<N; i++) {
        sprintf(key, "key_%i", i);
        for (j=0; j<DATALEN; j++)
          data[j] = 'x';
        unqlite_kv_store(db, key, sizeof(key), data, sizeof(data));
      }
    
      unqlite_close(db);
    
      unqlite_open(&db, "test.db", UNQLITE_OPEN_CREATE);
    
      unqlite_kv_cursor_init(db, &cursor);
      unqlite_kv_cursor_reset(cursor);
    
      j = 0;
      while (unqlite_kv_cursor_valid_entry(cursor)) {
        unqlite_kv_cursor_key(cursor, &key_array[j], &ikey);
        unqlite_kv_cursor_next_entry(cursor);
        j += 1;
      }
      unqlite_kv_cursor_release(db, cursor);
      printf("%i\n", j);
    
      unqlite_kv_cursor_init(db, &cursor);
      unqlite_kv_cursor_reset(cursor);
    
      j = 0;
      while (unqlite_kv_cursor_valid_entry(cursor)) {
        unqlite_kv_cursor_key(cursor, &key, &ikey);
        unqlite_kv_fetch(db, key, sizeof(key), &data, &idata);
        unqlite_kv_cursor_next_entry(cursor);
        j += 1;
      }
      unqlite_kv_cursor_release(db, cursor);
      printf("%i\n", j);
    }
    

    I'm building the above against unqlite 1.1.6 with Apple LLVM 7.3.0 on OS X 10.11.6.

    opened by lebedov 18
  • unqlite_close() is not closing file handle when database is corrupt

    I am having a file-handle issue when using unqlite 1.1.9 on Windows. The following code reliably reproduces this behavior and has been tested on Windows 7 and 10.

    Basically, the code creates a corrupt file and tries to load a record from it, which results in UNQLITE_CORRUPT. It then calls unqlite_close() followed by the Windows routine DeleteFileA(), which fails with ERROR_SHARING_VIOLATION because the file handle is still open.

    Looking at the unqlite code, I think the problem is in unqlitePagerClose(). It does the following check to see whether unqliteOsCloseFree() should be called: if( !pPager->is_mem && pPager->iState > PAGER_OPEN )

    Should that be changed to a logical OR?

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <Windows.h>
    
    #include "unqlite.h"
    
    int main()
    {
        // Create a corrupt db file
        char* badData = "badData";
        char* dbPath = "test.db";
        FILE* dbFile = fopen(dbPath, "wb");
        fwrite(badData, strlen(badData), 1, dbFile);
        fclose(dbFile);
    
        // Open the db
        unqlite* db = nullptr;
        if (unqlite_open(&db, dbPath, UNQLITE_OPEN_CREATE) != UNQLITE_OK)
        {
            printf("Error opening %s\n", dbPath);
            return 1;
        }
    
        uint8_t key[10];
        uint8_t record[100];
        unqlite_int64 recordSize = sizeof(record);
    
        if (unqlite_kv_fetch(db, key, sizeof(key), record, &recordSize) == UNQLITE_CORRUPT)
        {
            // Closing the database is not closing the handle because
            // unqlitePagerClose() is not calling unqliteOsCloseFree()
            // Should this line be changed to a logical OR?
            // if( !pPager->is_mem && pPager->iState > PAGER_OPEN ){
            unqlite_close(db);
    
            // Try to delete the file
            if (!DeleteFileA(dbPath) && GetLastError() == ERROR_SHARING_VIOLATION)
            {
                printf("Failed to delete since handle is still open\n");
            }
        }
    
        return 0;
    }
    
    opened by kmvanbrunt 12
  • Fix/crosscompile

    A lot of automatic build tools, such as Travis CI and GitLab CI, use Docker containers to build software. UnQLite can be cross-compiled for Windows from Linux; this requires the MinGW-w64 cross compiler.

    However, within the source code there is an include for <Windows.h>. This is correct on Windows platforms but does not work when cross-compiling.

    In order to build a Windows binary of UnQLite from a Linux host, the include header has to be changed to <windows.h> (Windows uses the uppercase form; a Linux cross-compile with MinGW-w64 requires lower case).

    In order to preserve cross-platform source code, this commit fixes the issue and allows successful compilation both on Windows targets and when cross-compiling from a Linux host.

    This fix allows UnQLite to be built by CI environments.

    opened by gjrtimmer 10
  • Unqlite lost some key-value pair after commit

    I found a strange bug: for some keys, UnQLite loses a key-value pair after a commit. I wrote code to reproduce it:

    #include <string>
    #include <ctime>
    #include <cstdint>
    #include <cstdio>
    
    extern "C" {
    #include "../UnQLite-kv/unqlite.h"
    }
    
    struct TestStruct
    {
        double d1;
        double d2;
    };
    
    int main(int argc, char **argv)
    {
        std::string filename("TEST." + std::to_string(time(nullptr)));
        TestStruct test = { 1.0, 1.1 };
    
        // Fill database 
        {
            unqlite *db = nullptr;
    
            int res = unqlite_open(&db, filename.c_str(), UNQLITE_OPEN_CREATE);
            if (res != UNQLITE_OK)
            {
                return res;
            }
    
            for (int64_t i = 0; i < 165; ++i)
            {
                int res = unqlite_kv_store(db, &i, sizeof(int64_t), &test, sizeof(TestStruct));
                if (res != UNQLITE_OK)
                {
                    return res;
                }
            }
    
            res = unqlite_close(db);
            if (res != UNQLITE_OK)
            {
                return res;
            }
        }
    
        // Reopen
        {
            unqlite *db = nullptr;
    
            int res = unqlite_open(&db, filename.c_str(), UNQLITE_OPEN_CREATE);
            if (res != UNQLITE_OK)
            {
                return res;
            }
    
            // Write pair with key 162 again
            int64_t i_bug = 162;
            res = unqlite_kv_store(db, &i_bug, sizeof(int64_t), &test, sizeof(TestStruct));
            if (res != UNQLITE_OK)
            {
                return res;
            }
    
            // Force commit
            res = unqlite_commit(db);
            if (res != UNQLITE_OK)
            {
                return res;
            }
    
            // Test pair with key 164, it lost
            int64_t i = 164;
            TestStruct test1= {};
            unqlite_int64 buf_size = sizeof(TestStruct);
            res = unqlite_kv_fetch(db, &i, sizeof(int64_t), &test1, &buf_size);
            if (res != UNQLITE_OK)
            {
                printf("%d", res); //Yep, print -6 there
                return res;
            }
    
            res = unqlite_close(db);
            if (res != UNQLITE_OK)
            {
                return res;
            }
        }
    
        return 0;
    }
    

    For these types of key and value, after reopening the database, writing the pair with key 162 again, and forcing a commit, UnQLite loses the pair with key 164. I checked it with the latest unqlite, unqlite-kv, and vedis sources on MSVC 2013, Clang 3.8.1, and GCC 6.2.0.

    opened by onto 8
  • Data loss when defragmenting slave page

    I have not been able to create a standalone test case for this yet, but I experience data loss in production. Basically, when I write into the database, close it, reopen it, and try to fetch the data I just wrote, I sometimes get UNQLITE_NOTFOUND.

    I tracked the issue down to lhPageDefragment being called for a slave page. It clearly doesn't make sense to defragment a slave page, and lhPageDefragment doesn't expect to receive one. I added a guard at the very beginning of lhPageDefragment:

    if (pPage->pMaster != pPage)
      return UNQLITE_FULL;
    

    That fixed the immediate issue; there are no data losses any more. But a bigger issue could be hidden here: it seems to be a logical error to call lhPageDefragment for a slave page at all. Probably there should be another way to defragment slave pages.

    opened by Yuras 8
  • unqlite_array_fetch with numeric string keys does not work on a JSON array

    The documentation states that the unqlite_array_fetch function accepts numeric string keys. I tried, and it does not work.

    void test_array_fetch() {
        const char *srcName = "src";
        const char *dstName = "dst";
        const char *script = "$dst = $src;";
    
        unqlite *pDb;
        unqlite_vm *pVm;
    
        unqlite_open(&pDb, ":mem:", UNQLITE_OPEN_IN_MEMORY);
        unqlite_compile(pDb, script, -1, &pVm);
    
        unqlite_value *pArray = unqlite_vm_new_array(pVm);
        for (int i = 0x1; i < 0x10; i++) {
            unqlite_value *pVal = unqlite_vm_new_scalar(pVm);
            unqlite_value_int(pVal, i);
            unqlite_array_add_elem(pArray, NULL, pVal);
            unqlite_vm_release_value(pVm, pVal);
        }
    
        unqlite_vm_config(pVm, UNQLITE_VM_CONFIG_CREATE_VAR, srcName, pArray);
        unqlite_vm_release_value(pVm, pArray);
        unqlite_vm_exec(pVm);
    
        pArray = unqlite_vm_extract_variable(pVm, dstName);
        if (unqlite_value_is_json_array(pArray)) {
            printf("array_count: %d\n", unqlite_array_count(pArray));
        }
    
        unqlite_value *pVar = unqlite_array_fetch(pArray, "1", -1);
        printf("array_fetch: %p\n", pVar);
    
        if (pVar) {
            printf("value_is_int: %d\n", unqlite_value_is_int(pVar));
            printf("value_to_int: %d\n", unqlite_value_to_int(pVar));
        } else {
            printf("WARNING unqlite_array_fetch return null\n");
        }
    
        unqlite_vm_release_value(pVm, pArray);
        unqlite_vm_release(pVm);
        unqlite_close(pDb);
    }
    

    Result:

    Version: 1.1.9 (1001009)
    ------------------------
    array_count: 15
    array_fetch: 0x0
    WARNING unqlite_array_fetch return null
    
    opened by jlab13 7
  • Passing a callable to a foreign function

    I could not find any documentation on this: how can I pass a callable from Jx9 to a foreign function written in C, and call this callable on some data? I tried looking at how db_fetch_all is implemented, but it seems to use a function that is not in the public API (here).

    opened by mdorier 5
  • UNQLITE_CORRUPT

    Making many appends with unqlite_kv_append causes the UNQLITE_DB_MISUSE(pDb) check to return an error. I understand that this means something is overwriting nMagic, but what is the likely culprit? I am calling unqlite_kv_open, then append, then unqlite_kv_close. Is that the intended usage of append? Is the overwrite happening internally, or is there a way I am smashing this value based on input data? By the way, the test program that sent many appends was making thousands of 10-byte appends (sandwiched between open and close calls).
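    For context, here is a minimal sketch of the append pattern described above (the key name "log" and the chunk count are hypothetical, not from the original test program):

    ```c
    #include <stdio.h>
    #include "unqlite.h"

    int main(void)
    {
        unqlite *pDb;
        int i, rc;

        if (unqlite_open(&pDb, "append.db", UNQLITE_OPEN_CREATE) != UNQLITE_OK)
            return 1;

        /* Append many 10-byte chunks to the same record; unqlite_kv_append
         * grows the record in place instead of replacing it. */
        for (i = 0; i < 1000; i++) {
            rc = unqlite_kv_append(pDb, "log", -1, "0123456789", 10);
            if (rc != UNQLITE_OK) {
                printf("append #%d failed with rc=%d\n", i, rc);
                break;
            }
        }
        return unqlite_close(pDb) == UNQLITE_OK ? 0 : 1;
    }
    ```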

    opened by Gorillarock 5
  • Error while requesting database lock

    I am trying to just get started. I'm stuck with this error though. I'm using a Macbook.

    I pulled the source from here and made a build dir. Inside the build dir, I ran cmake .., which worked fine, then ran make, and it built fine. I then copied libunqlite.a and unqlite.h into my project and made this simple main.c file:

    #include <stdio.h>
    #include <unqlite.h>
    
    int main()
    {
      int rc;
      unqlite *pDb;
    
      // Open our database;
      rc = unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE);
      if (rc != UNQLITE_OK)
      {
        printf("%s", "Failed to open/create database.");
        return 1;
      }
    
      // Store some records
      rc = unqlite_kv_store(pDb, "test", -1, "Hello World", 11); //test => 'Hello World'
      if (rc != UNQLITE_OK)
      {
        printf("%s", "Failed to do kv store.");
        return 1;
      }
    
      unqlite_close(pDb);
      return 0;
    }
    

    First, I tried just the open and close statements. This did nothing, not even create a test.db.

    Second, I added the kv store statement. This made it create a test.db file, but the call to unqlite_kv_store returns -76, which is UNQLITE_LOCKERR; if I use the error-printing method from the examples, I get the message "Error while requesting database lock".

    I cannot seem to find any information about this error. So I don't really know where to go from here. Any help would be appreciated. Thanks!
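    In case it helps with debugging, the engine's last error message can be read back via unqlite_config() with the UNQLITE_CONFIG_ERR_LOG verb, which often says more than the bare return code. A minimal sketch (not from the original post):

    ```c
    #include <stdio.h>
    #include "unqlite.h"

    int main(void)
    {
        unqlite *pDb;
        int rc = unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE);
        if (rc != UNQLITE_OK)
            return 1;

        rc = unqlite_kv_store(pDb, "test", -1, "Hello World", 11);
        if (rc != UNQLITE_OK) {
            const char *zErr;
            int nLen;
            /* Extract the human-readable error log from the engine. */
            unqlite_config(pDb, UNQLITE_CONFIG_ERR_LOG, &zErr, &nLen);
            if (nLen > 0)
                fprintf(stderr, "Error: %.*s\n", nLen, zErr);
        }
        unqlite_close(pDb);
        return rc == UNQLITE_OK ? 0 : 1;
    }
    ```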

    opened by Mimerr 5
  • Possible memory leak in unqlite_commit()

    I am experiencing a memory leak when using unqlite 1.1.9 on Windows. The following code reliably produces this behavior and has been tested on Windows 7 and 10.

    Basically I'm noticing that calling unqlite_commit() less often aggravates the memory leak.

    The code first writes records to a database and only calls unqlite_commit() once. The memory usage is not reduced by that call to unqlite_commit().

    I then close and rewrite the database while calling unqlite_commit() after each write operation. The memory usage after writing all the records is significantly lower than with the first approach.

    Your team previously fixed a memory leak that I reported in 1.1.8 by adding a call to pager_release_page() in pager_commit_phase1(). Is there a possibility this isn't freeing all pages?

    The comments in the code explain all the results I am observing. Hopefully this code helps you track down the issue. Thanks.

    #include <stdlib.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    
    #include "unqlite.h"
    
    void randomBytes(uint8_t* bytes, size_t count)
    {
        for (size_t i = 0; i < count; i++)
        {
            bytes[i] = (rand() % 256);
        }
    }
    
    int main()
    {
        char* dbPath = "test.db";
        srand((unsigned int) time(nullptr));
    
        // Delete any existing database file
        unlink(dbPath);
    
        // Open the db
        unqlite* db = nullptr;
        if (unqlite_open(&db, dbPath, UNQLITE_OPEN_CREATE) != UNQLITE_OK)
        {
            printf("Error opening %s\n", dbPath);
            return 1;
        }
    
        // Insert twenty 1 MB records
        size_t numRecords = 20;
        size_t recordSize = 1024 * 1024;
        uint8_t* record = new uint8_t[recordSize];
        randomBytes(record, recordSize);
    
        for (size_t i = 0; i < numRecords; i++)
        {
            size_t key = i;
            if ((unqlite_kv_store(db, &key, (int) sizeof(key),
                                  record, recordSize) != UNQLITE_OK))
            {
                printf("Error saving data\n");
                return 1;
            }
        }
    
        // At this point on Windows, this process is using about 47 MB of RAM
        // Call commit which should free most of this RAM
        unqlite_commit(db);
    
        // Still using about 47 MB after commit
        // Free it with close
        unqlite_close(db);
    
        // Close freed most of the RAM as expected
        // Recreate the DB and this time commit after each call to unqlite_kv_store()
        unlink(dbPath);
        db = nullptr;
        if (unqlite_open(&db, dbPath, UNQLITE_OPEN_CREATE) != UNQLITE_OK)
        {
            printf("Error opening %s\n", dbPath);
            return 1;
        }
    
        for (size_t i = 0; i < numRecords; i++)
        {
            size_t key = i;
            if ((unqlite_kv_store(db, &key, (int) sizeof(key),
                record, recordSize) != UNQLITE_OK))
            {
                printf("Error saving data\n");
                return 1;
            }
    
            // Commit each time
            unqlite_commit(db);
        }
    
        // Now this process is only using about 9 MB of RAM
        // There seems to be a memory leak when committing less often
        // Is unqlite_commit() actually freeing all pages it commits to disk?
        unqlite_close(db);
    
        delete[] record;
        return 0;
    }
    
    opened by kmvanbrunt 5
  • lsm how to implement?

    1. Hash, B+Tree, R+Tree, LSM <- any references for an LSM implementation?

    2. What are the limitations of using UnQLite?

    I just tested it, but I am really curious about these issues.

    opened by hiqsociety 0
  • unqlite NPE on Android

    I encountered a null-pointer dereference in unqliteOsWrite in a multithreaded Android environment. I think it is a multithreading issue; however, the UNQLITE_ENABLE_THREADS compiler flag is enabled and unqlite_lib_is_threadsafe returns true.

    Exception code:

    Process Name: 'xxx'
    Thread Name: 'CACHE_INDEXER'
    signal 11 (SIGSEGV)  code 1 (SEGV_MAPERR)  fault addr 0000000000000000
      x0   0000000000000000  x1   0000007bbf9249d0  x2   0000000000000020  x3   0000000000003224
      x4   0000000000000021  x5   00000000fbd23f84  x6   0000000000000000  x7   00000000017301ae
      x8   0000000000000000  x9   0000000000000000  x10  000000000000000d  x11  0000000000000000
      x12  0000000000000000  x13  0000000000000f99  x14  000000000000000f  x15  0000007a8640a3d0
      x16  0000007a7f3d98e8  x17  0000007cd3817738  x18  0000000000001000  x19  0000007b941e7f14
      x20  0000007a8658c21c  x21  0000000000000000  x22  0000000000000020  x23  0000007bbf9249d0
      x24  0000007bbf9249d0  x25  0000007b7abaf020  x26  0000007b7abae1b0  x27  0000007b7abaf020
      x28  0000007bc5e93414  x29  0000007b7abae310  x30  0000007a7f35e758
      sp   0000007b7abae130  pc   0000007a7f35e884  pstate 0000000000001000
    

    And desymbolized:

    unqliteOsWrite
    /home/admin/jenkins_sigma_k8s2/workspace/android_so_build_2/ccdn/src/unicache/basic/mds/unqlite.c:24507
    lhStoreCell
    /home/admin/jenkins_sigma_k8s2/workspace/android_so_build_2/ccdn/src/unicache/basic/mds/unqlite.c:22721
    lhRecordInstall
    /home/admin/jenkins_sigma_k8s2/workspace/android_so_build_2/ccdn/src/unicache/basic/mds/unqlite.c:23082
    unqlite_kv_store
    /home/admin/jenkins_sigma_k8s2/workspace/android_so_build_2/ccdn/src/unicache/basic/mds/unqlite.c:4246
    

    It seems there is a null-pointer dereference in the following code:

    UNQLITE_PRIVATE int unqliteOsWrite(unqlite_file *id, const void *pBuf, unqlite_int64 amt, unqlite_int64 offset)
    {
      return id->pMethods->xWrite(id, pBuf, amt, offset);
    }
    

    Thanks for your help!

    opened by achieverForever 1
  • Performance

    Hi,

    I'm investigating improving our current internal file format to add more flexibility. It is basically based on a tree representation; each node can contain some data. So I implemented an unqlite version where I use one key/value pair for the tree node (a few bytes) and one for the data. Here are the results:

    10000 nodes with 1024 bytes => 10240000 bytes
    unqlite: 36511744 bytes  W: 404017us  R: 210433us
    legacy:  10420043 bytes  W: 12735us   R: 11907us
    file size: 3.5x  write time: 31.7x  read time: 17.67x

    I think 4096 is closer to the internal unqlite chunk size, so let's try it:

    10000 nodes * 4096 bytes = 40960000 bytes
    unqlite: 89309184 bytes  W: 850054us  R: 455387us
    legacy:  41140043 bytes  W: 30292us   R: 20585us
    file size: 2.7x  write time: 28.06x  read time: 22.12x

    So I'm very disappointed with the results, in both file size and time. I understand there will be some overhead in file size and time, but this looks like too much in our case. Any comments?

    opened by laurentcau 0
  • read/write offset

    Hi,

    It would be very helpful for us to have read and write (unqlite_kv_store/unqlite_kv_fetch) variants that take a data offset, for partial updates or partial reads. Do you think that is doable?
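    In the meantime, one partial-read workaround is unqlite_kv_fetch_callback(), which streams the record to a consumer callback in chunks instead of requiring one contiguous buffer. A sketch (the key name "blob" is hypothetical):

    ```c
    #include <stdio.h>
    #include "unqlite.h"

    /* Called repeatedly with consecutive chunks of the record. */
    static int consumer(const void *pData, unsigned int nLen, void *pUserData)
    {
        unsigned int *pTotal = (unsigned int *)pUserData;
        *pTotal += nLen;              /* e.g. count bytes, or copy only a range */
        fwrite(pData, 1, nLen, stdout);
        return UNQLITE_OK;            /* return UNQLITE_ABORT to stop early */
    }

    int main(void)
    {
        unqlite *pDb;
        unsigned int nTotal = 0;

        if (unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE) != UNQLITE_OK)
            return 1;
        unqlite_kv_store(pDb, "blob", -1, "some large payload", 18);
        unqlite_kv_fetch_callback(pDb, "blob", -1, consumer, &nTotal);
        unqlite_close(pDb);
        return nTotal == 18 ? 0 : 1;
    }
    ```

    This covers partial reads only; partial in-place updates would still need engine support.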

    opened by laurentcau 2