A very fast lightweight embedded database engine with a built-in query language.

Overview
upscaledb 2.2.1                                   Fri, 10 Mar 2017, 21:33:03 CET
(C) Christoph Rupp, [email protected]; http://www.upscaledb.com

This is the README file of upscaledb.

Contents:

1. About
2. Changes
3. Features
4. Known Issues/Bugs
5. Compiling
6. Testing and Example Code
7. API Documentation
8. Porting upscaledb
9. Migrating files from older versions
10. Licensing
11. Contact
12. Other Copyrights

1. About

upscaledb is a database engine written in C/C++. It is fast, production-proven
and easy to use.

This release contains a number of bug fixes, performance improvements and a
new API for bulk operations.

2. Changes

New Features
* Added a new API function for bulk operations (ups_db_bulk_operations in
	ups/upscaledb_int.h)
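
	The bulk API batches several operations into a single call. The sketch
	below is a hypothetical illustration only: the actual declaration lives in
	ups/upscaledb_int.h, and the struct and constant names used here
	(ups_operation_t, UPS_OP_INSERT) are assumptions that should be checked
	against that header.

	```c
	/* Hypothetical sketch of the bulk API; verify the actual struct layout
	 * and constants against ups/upscaledb_int.h before use. */
	#include <string.h>
	#include <ups/upscaledb_int.h>

	static ups_status_t bulk_insert(ups_db_t *db, ups_txn_t *txn) {
	  uint32_t keys[] = {1, 2, 3};
	  ups_operation_t ops[3];
	  memset(ops, 0, sizeof(ops));
	  for (int i = 0; i < 3; i++) {
	    ops[i].type = UPS_OP_INSERT;      /* assumed constant name */
	    ops[i].key.data = &keys[i];
	    ops[i].key.size = sizeof(keys[i]);
	    /* record left empty for brevity */
	  }
	  /* performs all three inserts in one call; each operation's individual
	     status is assumed to be reported in ops[i].result */
	  return ups_db_bulk_operations(db, txn, ops, 3, 0);
	}
	```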

Bugfixes
* Fixed compiler error related to inline assembly on gcc 4.8.x
* Fixed bug when ups_cursor_overwrite would overwrite a transactional record
	instead of the (correct) btree record
* Fixed several bugs in the duplicate key consolidation
* issue #80: fixed streamvbyte compilation for c++11
* issue #79: fixed crc32 failure when reusing deleted blobs spanning
	multiple pages
* Fixed a bug when recovering duplicates that were inserted with one of the
	UPS_DUPLICATE_INSERT_* flags
* Minor improvements to journalling performance
* Fixed compilation issues w/ gcc 6.2.1 (Thanks, Roel Brook)

Other Changes
* Performance improvements when appending keys at the end of the database
* The flags UPS_HINT_APPEND and UPS_HINT_PREPEND are now deprecated
* Removed the libuv dependency; switched to boost::asio instead
* Performance improvements when using many duplicate keys (with a
	duplicate table spanning multiple pages)
* Committed transactions are now batched before they are flushed to disk
* The integer compression codecs UPS_COMPRESSOR_UINT32_GROUPVARINT and
	UPS_COMPRESSOR_UINT32_STREAMVBYTE are now deprecated
* The integer compression codec UPS_COMPRESSOR_UINT32_MASKEDVBYTE is now a
	synonym for UPS_COMPRESSOR_UINT32_VARBYTE, but uses the MaskedVbyte
	library under the hood.
* Added Mingw compiler support (thanks, topilski)
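
A codec is selected per database when the database is created. A minimal
sketch, assuming the parameter and codec constants from the public header
ups/upscaledb.h (UPS_PARAM_RECORD_COMPRESSION, UPS_PARAM_KEY_COMPRESSION,
UPS_COMPRESSOR_ZLIB, UPS_COMPRESSOR_UINT32_VARBYTE); verify the exact names
against your installed headers:

    /* Create database #1 with zlib-compressed records and varbyte-compressed
     * uint32 keys. The parameter names are assumptions based on the
     * upscaledb 2.x headers. */
    #include <ups/upscaledb.h>

    static ups_status_t create_compressed_db(ups_env_t *env, ups_db_t **db) {
      ups_parameter_t params[] = {
        {UPS_PARAM_KEY_TYPE, UPS_TYPE_UINT32},
        {UPS_PARAM_RECORD_COMPRESSION, UPS_COMPRESSOR_ZLIB},
        {UPS_PARAM_KEY_COMPRESSION, UPS_COMPRESSOR_UINT32_VARBYTE},
        {0, 0}
      };
      return ups_env_create_db(env, db, 1, 0, &params[0]);
    }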

To see a list of all changes, look in the file ChangeLog.

3. Features

- Very fast sorted B+Tree with variable-length keys
- Basic schema support for POD types (e.g. uint32, uint64, real32)
- Very fast analytical functions
- Can run as an in-memory database
- Multiple databases in one file
- Record number databases ("auto-increment")
- Duplicate keys
- Logging and recovery
- Unlimited number of parallel transactions
- Transparent AES encryption
- Transparent CRC32 verification
- Various compression codecs for journal, keys and records using
    zlib, snappy, lzf
- Compression for uint32 keys
- Network access (remote databases) via TCP/Protocol Buffers
- Very fast bi-directional database cursors
- Configurable page size, cache size, key sizes etc.
- Runs on Linux, other Unix systems, Microsoft Windows and other platforms
- Uses memory-mapped I/O for fast disk access (but falls back to read/write if
    mmap is not available)
- Uses 64-bit file pointers and supports huge files (> 2 GB)
- Easy to use and well-documented
- Open source and released under the Apache License 2.0
- Wrappers for C++, Java, .NET, Erlang, Python, Ada and others
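
The features above are exposed through a small C API. A minimal sketch of
creating an environment, inserting a key and reading it back; error handling
is omitted for brevity, and ups_make_record is assumed to mirror the
ups_make_key helper (check ups/upscaledb.h):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <ups/upscaledb.h>

    int main() {
      ups_env_t *env;
      ups_db_t *db;
      ups_parameter_t params[] = {
        {UPS_PARAM_KEY_TYPE, UPS_TYPE_UINT32},
        {0, 0}
      };

      /* create an environment with one database using fixed-size uint32 keys */
      ups_env_create(&env, "hello.db", 0, 0664, 0);
      ups_env_create_db(env, &db, 1, 0, &params[0]);

      uint32_t k = 42;
      const char *value = "hello world";
      ups_key_t key = ups_make_key(&k, sizeof(k));
      ups_record_t record = ups_make_record((void *)value,
                                            (uint32_t)strlen(value) + 1);

      ups_db_insert(db, 0, &key, &record, 0);

      /* look the key up again */
      ups_record_t found = {0};
      if (ups_db_find(db, 0, &key, &found, 0) == UPS_SUCCESS)
        printf("%s\n", (const char *)found.data);

      ups_env_close(env, UPS_AUTO_CLEANUP);
      return 0;
    }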

4. Known Issues/Bugs

See https://github.com/cruppstahl/upscaledb/issues.

5. Compiling

5.1 Linux, macOS and other Unix systems

To compile upscaledb, run ./configure, make, make install.

Run `./configure --help' for more options (e.g. static/dynamic library,
build with debugging symbols etc).

5.2 Microsoft Visual Studio

A Visual Studio solution file for MSVC 2013 is provided in the "win32"
folder.
All libraries can be downloaded precompiled from the upscaledb webpage.

To download Microsoft Visual Studio Express Edition for free, go to
http://msdn.microsoft.com/vstudio/express/visualc/default.aspx.

5.3 Dependencies

On Ubuntu, the following packages are required:
  - libdb-dev (optional)
  - protobuf-compiler
  - libprotobuf-dev
  - libgoogle-perftools-dev
  - libboost-system-dev
  - libboost-filesystem-dev
  - libboost-thread-dev
  - libboost-dev

For Windows, precompiled dependencies are available here:
https://github.com/cruppstahl/upscaledb-alien

6. Testing and Example Code

'make' automatically builds several example programs in the directory
'samples'. To see upscaledb in action, just run 'samples/db1' or any other
sample ('win32/out/samples/db1/db1.exe' on Windows platforms).

7. API Documentation

The header files in 'include/ups' have extensive comments. Also, a doxygen
script is available; run 'make doc' to start doxygen. The generated
documentation is also available on the upscaledb web page.

8. Porting upscaledb

Porting upscaledb shouldn't be too difficult. All operating
system dependent functions are declared in '1os/*.h' and defined
in '1os/os_win32.cc' or '1os/os_posix.cc'.
Other compiler- and OS-specific macros are in 'include/ups/types.h'.
Most likely, these are the only files which have to be touched. See also
section 9 for important macros.

9. Migrating files from older versions

Usually, upscaledb releases are backwards compatible. There are some
exceptions, though; in those cases, tools are provided to migrate the
database. First, export your existing database with ups_export linked
against the old version (ups_export links statically and will NOT be
confused if your system has a newer version of upscaledb installed). Then
use the newest version of ups_import to import the data into a new database.
You can find ups_export and ups_import in the "tools" subdirectory.

    Example (ups_export of 2.1.2 was renamed to ups_export-2.1.2 for clarity):

    ups_export-2.1.2 input.db | ups_import --stdin output.db

10. Licensing

upscaledb is released under the Apache License 2.0. See the
file COPYING for more information.

For commercial use, licenses are available. Visit http://upscaledb.com
for more information.

11. Contact

The author of upscaledb is
    Christoph Rupp
    Paul-Preuss-Str. 63
    80995 Muenchen/Germany
    email: [email protected]
    web: http://www.upscaledb.com

12. Other Copyrights

The Google Protocol Buffers ("protobuf") library is Copyright 2008, Google Inc.
It has the following license:

    Copyright 2008, Google Inc.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are
    met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above
      copyright notice, this list of conditions and the following disclaimer
      in the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Google Inc. nor the names of its
      contributors may be used to endorse or promote products derived from
      this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    Code generated by the Protocol Buffer compiler is owned by the owner
    of the input file used when generating it.  This code is not
    standalone and requires a support library to be linked with it.  This
    support library is itself covered by the above license.

Issues
  • reserved identifier violation

    I would like to point out that identifiers like "HAM_HAMSTERDB_HPP__" and "HAM_TYPES_H__" do not conform to the naming conventions of the C++ language standard (identifiers containing two consecutive underscores are reserved). Would you like to adjust your selection of unique names?

    opened by elfring 12
  • Cursor iteration not returning results in sorted order

    The code below can be pasted into dotnet\unittests\CursorTest.cs and run with either ReproduceBug(true) or ReproduceBug(false). The point at which each variation fails has been annotated within the code. I've tried to provide a minimal example of the error; however, the following things seem to be required to trigger it: 1) using transactions, 2) inserting unequal-length arrays for the record values, and 3) using Cursor.Find followed by Cursor.Move to iterate over the contents of the DB.

    By the way, the .NET project is missing the file Properties\AssemblyInfo.cs in the GitHub repo.

            private void WithTransaction(Action<Transaction> f)
            {
                var txn = env.Begin();
                try
                {
                    f(txn);
                    txn.Commit();
                }
                catch
                {
                    txn.Abort();
                    throw;
                }
            }
    
            private void ReproduceBug(bool bugVariation)
            {
                env = new Hamster.Environment();
                env.Create("ntest.db", HamConst.HAM_ENABLE_TRANSACTIONS);   // Note: not using transactions works fine
    
                var prm = new Parameter();
                prm.name = HamConst.HAM_PARAM_KEY_TYPE;
                prm.value = HamConst.HAM_TYPE_UINT64;
                var prms = new Parameter[] { prm };
    
                db = new Database();
                db = env.CreateDatabase(1, HamConst.HAM_ENABLE_DUPLICATE_KEYS, prms);
    
                var k1 = new byte[] { 128, 93, 150, 237, 49, 178, 92, 8 };
                var k2 = new byte[] { 0, 250, 234, 1, 199, 250, 128, 8 };
                var k3 = new byte[] { 128, 17, 181, 113, 1, 220, 132, 8 };
    
                // print keys (note they are in ascending order as UInt64)
                Console.WriteLine("{0}", BitConverter.ToUInt64(k1, 0));
                Console.WriteLine("{0}", BitConverter.ToUInt64(k2, 0));
                Console.WriteLine("{0}", BitConverter.ToUInt64(k3, 0));
                Console.WriteLine();
    
                var v1  = new byte[46228];   // Note: using equal size value byte arrays works fine!
                var v11 = new byte[446380];
                var v12 = new byte[525933];
                var v21 = new byte[334157];
                var v22 = new byte[120392];
                WithTransaction(txn => db.Insert(txn, k1, v1, Hamster.HamConst.HAM_DUPLICATE));
                WithTransaction(txn => db.Insert(txn, k2, v11, Hamster.HamConst.HAM_DUPLICATE));
                WithTransaction(txn => db.Insert(txn, k2, v12, Hamster.HamConst.HAM_DUPLICATE));
                WithTransaction(txn => db.Insert(txn, k3, v21, Hamster.HamConst.HAM_DUPLICATE));
                WithTransaction(txn => db.Insert(txn, k3, v22, Hamster.HamConst.HAM_DUPLICATE));
    
                WithTransaction(txn =>
                {
                    using (var c = new Cursor(db, txn))
                    {
                        // Note: calling c.Move(HamConst.HAM_CURSOR_NEXT) instead works fine!
                        if (bugVariation)                   
                            c.Find(k1, HamConst.HAM_FIND_GEQ_MATCH);    
                        else
                            c.Find(k1);
    
                        var s1 = c.GetKey();
                        Console.WriteLine("{0}", BitConverter.ToUInt64(s1, 0));
    
                        c.Move(HamConst.HAM_CURSOR_NEXT);
                        var s2 = c.GetKey();
                        Console.WriteLine("{0}", BitConverter.ToUInt64(s2, 0));
    
                        c.Move(HamConst.HAM_CURSOR_NEXT);
                        var s3 = c.GetKey();
                        Console.WriteLine("{0}", BitConverter.ToUInt64(s3, 0));
    
                        c.Move(HamConst.HAM_CURSOR_NEXT);
                        var s4 = c.GetKey();
                        Console.WriteLine("{0}", BitConverter.ToUInt64(s4, 0));
    
                        c.Move(HamConst.HAM_CURSOR_NEXT);   // fails here when bugVariation == false
                        var s5 = c.GetKey();
                        Console.WriteLine("{0}", BitConverter.ToUInt64(s5, 0));
    
                        checkEqual(k1, s1);
                        checkEqual(k2, s2); // fails here when bugVariation == true
                        checkEqual(k2, s3);
                        checkEqual(k3, s4);
                        checkEqual(k3, s5);
                    }
                });
    
                env.Close();
    
                return;
            }
    
    opened by mjmckp 10
  • osx build fails with google perftools installed, undefined symbols

    The build fails when google perftools (2.0) is installed:

    Undefined symbols for architecture x86_64:
      "MallocExtension::instance()", referenced from:
          hamsterdb::Memory::get_global_metrics(ham_env_metrics_t *) in libhamsterdb.a(mem.o)
          hamsterdb::Memory::release_to_system() in libhamsterdb.a(mem.o)
      "tc_calloc", referenced from:
          HamsterdbFixture::callocTest() in hamsterdb.o
          hamsterdb::BtreeCursor::uncouple(unsigned int) in libhamsterdb.a(btree_cursor.o)
          hamsterdb::BtreeCursor::clone(hamsterdb::BtreeCursor *) in libhamsterdb.a(btree_cursor.o)
          hamsterdb::BtreeCursor::get_duplicate_table(hamsterdb::PDupeTable **, bool) in libhamsterdb.a(btree_cursor.o)
      "tc_free", referenced from:
          hamsterdb::ByteArray::clear() in db.o
          hamsterdb::ExtKeyCache::ExtKeyHelper::remove_if(hamsterdb::ExtKeyCache::ExtKey *) in extkeys.o
          hamsterdb::ExtKeyCache::remove(unsigned long) in extkeys.o
          HamsterdbFixture::callocTest() in hamsterdb.o
          hamsterdb::JournalFixture::compareJournal(hamsterdb::Journal *, hamsterdb::LogEntry *, unsigned int) in journal.o
          hamsterdb::JournalFixture::appendEraseTest() in journal.o
          hamsterdb::JournalFixture::appendPartialInsertTest() in journal.o
          ...
      "tc_malloc", referenced from:
          hamsterdb::ExtKeyCache::insert(unsigned long, unsigned int, unsigned char const *) in extkeys.o
          hamsterdb::Database::copy_key(ham_key_t const *, ham_key_t *) in misc.o
          hamsterdb::BtreeIndex::copy_key(hamsterdb::PBtreeKey const *, ham_key_t *) in libhamsterdb.a(btree.o)
          hamsterdb::ExtKeyCache::insert(unsigned long, unsigned int, unsigned char const *) in libhamsterdb.a(btree_insert.o)
          hamsterdb::Database::copy_key(ham_key_t const *, ham_key_t *) in libhamsterdb.a(btree_insert.o)
          hamsterdb::Database::copy_key(ham_key_t const *, ham_key_t *) in libhamsterdb.a(btree_cursor.o)
          hamsterdb::DiskDevice::read_page(hamsterdb::Page *) in libhamsterdb.a(env.o)
          ...
      "tc_realloc", referenced from:
          void * hamsterdb::Memory::reallocate(void *, unsigned long) in libhamsterdb.a(btree.o)
          void * hamsterdb::Memory::reallocate(void *, unsigned long) in libhamsterdb.a(txn_cursor.o)
          void * hamsterdb::Memory::reallocate(void *, unsigned long) in libhamsterdb.a(db.o)
          void * hamsterdb::Memory::reallocate(void *, unsigned long) in libhamsterdb.a(blob_manager_disk.o)
          void * hamsterdb::Memory::reallocate(void *, unsigned long) in libhamsterdb.a(blob_manager_inmem.o)
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

    opened by jconley 9
  • applying the whitespace police so I'll have an easier job merging other bits

    See also private email dd. 2012/mar/05; the difference between this bugger and your latest should be the introduction of a little C tool called wsclean (tools/wsclean.c) to process whitespace in sourcecode (remove trailing WS, that sort of thing).

    The other changes should all be WS, as a lot of trailing WS has been removed and TABs removed (you can always go and rerun the tool to 'entab' or 'retab' the code; I suggest you use 'retab' as it'll only put TABs in the leading WS, keeping everything else spaces so alignment/layout stays neat, whatever you do).

    Manual WS editing has happened to a few sources, particularly rb.h

    wsclean comes with a msvc2010/wsclean project; I haven't yet added it to the Makefiles.

    opened by GerHobbelt 8
  • Unexpected cursor behaviour with UPS_ENABLE_TRANSACTIONS

    Please look at the sample:

    #include <iostream>
    #include <ups/upscaledb.h>

    int main()
    {
        ups_env_t* env;
        //ups_env_create(&env, "test.db", UPS_ENABLE_TRANSACTIONS, 0664, 0);
        ups_env_create(&env, "test.db", 0, 0664, 0);
    
        ups_parameter_t params[] = {
        {UPS_PARAM_KEY_TYPE, UPS_TYPE_UINT32},
        {0, }
        };
    
        ups_db_t* db;
        ups_env_create_db(env, &db, 1, 0, &params[0]);
    
        for (int i = 0; i < 4; i++)
        {
            ups_key_t key = ups_make_key(&i, sizeof(i));
            ups_record_t record = {0};
    
            ups_db_insert(db, 0, &key, &record, 0);
        }
    
        ups_cursor_t* cur;
        ups_cursor_create(&cur, db, 0, 0);
    
        ups_key_t cur_key;
    
        int key_val = -1;
    
        ups_cursor_move(cur, &cur_key, 0, UPS_CURSOR_LAST);
        std::cout << (key_val = *(int*)cur_key.data) << std::endl;
        ups_cursor_move(cur, &cur_key, 0, UPS_CURSOR_NEXT);
        std::cout << (key_val = *(int*)cur_key.data) << std::endl;
        ups_cursor_move(cur, &cur_key, 0, UPS_CURSOR_PREVIOUS);
        std::cout << (key_val = *(int*)cur_key.data) << std::endl;
    
        return 0;
    }
    

    With transactions disabled in standard output I read: 3 3 2

    With transactions enabled in standard output I read: 3 3 3

    opened by Amadeszueusz 7
  • CentOS 6.8 - Error Building Upscaledb 2.2.1 - ./configure: line 21049:

    Steps for Build Upscaledb

    Install Step For uname -a Linux SERVER 2.6.32-642.15.1.el6.x86_64 #1 SMP Fri Feb 24 14:31:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux cat /etc/redhat-release CentOS release 6.8 (Final)

    Library yum install -y binutils make gc gcc gcc-c++ libtool autoconf automake ; yum install -y protobuf-compiler protobuf-devel ;

    Library Boost: #yum install -y boost boost-devel ; yum erase -y boost boost-* ; mkdir -p /dados_ssd/setup/boost ; cd /dados_ssd/setup/boost ; wget https://sourceforge.net/projects/boost/files/boost/1.63.0/boost_1_63_0.tar.gz ; tar xzf boost_1_63_0.tar.gz ; cd boost_1_63_0 ; sh bootstrap.sh ; ./b2 && ./b2 install ;

    Download mkdir -p /dados_ssd/setup/upscaledb ; cd /dados_ssd/setup/upscaledb ; wget https://github.com/cruppstahl/upscaledb/archive/topic/2.2.1.zip -O upscaledb-2.2.1.zip ; unzip upscaledb-2.2.1.zip ;

    Compile cd /dados_ssd/setup/upscaledb/upscaledb-topic-2.2.1 ; sh bootstrap.sh ; ./configure --prefix=/opt/upscaledb-2.2.1 --disable-java ; make -j 8 ; make install ;

    Log

    [[email protected] setup]# uname -a Linux SERVER 2.6.32-642.15.1.el6.x86_64 #1 SMP Fri Feb 24 14:31:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux [[email protected] setup]# cat /etc/redhat-release CentOS release 6.8 (Final) [[email protected] setup]# yum install -y binutils make gc gcc gcc-c++ libtool autoconf automake ; Loaded plugins: fastestmirror Setting up Install Process Loading mirror speeds from cached hostfile

    • base: centos.xpg.com.br
    • epel: mirror.globo.com
    • extras: centos.xpg.com.br
    • updates: centos.xpg.com.br Package binutils-2.20.51.0.2-5.44.el6.x86_64 already installed and latest version Package 1:make-3.81-23.el6.x86_64 already installed and latest version Package gc-7.1-12.el6_4.x86_64 already installed and latest version Package gcc-4.4.7-17.el6.x86_64 already installed and latest version Package gcc-c++-4.4.7-17.el6.x86_64 already installed and latest version Package libtool-2.2.6-15.5.el6.x86_64 already installed and latest version Package autoconf-2.63-5.1.el6.noarch already installed and latest version Package automake-1.11.1-4.el6.noarch already installed and latest version Nothing to do [[email protected] setup]# yum install -y protobuf-compiler protobuf-devel ; Loaded plugins: fastestmirror Setting up Install Process Loading mirror speeds from cached hostfile
    • base: centos.xpg.com.br
    • epel: mirror.globo.com
    • extras: centos.xpg.com.br
    • updates: centos.xpg.com.br Package protobuf-compiler-2.3.0-9.el6.x86_64 already installed and latest version Package protobuf-devel-2.3.0-9.el6.x86_64 already installed and latest version Nothing to do [[email protected] setup]#

    [[email protected] upscaledb]# # Compile [[email protected] upscaledb]# cd /dados_ssd/setup/upscaledb/upscaledb-topic-2.2.1 ; [[email protected] upscaledb-topic-2.2.1]# sh bootstrap.sh ;

    • libtoolize
      libtoolize: putting auxiliary files in `.'.
      libtoolize: linking file `./ltmain.sh'
      libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
      libtoolize: linking file `m4/libtool.m4'
      libtoolize: linking file `m4/ltoptions.m4'
      libtoolize: linking file `m4/ltsugar.m4'
      libtoolize: linking file `m4/ltversion.m4'
      libtoolize: linking file `m4/lt~obsolete.m4'
    • aclocal -I m4
    • autoconf
    • automake --add-missing configure.ac:34: installing ./compile' configure.ac:14: installing./config.guess' configure.ac:14: installing ./config.sub' configure.ac:11: installing./install-sh' configure.ac:11: installing ./missing' 3rdparty/json/Makefile.am: installing./depcomp' [[email protected] upscaledb-topic-2.2.1]# ./configure --prefix=/opt/upscaledb-2.2.1 --disable-java ; checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for gcc option to accept ISO C99... -std=gnu99 checking for gcc -std=gnu99 option to accept ISO Standard C... (cached) -std=gnu99 checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking for a BSD-compatible install... /usr/bin/install -c checking for a sed that does not truncate output... /bin/sed checking how to run the C preprocessor... gcc -std=gnu99 -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for an ANSI C-conforming const... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... 
yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for size_t... yes checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for a sed that does not truncate output... (cached) /bin/sed checking for fgrep... /bin/grep -F checking for ld used by gcc -std=gnu99... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1966080 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for ar... ar checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc -std=gnu99 object... ok checking for dlfcn.h... yes checking whether we are using the GNU C++ compiler... (cached) yes checking whether g++ accepts -g... (cached) yes checking dependency style of g++... (cached) gcc3 checking how to run the C++ preprocessor... g++ -E checking for objdir... .libs checking if gcc -std=gnu99 supports -fno-rtti -fno-exceptions... no checking for gcc -std=gnu99 option to produce PIC... -fPIC -DPIC checking if gcc -std=gnu99 PIC flag -fPIC -DPIC works... yes checking if gcc -std=gnu99 static flag -static works... no checking if gcc -std=gnu99 supports -c -o file.o... yes checking if gcc -std=gnu99 supports -c -o file.o... (cached) yes checking whether the gcc -std=gnu99 linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... 
yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking for ld used by g++... /usr/bin/ld -m elf_x86_64 checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking for g++ option to produce PIC... -fPIC -DPIC checking if g++ PIC flag -fPIC -DPIC works... yes checking if g++ static flag -static works... no checking if g++ supports -c -o file.o... yes checking if g++ supports -c -o file.o... (cached) yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether gcc -std=gnu99 and cc understand -c and -o together... yes checking for off_t... yes checking for stdlib.h... (cached) yes checking for unistd.h... (cached) yes checking for getpagesize... yes checking for working mmap... yes checking for mmap... yes checking for munmap... yes checking for madvise... yes checking for getpagesize... (cached) yes checking for fdatasync... yes checking for fsync... yes checking for writev... yes checking for pread... yes checking for pwrite... yes checking for posix_fadvise... yes checking for usleep... yes checking for sched_yield... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking for unistd.h... (cached) yes checking whether the compiler supports GCC C++ ABI name demangling... yes checking for Boost headers version >= 1.53.0... yes checking for Boost's header version... 
    1_63 checking for the toolset name used by Boost for g++... gcc44 -gcc checking boost/system/error_code.hpp usability... yes checking boost/system/error_code.hpp presence... yes checking for boost/system/error_code.hpp... yes checking for the Boost system library... yes checking for boost/system/error_code.hpp... (cached) yes checking for the Boost system library... (cached) yes checking boost/filesystem/path.hpp usability... yes checking boost/filesystem/path.hpp presence... yes checking for boost/filesystem/path.hpp... yes checking for the Boost filesystem library... (cached) yes checking for the flags needed to use pthreads... -pthread checking for boost/system/error_code.hpp... (cached) yes checking for the Boost system library... (cached) yes checking boost/thread.hpp usability... yes checking boost/thread.hpp presence... yes checking for boost/thread.hpp... yes checking for the Boost thread library... (cached) yes
    ./configure: line 21049: syntax error near unexpected token `;'
    ./configure: line 21049: `; then'
    [[email protected] upscaledb-topic-2.2.1]#
    opened by fmgdias 7
  • Problem compiling on Centos 6 with tc_malloc

    Hello,

    I'm trying to compile version 2.1.1 on centos 6 + EPEL (for gperftools).

    configure doesn't seem very happy about the tcmalloc libraries, because one of the checks fails:

    checking google/tcmalloc.h usability... yes
    checking google/tcmalloc.h presence... yes
    checking for google/tcmalloc.h... yes
    checking for tc_malloc in -ltcmalloc_minimal... no
    

    But in some way, tc_malloc support is still enabled, because the build fails in samples:

    Making all in samples
    make[2]: Entering directory `/root/hamsterdb-2.1.1/samples'
      CC     db1.o
      CCLD   db1
    ../src/.libs/libhamsterdb.so: undefined reference to `tc_free'
    ../src/.libs/libhamsterdb.so: undefined reference to `tc_malloc'
    ../src/.libs/libhamsterdb.so: undefined reference to `tc_calloc'
    ../src/.libs/libhamsterdb.so: undefined reference to `tc_realloc'
    ../src/.libs/libhamsterdb.so: undefined reference to `MallocExtension::instance()'
    collect2: ld returned 1 exit status
    make[2]: *** [db1] Error 1
    make[2]: Leaving directory `/root/hamsterdb-2.1.1/samples'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/root/hamsterdb-2.1.1'
    make: *** [all] Error 2
    

    gperftools are from EPEL. Synced from upstream svn r219.

    opened by brancaleone 7
  • Cannot compile topic/2.2.1 branch

    There seem to be a couple of files missing: ..\..\3rdparty\for\for.c and ..\..\unittests\error.cpp. The full list of errors is below.

    Error	3	error C1083: Cannot open source file: '..\..\3rdparty\for\for.c': No such file or directory	D:\upscaledb\win32\msvc2013\c1	lib
    Error	4	error C1083: Cannot open source file: '..\..\3rdparty\for\for.c': No such file or directory	D:\upscaledb\win32\msvc2013\c1	dll
    Error	6	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_db2
    Error	21	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_db4
    Error	22	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_env1
    Error	23	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_db3
    Error	24	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_db5
    Error	25	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_db1
    Error	26	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_env2
    Error	27	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	ups_dump
    Error	33	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_env3
    Error	34	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_db6
    Error	35	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	ups_info
    Error	36	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	ups_import
    Error	37	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	ups_export
    Error	46	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_client1
    Error	47	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	sample_server1
    Error	52	error LNK1104: cannot open file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	upszilla
    Error	64	error LNK1181: cannot open input file 'D:\upscaledb\win32\msvc2013\out\lib_debug\libupscaledb-2.2.1.lib'	D:\upscaledb\win32\msvc2013\LINK	server_dll
    Error	169	error C1083: Cannot open source file: '..\..\unittests\error.cpp': No such file or directory	D:\upscaledb\win32\msvc2013\c1xx	unittests
    Error	45	error C1083: Cannot open include file: 'jni.h': No such file or directory	d:\upscaledb\java\src\de_crupp_upscaledb_cursor.h	2	1	java_dll
    Error	82	error C1189: #error :  WinSock.h has already been included	d:\upscaledb-alien\boost\boost\asio\detail\socket_types.hpp	24	1	ups_bench
    Error	140	error C2059: syntax error : '}'	d:\upscaledb\unittests\os.cpp	253	1	unittests
    Error	141	error C2065: 'v' : undeclared identifier	d:\upscaledb\unittests\os.cpp	253	1	unittests
    Error	142	error C1903: unable to recover from previous error(s); stopping compilation	d:\upscaledb\unittests\os.cpp	253	1	unittests
    
    opened by mjmckp 6
  • Compile error with VS2013

    I'm attempting to compile a debug build of upscaledb on Windows with Visual Studio 2013. I have a clone of both the upscaledb and upscaledb-alien repos. However, the compilation fails with:

    2>d:\upscaledb\src\4env\env_remote.cc(270): error C2039: 'set_compare_name' : is not a member of 'upscaledb::EnvCreateDbRequest'
    2>          d:\upscaledb\win32\msvc2013\2protobuf\messages.pb.h(2268) : see declaration of 'upscaledb::EnvCreateDbRequest'
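
    The missing `set_compare_name` setter suggests the checked-in `messages.pb.h` is older than what `env_remote.cc` expects. A possible fix, sketched here with assumed paths (inferred from the build output and protoc's `X.proto` → `X.pb.h` naming convention), is to regenerate the protobuf sources:

    ```shell
    # Assumed location of the .proto file; adjust to the actual repo layout.
    cd win32/msvc2013/2protobuf
    protoc --cpp_out=. messages.proto
    ```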
    
    opened by mjmckp 5
  • Add Cursor.TryFind to hamsterdb-dotnet

    Existing method Cursor.Find throws an exception when the input key is not found; Cursor.TryFind returns a boolean value indicating whether the key was found.
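
    The proposed contract can be sketched as follows. This is an illustration against a `std::map`, not the actual hamsterdb-dotnet binding (whose cursor wraps the native upscaledb cursor API):

    ```cpp
    #include <cassert>
    #include <map>
    #include <stdexcept>
    #include <string>

    // Illustration of the Find/TryFind contract described above,
    // sketched against a std::map rather than the real Cursor class.
    struct Cursor {
      std::map<std::string, std::string> &db;

      // Find: throws when the key is absent.
      std::string Find(const std::string &key) {
        auto it = db.find(key);
        if (it == db.end())
          throw std::runtime_error("key not found");
        return it->second;
      }

      // TryFind: returns false instead of throwing; fills *record on success.
      bool TryFind(const std::string &key, std::string *record) {
        auto it = db.find(key);
        if (it == db.end())
          return false;
        *record = it->second;
        return true;
      }
    };

    int main() {
      std::map<std::string, std::string> m{{"a", "1"}};
      Cursor c{m};
      std::string rec;
      assert(c.TryFind("a", &rec) && rec == "1");
      assert(!c.TryFind("missing", &rec)); // no exception, just false
      return 0;
    }
    ```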

    opened by mjmckp 5
  • Make include guards unique

    opened by elfring 5
  • Fails to find boost in "make" step

    Despite the configure script successfully finding the required boost libraries (system, filesystem, thread, and chrono) when I pass --with-boost=/path/to/my/installation/of/boost, the make step fails with this error:

    In file included from database.cc:19:
    configuration.h:27:10: fatal error: boost/cstdint.hpp: No such file or directory
       27 | #include <boost/cstdint.hpp> // MSVC 2008 does not have stdint
    

    I checked and my installation of boost does have this header.

    I'm on an Ubuntu machine. Boost was installed manually to a custom prefix (not as a system package).
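
    When configure-time detection succeeds but the compile step still cannot find the headers, the boost include path usually never made it into `CPPFLAGS`. A common autotools workaround (the prefix below is a placeholder for the custom installation path) is to pass the paths explicitly:

    ```shell
    # Hypothetical prefix; adjust to the actual boost installation.
    BOOST_ROOT=/path/to/my/installation/of/boost
    ./configure --with-boost="$BOOST_ROOT" \
                CPPFLAGS="-I$BOOST_ROOT/include" \
                LDFLAGS="-L$BOOST_ROOT/lib"
    make
    ```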

    opened by mdorier 0
  • Can't build from outside the source tree

    Usually projects are built by creating a build directory and calling configure and make from there, so as not to pollute the source tree. However, when doing so, upscaledb's Makefile fails to find some headers from the 3rdparty directory.

    Steps to reproduce:

    ./bootstrap.sh
    mkdir build
    cd build
    ../configure --with-boost=...
    make
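
    A likely cause is that the 3rdparty include paths are written relative to the current directory, which only resolves for in-tree builds. A sketch of the kind of Makefile.am change that makes out-of-tree (VPATH) builds work; the exact variable placement is an assumption, and the simdcomp path is taken from the compile log in the next issue:

    ```make
    # Hypothetical fragment: locate bundled headers via $(top_srcdir) so that
    # out-of-tree (VPATH) builds can find them as well as in-tree builds.
    AM_CPPFLAGS = -I$(top_srcdir)/3rdparty/simdcomp/include
    ```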
    
    opened by mdorier 0
  • Compilation fails on non-SSE platform

    I'm trying to cross-compile UpscaleDB for a Cortex A8, but this fails because the toolchain is missing some headers required for SSE instructions (which is expected, since the processor doesn't support SSE).

    I have added '--disable-simd' to the configure arguments, but the problem still occurs.

    | Making all in unittests
    | make[2]: Entering directory '/home/slotmv/pb/build/workspace/sources/upscaledb/unittests'
    |   CXX      zint32.o
    |   CXX      recovery.o
    |   CXX      issue32.o
    |   CXX      issue101.o
    |   CXX      aes.o
    |   CXX      issue43.o
    |   CXXLD    issue32
    | In file included from ../3rdparty/simdcomp/include/simdcomp.h:12:0,
    |                  from zint32.cpp:25:
    | ../3rdparty/simdcomp/include/simdbitpacking.h:10:23: fatal error: emmintrin.h: No such file or directory
    |  #include <emmintrin.h>
    |                        ^
    | compilation terminated.
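
    Independent of why `--disable-simd` has no effect here, the usual portability pattern is to include the SSE headers only when the compiler actually targets SSE2. A self-contained sketch of that guard (not upscaledb's actual code):

    ```c
    /* Sketch of a standard portability guard: only pull in SSE intrinsics
     * when the compiler defines __SSE2__, and fall back to a scalar path
     * otherwise. On an ARM Cortex-A8 toolchain __SSE2__ is not defined,
     * so <emmintrin.h> is never included. */
    #include <assert.h>
    #include <stdio.h>

    #if defined(__SSE2__)
    #include <emmintrin.h>
    #define HAVE_SSE2 1
    #else
    #define HAVE_SSE2 0
    #endif

    int main(void) {
      /* HAVE_SSE2 is always defined, to either 0 or 1. */
      assert(HAVE_SSE2 == 0 || HAVE_SSE2 == 1);
      printf("SSE2 path compiled in: %d\n", HAVE_SSE2);
      return 0;
    }
    ```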
    
    
    
    opened by vslotman 0
  • Three unittests failed

    Running the unittests, I got the following result:

    make test
    cd unittests && make && ./test && ./issue32 -i && ./issue32 -r && ./issue43
    make[1]: Entering directory '/home/romz/tmp/upscaledb/unittests'
    make[1]: Nothing to be done for 'all'.
    make[1]: Leaving directory '/home/romz/tmp/upscaledb/unittests'
    5upscaledb/upscaledb.cc[142]: transactions are disabled (see UPS_ENABLE_TRANSACTIONS)
    5upscaledb/upscaledb.cc[191]: parameter 'txn' must not be NULL
    4db/db_local.cc[1114]: invalid key size (2 instead of 1)
    4db/db_local.cc[1114]: invalid key size (2 instead of 1)
    4db/db_local.cc[1114]: invalid key size (0 instead of 1)
    4db/db_local.cc[1114]: invalid key size (0 instead of 1)
    4db/db_local.cc[1114]: invalid key size (3 instead of 2)
    4db/db_local.cc[1114]: invalid key size (3 instead of 2)
    4db/db_local.cc[1114]: invalid key size (1 instead of 2)
    4db/db_local.cc[1114]: invalid key size (1 instead of 2)
    4db/db_local.cc[1114]: invalid key size (5 instead of 4)
    4db/db_local.cc[1114]: invalid key size (5 instead of 4)
    4db/db_local.cc[1114]: invalid key size (3 instead of 4)
    4db/db_local.cc[1114]: invalid key size (3 instead of 4)
    4db/db_local.cc[1114]: invalid key size (9 instead of 8)
    4db/db_local.cc[1114]: invalid key size (9 instead of 8)
    4db/db_local.cc[1114]: invalid key size (7 instead of 8)
    4db/db_local.cc[1114]: invalid key size (7 instead of 8)
    4db/db_local.cc[1114]: invalid key size (5 instead of 4)
    4db/db_local.cc[1114]: invalid key size (5 instead of 4)
    4db/db_local.cc[1114]: invalid key size (3 instead of 4)
    4db/db_local.cc[1114]: invalid key size (3 instead of 4)
    4db/db_local.cc[1114]: invalid key size (9 instead of 8)
    4db/db_local.cc[1114]: invalid key size (9 instead of 8)
    4db/db_local.cc[1114]: invalid key size (7 instead of 8)
    4db/db_local.cc[1114]: invalid key size (7 instead of 8)
    4db/db_local.cc[1114]: invalid key size (9 instead of 8)
    4db/db_local.cc[1114]: invalid key size (9 instead of 8)
    4db/db_local.cc[1114]: invalid key size (7 instead of 8)
    4db/db_local.cc[1114]: invalid key size (7 instead of 8)
    4db/db_local.cc[1168]: invalid key size (0 instead of 80)
    4db/db_local.cc[1168]: invalid key size (0 instead of 80)
    5upscaledb/upscaledb.cc[1059]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[1059]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[499]: Journal compression parameters are only allowed in ups_env_create
    4env/env_local.cc[620]: Record compression parameters are only allowed in ups_env_create_db
    4env/env_local.cc[544]: Key compression only allowed for unlimited binary keys (UPS_TYPE_BINARY
    4env/env_local.cc[544]: Key compression only allowed for unlimited binary keys (UPS_TYPE_BINARY
    4env/env_local.cc[430]: unknown algorithm for record compression
    1os/os_posix.cc[336]: creating file data/ failed with status 21 (Is a directory)
    1os/os_posix.cc[382]: opening file xxxxxx failed with status 2 (No such file or directory)
    5upscaledb/upscaledb.cc[961]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[965]: parameter 'record' must not be NULL
    5upscaledb/upscaledb.cc[1026]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[912]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[1137]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[1318]: parameter 'record' must not be NULL
    5upscaledb/upscaledb.cc[1314]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[1211]: parameter 'record' must not be NULL
    5upscaledb/upscaledb.cc[318]: combination of UPS_IN_MEMORY and UPS_ENABLE_CRC32 not allowed
    3page_manager/page_manager.cc[110]: crc32 mismatch in page 16384: 0x6084e7fa != 0xd31637ab
    3page_manager/page_manager.cc[110]: crc32 mismatch in page 32768: 0xf70c8fa9 != 0x6301fb9d
    4txn/txn_local.cc[350]: Txn cannot be aborted till all attached Cursors are closed
    4txn/txn_local.cc[337]: Txn cannot be committed till all attached Cursors are closed
    5upscaledb/upscaledb.cc[1247]: combination of UPS_ONLY_DUPLICATES and UPS_SKIP_DUPLICATES not allowed
    5upscaledb/upscaledb.cc[1400]: parameter 'cursor' must not be NULL
    5upscaledb/upscaledb.cc[1404]: parameter 'count' must not be NULL
    5upscaledb/upscaledb.cc[1247]: combination of UPS_ONLY_DUPLICATES and UPS_SKIP_DUPLICATES not allowed
    5upscaledb/upscaledb.cc[1400]: parameter 'cursor' must not be NULL
    5upscaledb/upscaledb.cc[1404]: parameter 'count' must not be NULL
    5upscaledb/upscaledb.cc[818]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[596]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[592]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[638]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[634]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[614]: cannot create database in a read-only environment
    5upscaledb/upscaledb.cc[991]: cannot insert in a read-only database
    5upscaledb/upscaledb.cc[1040]: cannot erase from a read-only database
    5upscaledb/upscaledb.cc[1223]: cannot overwrite in a read-only database
    5upscaledb/upscaledb.cc[1334]: cannot insert to a read-only database
    5upscaledb/upscaledb.cc[1382]: cannot erase from a read-only database
    1os/os_posix.cc[382]: opening file xxxxxx... failed with status 2 (No such file or directory)
    5upscaledb/upscaledb.cc[465]: parameter 'env' must not be NULL
    1os/os_posix.cc[382]: opening file xxxtest.db failed with status 2 (No such file or directory)
    5upscaledb/upscaledb.cc[345]: combination of UPS_CACHE_UNLIMITED and cache size != 0 not allowed
    5upscaledb/upscaledb.cc[538]: unknown parameter 257
    5upscaledb/upscaledb.cc[674]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[679]: parameter 'oldname' must not be 0
    5upscaledb/upscaledb.cc[683]: parameter 'newname' must not be 0
    5upscaledb/upscaledb.cc[687]: parameter 'newname' must be lower than 0xf000
    5upscaledb/upscaledb.cc[710]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[714]: parameter 'name' must not be 0
    5upscaledb/upscaledb.cc[733]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[737]: parameter 'names' must not be NULL
    5upscaledb/upscaledb.cc[741]: parameter 'length' must not be NULL
    5upscaledb/upscaledb.cc[818]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[596]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[592]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[638]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[634]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[339]: combination of UPS_IN_MEMORY and cache size != 0 not allowed
    5upscaledb/upscaledb.cc[674]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[679]: parameter 'oldname' must not be 0
    5upscaledb/upscaledb.cc[683]: parameter 'newname' must not be 0
    5upscaledb/upscaledb.cc[687]: parameter 'newname' must be lower than 0xf000
    5upscaledb/upscaledb.cc[710]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[714]: parameter 'name' must not be 0
    5upscaledb/upscaledb.cc[733]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[737]: parameter 'names' must not be NULL
    5upscaledb/upscaledb.cc[741]: parameter 'length' must not be NULL
    1os/os_posix.cc[336]: creating file /::asdf.jrn0 failed with status 13 (Permission denied)
    1os/os_posix.cc[336]: creating file /::asdf.jrn0 failed with status 13 (Permission denied)
    1os/os_posix.cc[382]: opening file /::asdf.jrn0 failed with status 2 (No such file or directory)
    3journal/journal.cc[266]: ups_db_close() failed w/ error -14 (Internal error)
    1os/os_posix.cc[382]: opening file __98324kasdlf.blöd failed with status 2 (No such file or directory)
    1os/os_posix.cc[78]: flock failed with status 11 (Resource temporarily unavailable)
    1os/os_posix.cc[187]: mmap failed with status 22 (Invalid argument)
    4db/db_local.cc[1198]: invalid key size (4 instead of 8)
    4db/db_local.cc[1198]: invalid key size (0 instead of 8)
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    4db/db_local.cc[1101]: invalid key size (4 instead of 8)
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[961]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[1314]: parameter 'key' must not be NULL
    4env/env_local.cc[455]: invalid key size 7 - must be 4 for UPS_RECORD_NUMBER32 databases
    4env/env_local.cc[463]: invalid key size 7 - must be 8 for UPS_RECORD_NUMBER64 databases
    4env/env_local.cc[455]: invalid key size 9 - must be 4 for UPS_RECORD_NUMBER32 databases
    4env/env_local.cc[463]: invalid key size 9 - must be 8 for UPS_RECORD_NUMBER64 databases
    4db/db_local.cc[1101]: invalid key size (4 instead of 8)
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    4db/db_local.cc[1198]: invalid key size (4 instead of 8)
    4db/db_local.cc[1198]: invalid key size (0 instead of 8)
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[961]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[1314]: parameter 'key' must not be NULL
    4env/env_local.cc[455]: invalid key size 7 - must be 4 for UPS_RECORD_NUMBER32 databases
    4env/env_local.cc[463]: invalid key size 7 - must be 8 for UPS_RECORD_NUMBER64 databases
    4env/env_local.cc[455]: invalid key size 9 - must be 4 for UPS_RECORD_NUMBER32 databases
    4env/env_local.cc[463]: invalid key size 9 - must be 8 for UPS_RECORD_NUMBER64 databases
    4db/db_local.cc[1101]: invalid key size (4 instead of 8)
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    4db/db_local.cc[1198]: invalid key size (8 instead of 4)
    4db/db_local.cc[1198]: invalid key size (0 instead of 4)
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    4db/db_local.cc[1101]: invalid key size (8 instead of 4)
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[961]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[1314]: parameter 'key' must not be NULL
    4env/env_local.cc[455]: invalid key size 7 - must be 4 for UPS_RECORD_NUMBER32 databases
    4env/env_local.cc[463]: invalid key size 7 - must be 8 for UPS_RECORD_NUMBER64 databases
    4env/env_local.cc[455]: invalid key size 9 - must be 4 for UPS_RECORD_NUMBER32 databases
    4env/env_local.cc[463]: invalid key size 9 - must be 8 for UPS_RECORD_NUMBER64 databases
    4db/db_local.cc[1101]: invalid key size (8 instead of 4)
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    4db/db_local.cc[1198]: invalid key size (8 instead of 4)
    4db/db_local.cc[1198]: invalid key size (0 instead of 4)
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[961]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[110]: key->size must be 0, key->data must be NULL
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    5upscaledb/upscaledb.cc[1314]: parameter 'key' must not be NULL
    4env/env_local.cc[455]: invalid key size 7 - must be 4 for UPS_RECORD_NUMBER32 databases
    4env/env_local.cc[463]: invalid key size 7 - must be 8 for UPS_RECORD_NUMBER64 databases
    4env/env_local.cc[455]: invalid key size 9 - must be 4 for UPS_RECORD_NUMBER32 databases
    4env/env_local.cc[463]: invalid key size 9 - must be 8 for UPS_RECORD_NUMBER64 databases
    4db/db_local.cc[1101]: invalid key size (8 instead of 4)
    5upscaledb/upscaledb.cc[67]: key->size != 0, but key->data is NULL
    4txn/txn_local.cc[337]: Txn cannot be committed till all attached Cursors are closed
    4txn/txn_local.cc[350]: Txn cannot be aborted till all attached Cursors are closed
    4db/db_local.cc[1405]: cannot close a Database that is modified by a currently active Txn
    4txn/txn_local.cc[337]: Txn cannot be committed till all attached Cursors are closed
    4txn/txn_local.cc[350]: Txn cannot be aborted till all attached Cursors are closed
    5upscaledb/upscaledb.cc[71]: invalid flag in key->flags
    
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    test is a Catch v1.4.0 host application.
    Run with -? for options
    
    -------------------------------------------------------------------------------
    TxnCursor/issue101Test
    -------------------------------------------------------------------------------
    txn_cursor.cpp:1151
    ...............................................................................
    
    txn_cursor.cpp:912: FAILED:
      REQUIRE( 0 == ups_cursor_move(cursor, &key, 0, 0x0002) )
    with expansion:
      0 == -8
    
    5upscaledb/upscaledb.cc[465]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[490]: filename is missing
    5upscaledb/upscaledb.cc[473]: cannot open an in-memory database
    1os/os_posix.cc[382]: opening file xxxx... failed with status 2 (No such file or directory)
    5upscaledb/upscaledb.cc[473]: cannot open an in-memory database
    5upscaledb/upscaledb.cc[481]: invalid flag UPS_ENABLE_DUPLICATE_KEYS (only allowed when creating a database
    5upscaledb/upscaledb.cc[481]: invalid flag UPS_ENABLE_DUPLICATE_KEYS (only allowed when creating a database
    1os/os_posix.cc[382]: opening file /usr failed with status 21 (Is a directory)
    4env/env_local.cc[233]: invalid file type
    5upscaledb/upscaledb.cc[303]: parameter 'env' must not be NULL
    5upscaledb/upscaledb.cc[399]: filename is missing
    5upscaledb/upscaledb.cc[339]: combination of UPS_IN_MEMORY and cache size != 0 not allowed
    5upscaledb/upscaledb.cc[311]: cannot create a file in read-only mode
    5upscaledb/upscaledb.cc[311]: cannot create a file in read-only mode
    5upscaledb/upscaledb.cc[353]: invalid page size - must be 1024 or a multiple of 2048
    1os/os_posix.cc[336]: creating file /home failed with status 21 (Is a directory)
    5upscaledb/upscaledb.cc[991]: cannot insert in a read-only database
    5upscaledb/upscaledb.cc[1040]: cannot erase from a read-only database
    5upscaledb/upscaledb.cc[1223]: cannot overwrite in a read-only database
    5upscaledb/upscaledb.cc[1334]: cannot insert to a read-only database
    5upscaledb/upscaledb.cc[1382]: cannot erase from a read-only database
    5upscaledb/upscaledb.cc[353]: invalid page size - must be 1024 or a multiple of 2048
    4db/db_local.cc[701]: key size too large; either increase page_size or decrease key size
    5upscaledb/upscaledb.cc[353]: invalid page size - must be 1024 or a multiple of 2048
    4db/db_local.cc[701]: key size too large; either increase page_size or decrease key size
    5upscaledb/upscaledb.cc[873]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[877]: function pointer must not be NULL
    5upscaledb/upscaledb.cc[908]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[912]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[916]: parameter 'record' must not be NULL
    5upscaledb/upscaledb.cc[957]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[71]: invalid flag in key->flags
    5upscaledb/upscaledb.cc[86]: invalid flag in record->flags
    5upscaledb/upscaledb.cc[969]: cannot combine UPS_OVERWRITE and UPS_DUPLICATE
    5upscaledb/upscaledb.cc[997]: database does not support duplicate keys (see UPS_ENABLE_DUPLICATE_KEYS)
    5upscaledb/upscaledb.cc[961]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[965]: parameter 'record' must not be NULL
    5upscaledb/upscaledb.cc[977]: function does not support flags UPS_DUPLICATE_INSERT_*; see ups_cursor_insert
    5upscaledb/upscaledb.cc[977]: function does not support flags UPS_DUPLICATE_INSERT_*; see ups_cursor_insert
    5upscaledb/upscaledb.cc[977]: function does not support flags UPS_DUPLICATE_INSERT_*; see ups_cursor_insert
    5upscaledb/upscaledb.cc[977]: function does not support flags UPS_DUPLICATE_INSERT_*; see ups_cursor_insert
    4db/db_local.cc[1114]: invalid key size (255 instead of 10)
    4db/db_local.cc[1114]: invalid key size (255 instead of 10)
    5upscaledb/upscaledb.cc[1022]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[1026]: parameter 'key' must not be NULL
    1os/os_posix.cc[78]: flock failed with status 11 (Resource temporarily unavailable)
    1os/os_posix.cc[78]: flock failed with status 11 (Resource temporarily unavailable)
    5upscaledb/upscaledb.cc[1082]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[1117]: cannot close Database if Cursors are still open
    5upscaledb/upscaledb.cc[891]: ups_set_compare_func only allowed for UPS_TYPE_CUSTOM databases!
    5upscaledb/upscaledb.cc[1137]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[1141]: parameter 'cursor' must not be NULL
    5upscaledb/upscaledb.cc[1170]: parameter 'src' must not be NULL
    5upscaledb/upscaledb.cc[1174]: parameter 'dest' must not be NULL
    5upscaledb/upscaledb.cc[1241]: parameter 'cursor' must not be NULL
    5upscaledb/upscaledb.cc[1202]: parameter 'cursor' must not be NULL
    5upscaledb/upscaledb.cc[1211]: parameter 'record' must not be NULL
    5upscaledb/upscaledb.cc[1274]: parameter 'cursor' must not be NULL
    5upscaledb/upscaledb.cc[1278]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[1310]: parameter 'cursor' must not be NULL
    5upscaledb/upscaledb.cc[1314]: parameter 'key' must not be NULL
    5upscaledb/upscaledb.cc[1318]: parameter 'record' must not be NULL
    5upscaledb/upscaledb.cc[1372]: parameter 'cursor' must not be NULL
    5upscaledb/upscaledb.cc[1481]: parameter 'cursor' must not be NULL
    4db/db_local.cc[1114]: invalid key size (4 instead of 7)
    4db/db_local.cc[1114]: invalid key size (4 instead of 7)
    4db/db_local.cc[1121]: invalid record size (12 instead of 22)
    4db/db_local.cc[1121]: invalid record size (8 instead of 4)
    4env/env_local.cc[448]: invalid key size 17083940 - must be < 0xffff
    ./2device/device_disk.h[141]: mmap failed with error -14, falling back to read/write
    4db/db_local.cc[784]: custom compare function is not yet registered
    4env/env_local.cc[657]: Database could not be opened
    4db/db_local.cc[1121]: invalid record size (2 instead of 1)
    4db/db_local.cc[1121]: invalid record size (4 instead of 2)
    4db/db_local.cc[1121]: invalid record size (8 instead of 4)
    4db/db_local.cc[1121]: invalid record size (16 instead of 8)
    4db/db_local.cc[1121]: invalid record size (8 instead of 4)
    4db/db_local.cc[1121]: invalid record size (16 instead of 8)
    4db/db_local.cc[1121]: invalid record size (2 instead of 1)
    4db/db_local.cc[1121]: invalid record size (4 instead of 2)
    4db/db_local.cc[1121]: invalid record size (8 instead of 4)
    4db/db_local.cc[1121]: invalid record size (16 instead of 8)
    4db/db_local.cc[1121]: invalid record size (8 instead of 4)
    4db/db_local.cc[1121]: invalid record size (16 instead of 8)
    4env/env_local.cc[514]: invalid record type UPS_TYPE_CUSTOM - use UPS_TYPE_BINARY instead
    4env/env_local.cc[514]: invalid record type UPS_TYPE_CUSTOM - use UPS_TYPE_BINARY instead
    1os/os_posix.cc[382]: opening file test.db failed with status 2 (No such file or directory)
    1os/os_posix.cc[244]: File::pread() failed with short read (No such file or directory)
    5upscaledb/upscaledb.cc[1690]: parameter 'db' must not be NULL
    5upscaledb/upscaledb.cc[1694]: parameter 'operations' must not be NULL
    4uqi/plugins.cc[91]: Failed to open library noexist: noexist: cannot open shared object file: No such file or directory
    4uqi/plugins.cc[91]: Failed to open library /usr/lib/libsnappy.so: /usr/lib/libsnappy.so: cannot open shared object file: No such file or directory
    4uqi/plugins.cc[91]: Failed to open library ./plugin.so: ./plugin.so: cannot open shared object file: No such file or directory
    4uqi/plugins.cc[91]: Failed to open library ./plugin.so: ./plugin.so: cannot open shared object file: No such file or directory
    4uqi/plugins.cc[91]: Failed to open library ./plugin.so: ./plugin.so: cannot open shared object file: No such file or directory
    4uqi/plugins.cc[91]: Failed to open library ./plugin.so: ./plugin.so: cannot open shared object file: No such file or directory
    4uqi/plugins.cc[91]: Failed to open library ./plugin.so: ./plugin.so: cannot open shared object file: No such file or directory
    -------------------------------------------------------------------------------
    Uqi/pluginTest
    -------------------------------------------------------------------------------
    uqi.cpp:862
    ...............................................................................
    
    uqi.cpp:893: FAILED:
      REQUIRE( upscaledb::PluginManager::import("./plugin.so", "test4") == 0 )
    with expansion:
      -500 == 0
    
    4uqi/plugins.cc[91]: Failed to open library ./plugin.so: ./plugin.so: cannot open shared object file: No such file or directory
    -------------------------------------------------------------------------------
    Uqi/parserTest
    -------------------------------------------------------------------------------
    uqi.cpp:912
    ...............................................................................
    
    uqi.cpp:943: FAILED:
      REQUIRE( upscaledb::PluginManager::import("./plugin.so", "test4") == 0 )
    with expansion:
      -500 == 0
    
    4env/env_local.cc[398]: cursor 'begin' uses wrong database
    ./4uqi/scanvisitorfactoryhelper.h[69]: function does not accept binary input
    ./4uqi/scanvisitorfactoryhelper.h[60]: function does not accept binary input
    ./4uqi/scanvisitorfactoryhelper.h[60]: function does not accept binary input
    ./4uqi/scanvisitorfactoryhelper.h[60]: function does not accept binary input
    ./4uqi/scanvisitorfactoryhelper.h[60]: function does not accept binary input
    4env/env_local.cc[531]: Uint32 compression only allowed for page size of 16k
    ===============================================================================
    test cases:    1142 |    1139 passed | 3 failed
    assertions: 8686166 | 8686163 passed | 3 failed
    
    make: *** [Makefile:869: test] Error 3
    

    Are you going to fix it?

    Kind regards, Zbigniew

    opened by romz-pl 1
Releases(release-2.2.1)
Velox is a new C++ vectorized database acceleration library aimed at optimizing query engines and data processing systems.

Velox is a C++ database acceleration library which provides reusable, extensible, and high-performance data processing components

Facebook Incubator 893 Jun 27, 2022
libmdbx is an extremely fast, compact, powerful, embedded, transactional key-value database, with permissive license

One of the fastest embeddable key-value ACID databases without WAL. libmdbx surpasses the legendary LMDB in terms of reliability, features and performance.

Леонид Юрьев (Leonid Yuriev) 1k Apr 13, 2022
ESE is an embedded / ISAM-based database engine that provides rudimentary table and indexed access.

Extensible-Storage-Engine A Non-SQL Database Engine The Extensible Storage Engine (ESE) is one of those rare codebases having proven to have a more th

Microsoft 780 Jun 13, 2022
An Embedded NoSQL, Transactional Database Engine

UnQLite - Transactional Embedded Database Engine

PixLab | Symisc Systems 1.7k Jun 23, 2022
MySQL Server, the world's most popular open source database, and MySQL Cluster, a real-time, open source transactional database.

Copyright (c) 2000, 2021, Oracle and/or its affiliates. This is a release of MySQL, an SQL database server. License information can be found in the

MySQL 7.9k Jun 24, 2022
A mini database for learning database

A mini database for learning database

Chuckie Tan 3 Nov 3, 2021
The database built for IoT streaming data storage and real-time stream processing.

The database built for IoT streaming data storage and real-time stream processing.

HStreamDB 501 Jun 29, 2022
C++11 wrapper for the LMDB embedded B+ tree database library.

lmdb++: a C++11 wrapper for LMDB This is a comprehensive C++ wrapper for the LMDB embedded database library, offering both an error-checked procedural

D.R.Y. C++ 257 Jun 16, 2022
C++ embedded memory database

ShadowDB: a C++ embedded in-memory database with a minimalist syntax. Supports custom indexes and compound condition queries ('<','<=','==','>=','>','!=',&&,||), and can quickly fork a copy of the data. // Simple ShadowDB example // ShadowDB is a C+

null 8 Jun 21, 2022
A friendly and lightweight C++ database library for MySQL, PostgreSQL, SQLite and ODBC.

QTL QTL is a C++ library for accessing SQL databases and currently supports MySQL, SQLite, PostgreSQL and ODBC. QTL is a lightweight library that con

null 155 Jun 26, 2022
Nebula Graph is a distributed, fast open-source graph database featuring horizontal scalability and high availability

Nebula Graph is an open-source graph database capable of hosting super large scale graphs with dozens of billions of vertices (nodes) and trillions of edges, with milliseconds of latency.

vesoft inc. 803 Jun 18, 2022
GridDB is a next-generation open source database that makes time series IoT and big data fast and easy.

Overview GridDB is a database for IoT with both a NoSQL interface and a SQL interface. Please refer to the GridDB Features Reference for functionality. This rep

GridDB 1.8k Jun 27, 2022
SiriDB is a highly-scalable, robust and super fast time series database

SiriDB is a highly-scalable, robust and super fast time series database. Built from the ground up, SiriDB uses a unique mechanism to operate without a global index and allows server resources to be added on the fly. SiriDB's unique query language includes dynamic grouping of time series for easy analysis over large amounts of time series.

SiriDB 464 Jun 13, 2022
ObjectBox C and C++: super-fast database for objects and structs

ObjectBox Embedded Database for C and C++ ObjectBox is a superfast C and C++ database for embedded devices (mobile and IoT), desktop and server apps.

ObjectBox 131 Jun 17, 2022
dqlite is a C library that implements an embeddable and replicated SQL database engine with high-availability and automatic failover

dqlite dqlite is a C library that implements an embeddable and replicated SQL database engine with high-availability and automatic failover. The acron

Canonical 3k Jun 27, 2022
DuckDB is an in-process SQL OLAP Database Management System

DuckDB is an in-process SQL OLAP Database Management System

DuckDB 5.4k Jun 27, 2022
YugabyteDB is a high-performance, cloud-native distributed SQL database that aims to support all PostgreSQL features

YugabyteDB is a high-performance, cloud-native distributed SQL database that aims to support all PostgreSQL features. It is a best fit for cloud-native OLTP (i.e. real-time, business-critical) applications that need absolute data correctness and require at least one of the following: scalability, high tolerance to failures, or globally-distributed deployments.

yugabyte 6.6k Jun 21, 2022
TimescaleDB is an open-source database designed to make SQL scalable for time-series data.

An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.

Timescale 13.3k Jun 27, 2022
Beryl-cli is a client for the BerylDB database server

Beryl-cli is a client for the BerylDB database server. It offers multiple commands and is designed to be fast and user-friendly.

BerylDB 11 Apr 21, 2022