C++ associative containers

Overview
This directory contains several hash-map implementations, similar in
API to SGI's hash_map class, but with different performance
characteristics.  sparse_hash_map uses very little space overhead: 1-2
bits per entry.  dense_hash_map is very fast, particularly on lookup.
(sparse_hash_set and dense_hash_set are the set versions of these
routines.)  On the other hand, these classes have requirements that
may not make them appropriate for all applications.

All these implementations use a hashtable with internal quadratic
probing.  This method is space-efficient -- there is no pointer
overhead -- and time-efficient for good hash functions.

COMPILING
---------
To compile test applications with these classes, run ./configure
followed by make.  To install these header files on your system, run
'make install'.  (On Windows, the instructions are different; see
README_windows.txt.)  See INSTALL for more details.

This code should work on any modern C++ system.  It has been tested on
Linux (Ubuntu, Fedora, RedHat, Debian), Solaris 10 x86, FreeBSD 6.0,
OS X 10.3 and 10.4, and Windows under both VC++7 and VC++8.

USING
-----
See the html files in the doc directory for small example programs
that use these classes.  It's enough to just include the header file:

   #include <sparsehash/sparse_hash_map> // or sparse_hash_set, dense_hash_map, ...
   google::sparse_hash_map<int, int> number_mapper;

and use the class the way you would other hash-map implementations.
(Though see "API" below for caveats.)

By default (you can change it via a flag to ./configure), these hash
implementations are defined in the google namespace.

API
---
The API for sparse_hash_map, dense_hash_map, sparse_hash_set, and
dense_hash_set is a superset of the API of SGI's hash_map class.
See doc/sparse_hash_map.html, et al., for more information about the
API.

The usage of these classes differs from SGI's hash_map, and other
hashtable implementations, in the following major ways:

1) dense_hash_map requires you to set aside one key value as the
   'empty bucket' value, set via the set_empty_key() method.  This
   *MUST* be called before you can use the dense_hash_map.  It is
   illegal to insert any elements into a dense_hash_map whose key is
   equal to the empty-key.

2) For both dense_hash_map and sparse_hash_map, if you wish to delete
   elements from the hashtable, you must set aside a key value as the
   'deleted bucket' value, set via the set_deleted_key() method.  If
   your hash-map is insert-only, there is no need to call this
   method.  If you call set_deleted_key(), it is illegal to insert any
   elements into a dense_hash_map or sparse_hash_map whose key is
   equal to the deleted-key.

3) These hash-map implementations support I/O.  See below.

There are also some smaller differences:

1) The constructor takes an optional argument that specifies the
   number of elements you expect to insert into the hashtable.  This
   differs from SGI's hash_map implementation, which takes an optional
   number of buckets.

2) erase() does not immediately reclaim memory.  As a consequence,
   erase() does not invalidate any iterators, making loops like this
   correct:
      for (it = ht.begin(); it != ht.end(); ++it)
        if (...) ht.erase(it);
   As another consequence, a series of erase() calls can leave your
   hashtable using more memory than it needs to.  The hashtable will
   automatically compact at the next call to insert(), but to
   manually compact a hashtable, you can call
      ht.resize(0)

I/O
---
In addition to the normal hash-map operations, sparse_hash_map can
read and write hashtables to disk.  (dense_hash_map also has the API,
but it has not yet been implemented, and writes will always fail.)

In the simplest case, writing a hashtable is as easy as calling two
methods on the hashtable:
   ht.write_metadata(fp);
   ht.write_nopointer_data(fp);

Reading in this data is equally simple:
   google::sparse_hash_map<...> ht;
   ht.read_metadata(fp);
   ht.read_nopointer_data(fp);

The above is sufficient if the key and value do not contain any
pointers: they are basic C types or agglomerations of basic C types.
If the key and/or value do contain pointers, you can still store the
hashtable by replacing write_nopointer_data() with a custom writing
routine.  See sparse_hash_map.html et al. for more information.

SPARSETABLE
-----------
In addition to the hash-map and hash-set classes, this package also
provides sparsetable.h, an array implementation that uses space
proportional to the number of elements in the array, rather than the
maximum element index.  It uses very little space overhead: 2 to 5
bits per entry.  See doc/sparsetable.html for the API.

RESOURCE USAGE
--------------
* sparse_hash_map has memory overhead of about 4 to 10 bits per 
  hash-map entry, assuming a typical average occupancy of 50%.
* dense_hash_map has a factor of 2-3 memory overhead: if your
  hashtable data takes X bytes, dense_hash_map will use 3X-4X memory
  total.

Hashtables tend to double in size when resizing, creating an
additional 50% space overhead.  dense_hash_map does in fact have a
significant "high water mark" memory use requirement, which is 6 times
the size of hash entries in the table when resizing (when reaching 
50% occupancy, the table resizes to double the previous size, and the 
old table (2x) is copied to the new table (4x)).

sparse_hash_map, however, is written to need very little space
overhead when resizing: only a few bits per hashtable entry.

PERFORMANCE
-----------
You can compile and run the included file time_hash_map.cc to examine
the performance of sparse_hash_map, dense_hash_map, and your native
hash_map implementation on your system.  One test against the
SGI hash_map implementation gave the following timing information for
a simple find() call:
   SGI hash_map:     22 ns
   dense_hash_map:   13 ns
   sparse_hash_map: 117 ns
   SGI map:         113 ns

See doc/performance.html for more detailed charts on resource usage
and performance data.

---
16 March 2005
(Last updated: 12 September 2010)
Comments
  • Limited to approx 2^31 entries?

    What steps will reproduce the problem?
    
    Fill table with entries--table full error at approx 2^31 entries.
    
    
    What version of the product are you using? On what operating system?
    
    sparsehash-1.8.1
    
    Mac OS X 10.6
    
    
    Please provide any additional information below.
    
    All was well with the table occupying approx 96GB RAM when I hit the table full 
    limit of approx 2^31 entries.  I'm curious whether there's any plan to lift 
    this nominal 32-bit limit.  (Yes, it did occur to me that I could get around 
    the limit by using multiple tables.)
    
    Regardless, kudos to the developers, it's a real champ--I paired it with 
    Murmurhash.
    
    

    Original issue reported on code.google.com by [email protected] on 20 Sep 2010 at 11:05

    Priority-Medium Type-Defect auto-migrated 
    opened by GoogleCodeExporter 20
  • sparse_hash_map serializer/deserializer

    Hey there, I have a question if it's possible to serializer/deserialize sparse_hash_map which consist of

    sparse_hash_map<unsigned long long, std::vector<Structure>>
    

    I tried to write my own serializer, but it seems it's not possible to serialize STL containers, did I miss something or simply I can't write any serializer for this map?

    question 
    opened by MitchesD 15
  • Problems compiling on VS2010

    What steps will reproduce the problem?
    1. Open solution in Visual C++ 2010
    2. Try to compile
    
    What is the expected output? What do you see instead?
    I've expected some warnings, but not an error. 
    1>  hashtable_test.cc
    1>...\vc\include\utility(260): error C2166: l-value specifies const object
    
    What version of the product are you using? On what operating system?
    Visual Studio 2010 Ultimate / Windows Seven x64
    
    Please provide any additional information below.
    
    

    Original issue reported on code.google.com by wolfulus on 28 Sep 2010 at 6:58

    Priority-Medium Type-Defect auto-migrated 
    opened by GoogleCodeExporter 13
  • macOS 10.13.6 installation: cannot run C++ compiled programs

    Hello,

    Running ./configure fails with "configure: error: cannot run C++ compiled programs." A bigger picture - I'm trying to compile ea-utils, which fails with the same error.

    The detailed output of ./configure is:

    checking whether make sets $(MAKE)... yes
    checking whether the C++ compiler works... yes
    checking for C++ compiler default output file name... a.out
    checking for suffix of executables...
    checking whether we are cross compiling... configure: error: in `/Users/mdozmorov/Documents/nobackup/ea-utils/clipper/sparsehash':
    configure: error: cannot run C++ compiled programs.
    

    And the full config.log is attached.

    Thanks for your help, Mikhail config.log

    opened by mdozmorov 11
  • failed backward compatibility

    AFAIK, include dir "google" is still available in 2.0.1 because of backward 
    compatibility and new codes should use the new include dir "sparsehash".
    
    However, this dir "google" is not correct as backward compatibility, because 
    the old version 1.12 had include dir 'google/sparsehash', not just 'google'. 
    So, yes, it is broken.
    
    In my case, PCSX2 failed to build, because it didn't find include dir. Here is 
    an open thread in PCSX2 forum (just in case you want to see other report): 
    http://forums.pcsx2.net/Thread-ZZogl-ERROR-Need-GL-EXT-framebuffer-object-for-mu
    ltiple-render-targets
    
    I'm currently using Sparsehash version 2.0.1 on Arch Linux 64-bit(all system 
    up-to-date)
    

    Original issue reported on code.google.com by [email protected] on 14 Feb 2012 at 1:28

    Type-Defect auto-migrated Priority-High 
    opened by GoogleCodeExporter 11
  • dense_set: insert after erase is too slow

    What steps will reproduce the problem?
    
    #include "google/dense_hash_set"
    #include "tr1/unordered_set"
    
    int main()
    {
    //  typedef std::tr1::unordered_set<int> TSet;
      typedef google::dense_hash_set<int> TSet;
      TSet set;
      set.set_empty_key(-1);
      set.set_deleted_key(-2);
      const int size = 1000000;
      set.rehash(size);
      for (int i = 0; i < size; ++i) set.insert(i);
      for (int i = 0; i < size; ++i) { set.erase(i); set.insert(i); }
      return 0;
    }
    
    What is the expected output? What do you see instead?
    time ./a.out ~ 3 sec (dense_set)
    time ./a.out ~ 0.01 sec (unordered_set)
    
    What version of the product are you using? On what operating system?
    Google Hash 1.11 amd64
    Linux DE3UNP-503726 2.6.32-26-generic #48-Ubuntu SMP Wed Nov 24 10:14:11 UTC 
    2010 x86_64 GNU/Linux
    gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
    Intel Core 2 Duo 3 GHz
    g++ -O3
    
    Please provide any additional information below.
    
    

    Original issue reported on code.google.com by [email protected] on 28 Jun 2011 at 5:39

    Type-Defect auto-migrated Priority-Low 
    opened by GoogleCodeExporter 11
  • problem with make install in macOS Sierra

    Good evening

    I am trying to install sparsehash, /.configure runs OK, and so does make, but when I try make install there is a fatal error:

    install: /usr/local/share/doc/sparsehash-2.0.2/AUTHORS: Permission denied make[2]: *** [install-dist_docDATA] Error 71 make[1]: *** [install-am] Error 2 make: *** [install] Error 2

    Can someone help please?

    opened by raqxavier 9
  • sparseconfig.h is empty.

    The updated build in 0.9 produces an empty sparseconfig.h file, which
    prevents use of any of the collections.
    
    To reproduce:
    1. Extract the 0.9 source.
    2. Run configure in the source directory.
    3. Run make in the source directory.
    4. Examine src/google/sparsehash/sparseconfig.h.  It will be empty.
    
    I suspect there is some problem with the awk command in the Makefile that
    replaces the grep/fgrep commands in previous versions.  I have not
    attempted to debug the awk command myself.
    
    Replacing the awk command with the grep/fgrep pipe from the 0.8 package
    (with a couple of path fixups) fixes the problem, and sparseconfig.h is
    created correctly.
    
    This bug happens both when building in place and when building in a
    separate build directory.
    
    

    Original issue reported on code.google.com by [email protected] on 10 Oct 2007 at 6:10

    Priority-Medium Type-Defect auto-migrated 
    opened by GoogleCodeExporter 9
  • Correct the memory usage claims to take into account allocator memory usage

    The default memory allocator used (libc_allocator_with_realloc) necessarily has some overhead, as the size of the block is not passed to free(). The memory usage claims are updated to take into account an overhead of up to 16 bytes per malloc'ed block.

    opened by greg7mdp 8
  • Template related warnings when compiling with "-W -Wall"

    Hello, I tried using sparse_hash_map to replace std::map in a big application, 
    and I got many warnings during compilation. I isolated the problem to the "-W 
    -Wall" compilation parameters and compiling google-sparsehash v1.7 with those 
    parameters also issues the same warnings. 
    
    They are huge, template related warnings so it's not meaningful to copy-paste 
    them here, just try compiling with those flags. 
    
    Unfortunately sparse_hash_map doesn't work for me as drop-in replacement to 
    std::map, sometimes I get garbage when reading from specific places... I can't 
    tell if it's related to these warnings but I'll look into the issue more and 
    report back if it is so.
    

    Original issue reported on code.google.com by [email protected] on 20 Jun 2010 at 1:57

    Priority-Medium Type-Defect auto-migrated 
    opened by GoogleCodeExporter 8
  • Latest version of xcode Command Line tools breaks sparsehash.

    What steps will reproduce the problem?
    1. Compile on Mac OS X Maverick with XCode Command Line Tools March 2014
    
    
    What is the expected output? What do you see instead?
    
    I expected correct compilation
    
    I have attached the log.
    
    What version of the product are you using? On what operating system?
    
    Latest SVN (116).
    Mac OS X 10.9.2
    Compiler: XCode Command Line Tools March 2014
    
    
    Please provide any additional information below.
    
    
    
    

    Original issue reported on code.google.com by [email protected] on 24 Mar 2014 at 5:52

    Attachments:

    Priority-Medium Type-Defect auto-migrated 
    opened by GoogleCodeExporter 7
  • error to make sparsehash-2.0.4

    I am going to install sparsehash-2.0.4 on Mac OS 10.15.7. Setting up C++ is not so easy on my PC. $ ./configure CXX="gcc" worked, but I cannot make with the following commands: $ CXX="gcc" make, $ CXX="g++" make, $ CC="gcc" make, and so on.

    The error is like below.

    (std::__1::basic_ostream<char, std::__1::char_traits >&, char const*, unsigned long) in template_util_unittest.o std::__1::ostreambuf_iterator<char, std::__1::char_traits > std::__1::__pad_and_output<char, std::__1::char_traits >(std::__1::ostreambuf_iterator<char, std::__1::char_traits >, char const*, char const*, char const*, std::__1::ios_base&, char) in template_util_unittest.o Dwarf Exception Unwind Info (__eh_frame) in template_util_unittest.o ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) make[1]: *** [template_util_unittest] Error 1 make: *** [all] Error 2

    I assume this is a problem with the Mac setup rather than the package, but any advice or suggestions would greatly help me. Thank you in advance.

    opened by nobulet 3
  • sparsetable indexed assignment

    Hello, I was trying to use sparsetable and come to this issue with indexed assignment:

    google::sparsetable<int> tab;
    tab.resize(10);
    tab[4] = 1;
    tab[0] = tab[4];
    printf("%d %d\n", tab.test(0), (int) tab[0]);

    the tab[0] remains unassigned; sure, I can do tab[0] = (int) tab[4]; and then everything is as I would expect.

    What is the correct assignment behavior of tab[0] = tab[4] as in example? Isn't it a bug?

    Thank you Pavel

    opened by xkr111 1
  • sparsehash 2.0.4 compiled with gcc 11.2 in c++2b mode segfaults on libc_allocator_with_realloc

    Hello!

    I compiled sparsehash 2.0.4 and tried running my pretty old tests which I think worked few years ago.

    But now I can observe crashes in 100% of runs of my test code: https://gist.github.com/pavel-odintsov/d72513432faf208828a387792053b623

    I just insert around 10m of keys and it crashes during process:

    ./traffic_structures_tests 
    google:dense_hashmap without preallocation: 0.5 mega ops per second
    Segmentation fault
    

    Stacktrace:

    google:dense_hashmap without preallocation: 0.2 mega ops per second
    
    Program received signal SIGSEGV, Segmentation fault.
    0x0000000000424aa0 in google::dense_hashtable<std::pair<unsigned int const, subnet_counter_t>, unsigned int, std::hash<unsigned int>, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SelectKey, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SetKey, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::dense_hashtable(google::dense_hashtable<std::pair<unsigned int const, subnet_counter_t>, unsigned int, std::hash<unsigned int>, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SelectKey, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SetKey, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > > const&, unsigned long) ()
    (gdb) 
    

    Full backtrace:

    bt
    #0  0x0000000000424aa0 in google::dense_hashtable<std::pair<unsigned int const, subnet_counter_t>, unsigned int, std::hash<unsigned int>, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SelectKey, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SetKey, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::dense_hashtable(google::dense_hashtable<std::pair<unsigned int const, subnet_counter_t>, unsigned int, std::hash<unsigned int>, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SelectKey, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SetKey, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > > const&, unsigned long)
        ()
    #1  0x0000000000425435 in google::dense_hashtable<std::pair<unsigned int const, subnet_counter_t>, unsigned int, std::hash<unsigned int>, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SelectKey, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SetKey, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::resize_delta(unsigned long) ()
    #2  0x0000000000420e19 in std::pair<unsigned int const, subnet_counter_t>& google::dense_hashtable<std::pair<unsigned int const, subnet_counter_t>, unsigned int, std::hash<unsigned int>, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SelectKey, google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::SetKey, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::find_or_insert<google::dense_hash_map<unsigned int, subnet_counter_t, std::hash<unsigned int>, eqint, google::libc_allocator_with_realloc<std::pair<unsigned int const, subnet_counter_t> > >::DefaultValue>(unsigned int const&) [clone .constprop.1] ()
    #3  0x0000000000420f3a in packet_collector_thread_google_dense_hash_map_preallocated() ()
    #4  0x00000000004208d5 in run_tests(std::function<void ()>) ()
    #5  0x000000000041fe13 in main ()
    
    

    Let me know if you need more details from me.

    opened by pavel-odintsov 0
  • fix build with -std=c++20

    C++20 removed many deprecated std::allocator members, causing sparsehash to fail when it is built with -std=c++20. This implements most of the uses of the removed members in terms of std::allocator_traits.

    There are no corresponding reference and const_reference members in std::allocator_traits, in which case used the actual value_type&/const value_type& instead.

    opened by igorsugak 0
  • Replace NULL by nullptr

    Replace C NULL by C++11 nullptr

    This allows building with -Werror,-Wzero-as-null-pointer-constant

    Code tested and validated in our codebase (https://github.com/algolia)

    opened by xroche 0
  • Add support for templated keys for various operations

    This includes find, count, equal_range

    The rationale is that HashFcn and EqualKey passed during type construction may be tuned to support more types compared by the fixed type key_type.

    • HashFcn would typically implement additional size_t operator()(const T& key) const member(s)
    • EqualKey would typically implement additional bool operator()(const T& other, const key_type& value) const member(s)

    Code tested and validated in our codebase (https://github.com/algolia)

    opened by xroche 0
Releases(sparsehash-2.0.4)
  • sparsehash-2.0.4(Aug 11, 2020)

    diff from 2.0.3

    95e5e93 Prevent compiler warning about writing to an object with no trivial copy-assignment
    a320767 Prevent compiler warning about calling realloc() on an object which cannot be relocated in memory
    4cb9240 Correct the memory usage claims to take into account allocator overhead (#132)
    90e60f0 Update test for large objects with a more reasonable hash function.
    d6684b2 Fix missing initialization of g_num_copies and g_num_hashes
    67cdd69 -Wformat-pedantic casts to quiet compiler warning
    4a36398 Pass by const ref not copy
    3151e11 Add test ResizeWithoutShrink and in-code comments.
    2d27620 Fix the bug of endless bucket doubling when min_load_factor=0.
    7b8afad Use unordered_map instead of hash_map for Visual Studio 2013 and later
    6c4151b amend spelling mistakes for insert() method
    
    Source code(tar.gz)
    Source code(zip)
  • sparsehash-2.0.3(Oct 12, 2015)

  • sparsehash-2.0.2(Sep 3, 2015)
