MongoDB README

Welcome to MongoDB!

COMPONENTS

  mongod - The database server.
  mongos - Sharding router.
  mongo  - The database shell (uses interactive JavaScript).

UTILITIES

  install_compass   - Installs MongoDB Compass for your platform.

BUILDING

  See docs/building.md.

RUNNING

  For command line options invoke:

    $ ./mongod --help

  To run a single server database:

    $ sudo mkdir -p /data/db
    $ ./mongod
    $
    $ # The mongo JavaScript shell connects to localhost and the test database by default:
    $ ./mongo
    > help
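
  Once connected, you can create and query documents directly from the
  shell. A minimal sketch (the collection name "mycoll" is just an
  example):

    > db.mycoll.insert({name: "alice", age: 30})
    > db.mycoll.find({age: {$gt: 25}})
    > db.mycoll.count()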

INSTALLING COMPASS

  You can install Compass using the install_compass script packaged with MongoDB:

    $ ./install_compass

  This will download the appropriate MongoDB Compass package for your platform
  and install it.

DRIVERS

  Client drivers for most programming languages are available at
  https://docs.mongodb.com/manual/applications/drivers/. Use the shell
  ("mongo") for administrative tasks.

BUG REPORTS

  See https://github.com/mongodb/mongo/wiki/Submit-Bug-Reports.

PACKAGING

  Packages are created dynamically by the package.py script located in the
  buildscripts directory. This will generate RPM and Debian packages.

DOCUMENTATION

  https://docs.mongodb.com/manual/

CLOUD HOSTED MONGODB

  https://www.mongodb.com/cloud/atlas

FORUMS

  https://community.mongodb.com

    A forum for technical questions about using MongoDB.

  https://community.mongodb.com/c/server-dev

    A forum for technical questions about building and developing MongoDB.

LEARN MONGODB

  https://university.mongodb.com/

LICENSE

  MongoDB is free and the source is available. Versions released prior to
  October 16, 2018 are published under the AGPL. All versions released after
  October 16, 2018, including patch fixes for prior versions, are published
  under the Server Side Public License (SSPL) v1. See individual files for
  details.
Comments
  • SERVER-4785 maintain slowms at database level

    See issue at https://jira.mongodb.org/browse/SERVER-4785 and https://jira.mongodb.org/browse/SERVER-18946

    Here is the test result.

    [email protected]:~/github/mongo$ ./mongo

    > db.getProfilingStatus()
    { "was" : 0, "slowms" : 100, "sampleRate" : 1 }
    > db.setProfilingLevel(0, 1)
    { "was" : 0, "slowms" : 100, "sampleRate" : 1, "ok" : 1 }
    > db.getProfilingStatus()
    { "was" : 0, "slowms" : 1, "sampleRate" : 1 }
    > for(var i =0; i<10000; i++) {db.test.save({'i': i})}
    WriteResult({ "nInserted" : 1 })
    > use test2
    switched to db test2
    > db.getProfilingStatus()
    { "was" : 0, "slowms" : 100, "sampleRate" : 1 }    // test2: still 100
    > for(var i =0; i<10000; i++) {db.test.save({'i': i})}
    WriteResult({ "nInserted" : 1 })

    As a result, we capture the slow queries from the test database but not from the test2 database.

    2017-12-26T13:38:32.678+0800 I COMMAND [conn1] command test.test appName: "MongoDB Shell" command: insert { insert: "test", ordered: true, $db: "test" } ninserted:1 keysInserted:1 numYields:0 reslen:29 locks:{ Global: { acquireCount: { r: 3, w: 1 } }, Database: { acquireCount: { w: 1, R: 1 } }, Collection: { acquireCount: { w: 1 } } } protocol:op_msg 3ms

    opened by zhihuiFan 23
  •  SERVER-12064 Portability enhancements

    The first two commits are of particularly low risk: they should not regress anything, since the new gcc atomic builtins are only used on new architectures not previously supported. In the IA-32 and x86_64 cases, the inline assembly continues to be used.

    The benefit of these first two commits is that a build and smoke test on ARM (specifically Ubuntu armhf) now succeeds, provided that you use --use-system-v8 as the in-tree v8 does not support ARM.

    The third commit switches IA-32 and x86_64 to also use the gcc atomic builtins instead of the previous inline assembly, as requested by Andy Schwerin in SERVER-12064. The inline assembly remains and covers the case of a build with an older gcc that does not support atomic builtins. This commit is higher risk, since I change the code path on IA-32 and x86_64, and I think warrants a close review to make sure that my understanding of the required semantics is correct.

    For my purposes, I'm fine if you take just the first two commits, since I'll have enabled ARM and hopefully AArch64, too.

    However, taking the third commit also is probably a good long term plan for the project.

    opened by basak 23
  • SERVER-2459 add --excludeCollection to mongodump

    I wanted to resolve SERVER-2459, so I patched /src/mongo/tools/dump.cpp to add a new command line option, --excludeCollection. I also updated the man file debian/mongodump.1 to reflect this. I think I have made these changes following the style of the existing code. This is my first contribution to Mongo and if you see issues with my code, I would appreciate any feedback. Thanks for considering my patch.
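
    A hypothetical invocation with the proposed option might look like the
    following (the database and collection names are made up for
    illustration):

      $ ./mongodump --db mydb --excludeCollection logs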

    opened by tedb 23
  • SERVER-9306 support Invisible index in MongoDB

    Hi: This pull request is about https://jira.mongodb.org/browse/SERVER-9306 and https://jira.mongodb.org/browse/SERVER-26589

    The user story of this feature is something like this:

    1. A DBA wants to drop an index.
    2. The DBA is not confident that the index is truly unneeded.
    3. The collection is huge, so recreating the index after dropping it would take a long time.

    In this situation, we can make the index invisible for the next few days, which means MongoDB will still maintain the index for subsequent data changes, but the optimizer will no longer use it. If we find the index is still needed, we can quickly roll the change back by making it visible again. If the index turns out not to be needed during those N days, we can drop it safely.

    The reason the "indexStats" command in MongoDB 3.2 is not sufficient for this use case is that the optimizer may choose a wrong index, so indexStats will still show the index as being in use even though it could, and should, be dropped.

    Here is an overview for this feature:

    > for(var i = 0; i<10000; i++) { db.i.save({a: i})}
    > db.i.createIndex({a: 1})
    {
                   "createdCollectionAutomatically" : false,
                   "numIndexesBefore" : 1,
                   "numIndexesAfter" : 2,
                   "ok" : 1
    }
    
    > db.i.getIndices()
    [
                   {
                                   "v" : 2,
                                   "key" : {
                                                   "_id" : 1
                                   },
                                   "name" : "_id_",
                                   "ns" : "zhifan.i",
                                   "invisible" : false
                   },
                   {
                                   "v" : 2,
                                   "key" : {
                                                   "a" : 1
                                   },
                                   "name" : "a_1",
                                   "ns" : "zhifan.i",
                                   "invisible" : false
                   }
    ]
    
    > db.i.find({a: 100}).explain()
    {
                   "queryPlanner" : {
                                   "plannerVersion" : 1,
                                   "namespace" : "zhifan.i",
                                   "indexFilterSet" : false,
                                   "parsedQuery" : {
                                                   "a" : {
                                                                   "$eq" : 100
                                                   }
                                   },
                                   "winningPlan" : {
                                                   "stage" : "FETCH",
                                                   "inputStage" : {
                                                                   "stage" : "IXSCAN",
                                                                   "keyPattern" : {
                                                                                   "a" : 1
                                                                   },
                                                                   "indexName" : "a_1",
                                                                   "isMultiKey" : false,
                                                                   "multiKeyPaths" : {
                                                                                   "a" : [ ]
                                                                   },
                                                                   "isUnique" : false,
                                                                   "isSparse" : false,
                                                                   "isPartial" : false,
                                                                   "indexVersion" : 2,
                                                                   "direction" : "forward",
                                                                   "indexBounds" : {
                                                                                   "a" : [
                                                                                                   "[100.0, 100.0]"
                                                                                   ]
                                                                   }
                                                   }
                                   },
                                   "rejectedPlans" : [ ]
                   },
                   "serverInfo" : {
                                   "host" : "zhifan-dev16",
                                   "port" : 27017,
                                   "version" : "3.7.0-353-g2307b7a",
                                   "gitVersion" : "2307b7ae2495b4b1c0caa0a83d7be1323b975fe4"
                   },
                   "ok" : 1
    }
    
    > db.runCommand({"collMod": "i", "index": {"name": "a_1", "invisible": true}})
    { "invisible_old" : false, "invisible_new" : true, "ok" : 1 }
    > db.i.find({a: 100}).explain()
    {
                   "queryPlanner" : {
                                   "plannerVersion" : 1,
                                   "namespace" : "zhifan.i",
                                   "indexFilterSet" : false,
                                   "parsedQuery" : {
                                                   "a" : {
                                                                   "$eq" : 100
                                                   }
                                   },
                                   "winningPlan" : {
                                                   "stage" : "COLLSCAN",
                                                   "filter" : {
                                                                   "a" : {
                                                                                   "$eq" : 100
                                                                   }
                                                   },
                                                   "direction" : "forward"
                                   },
                                   "rejectedPlans" : [ ]
                   },
                   "serverInfo" : {
                                   "host" : "zhifan-dev16",
                                   "port" : 27017,
                                   "version" : "3.7.0-353-g2307b7a",
                                   "gitVersion" : "2307b7ae2495b4b1c0caa0a83d7be1323b975fe4"
                   },
                   "ok" : 1
    }
    > db.runCommand({"collMod": "i", "index": {"name": "a_1", "invisible": false}})
    { "invisible_old" : true, "invisible_new" : false, "ok" : 1 }
    > db.i.find({a: 100}).explain()
    {
                   "queryPlanner" : {
                                   "plannerVersion" : 1,
                                   "namespace" : "zhifan.i",
                                   "indexFilterSet" : false,
                                   "parsedQuery" : {
                                                   "a" : {
                                                                   "$eq" : 100
                                                   }
                                   },
                                   "winningPlan" : {
                                                   "stage" : "FETCH",
                                                   "inputStage" : {
                                                                   "stage" : "IXSCAN",
                                                                   "keyPattern" : {
                                                                                   "a" : 1
                                                                   },
                                                                   "indexName" : "a_1",
                                                                   "isMultiKey" : false,
                                                                   "multiKeyPaths" : {
                                                                                   "a" : [ ]
                                                                   },
                                                                   "isUnique" : false,
                                                                   "isSparse" : false,
                                                                   "isPartial" : false,
                                                                   "indexVersion" : 2,
                                                                   "direction" : "forward",
                                                                   "indexBounds" : {
                                                                                   "a" : [
                                                                                                   "[100.0, 100.0]"
                                                                                   ]
                                                                   }
                                                   }
                                   },
                                   "rejectedPlans" : [ ]
                   },
                   "serverInfo" : {
                                   "host" : "zhifan-dev16",
                                   "port" : 27017,
                                   "version" : "3.7.0-353-g2307b7a",
                                   "gitVersion" : "2307b7ae2495b4b1c0caa0a83d7be1323b975fe4"
                   },
                   "ok" : 1
    }
    

    All the existing test cases pass with python buildscripts/resmoke.py jstests/core/*.js, and the code is formatted with python buildscripts/clang_format.py format.

    opened by zhihuiFan 22
  • SERVER-991: Added support for $trim

    The $trim operator will limit the length of the arrays to which it is applied. A positive number indicates how many items to keep, counting from the beginning of the array, whereas a negative number keeps items from the end of the array.

    array:    [1, 2, 3, 4, 5, 6, 7, 8]
    trim(5):  keep 1 2 3 4 5, drop 6 7 8  -> [1, 2, 3, 4, 5]
    trim(-5): drop 1 2 3, keep 4 5 6 7 8  -> [4, 5, 6, 7, 8]

    $trim is special in that it is allowed (and required) to reference the same field as another update modifier. It cannot be used by itself, but the same effect can be achieved by doing a $pushAll: [] (pushing an empty list) at the same time as using $trim.

    $trim is always performed after updating the field with any other modifier.
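
    A hypothetical update using the proposed operator could look like the
    following; the exact field syntax ({field: N}) is an assumption based
    on the description above, since $trim is not part of the released
    server:

      > // push a new value and keep only the last 5 entries of "recent"
      > db.scores.update({_id: 1}, {$push: {recent: 42}, $trim: {recent: -5}})
      > // trim only: pair $trim with an empty $pushAll on the same field
      > db.scores.update({_id: 1}, {$pushAll: {recent: []}, $trim: {recent: -5}})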

    Test cases have been added.

    opened by boivie 21
  • SERVER-6233: Support Connection String URI Format for mongo shell

    Support Connection String URI Format for mongo auth.

    Example usage: mongo mongodb://username:[email protected]:port/database

    Jira Reference: https://jira.mongodb.org/browse/SERVER-6233

    opened by dutchakdev 16
  • SERVER-14802 Fixed problem with sleepmillis() on Windows due to default timer resolution being typically higher than 15ms

    All sleepmillis() calls with values less than the timer resolution will sleep AT LEAST the timer resolution value, making Mongo really slow in certain use cases, such as when implementing consumer/producer patterns that use capped collections to stream objects back and forth.

    opened by jsbattig 16
  • SERVER-5399 Essentially aliasing quit, exit, and function call variations. (+ misc cleanup)

    Previously, exit was handled directly in the DB shell, while quit was implemented as a shell function. This meant that the two did different things and were invoked differently: exit could be run as-is, whereas quit() had to be called as a function.
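
    A quick sketch of the behavior the change aims for, with both forms
    leaving the shell through the same code path:

      $ ./mongo
      > exit
      $ ./mongo
      > quit()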

    opened by amcfague 16
  • initial scons 3.0.1 and python3 build support

    This PR allows building mongo with the latest (3.0.1) SCons, which now uses Python 3 by default. There are still a lot of rough edges, but in the end it allows building the mongod binary.

    python version:

    [email protected]:~/rpmbuild/BUILD/mongo-patched> python3 --version
    Python 3.6.3
    [email protected]:~/rpmbuild/BUILD/mongo-patched> 
    

    scons version:

    [email protected]:~/rpmbuild/BUILD/mongo-patched> scons -v
    SCons by Steven Knight et al.:
            script: v3.0.1.74b2c53bc42290e911b334a6b44f187da698a668, 2017/11/14 13:16:53, by bdbaddog on hpmicrodog
            engine: v3.0.1.74b2c53bc42290e911b334a6b44f187da698a668, 2017/11/14 13:16:53, by bdbaddog on hpmicrodog
            engine path: ['/usr/lib/scons-3.0.1/SCons']
    Copyright (c) 2001 - 2017 The SCons Foundation
    [email protected]:~/rpmbuild/BUILD/mongo-patched>
    

    and finally running scons on my machine

    [email protected]:~/rpmbuild/BUILD/mongo-patched> scons -j 8 MONGO_VERSION=3.6.0  --disable-warnings-as-errors  --ssl
    scons: Reading SConscript files ...
    Mkdir("build/scons")
    scons version: 3.0.1
    python version: 3 6 3 final 0
    Checking whether the C compiler works... yes
    Checking whether the C++ compiler works... yes
    ...
    ...
    ...
    ranlib build/blah/mongo/db/query/libquery_common.a
    Generating library build/blah/mongo/db/libmongod_options.a
    ranlib build/blah/mongo/db/libmongod_options.a
    Linking build/blah/mongo/mongod
    Install file: "build/blah/mongo/mongod" as "mongod"
    scons: done building targets.
    [email protected]:~/rpmbuild/BUILD/mongo-patched> 
    

    and of course running built binary

    [email protected]:~/rpmbuild/BUILD/mongo-patched> ./mongod --dbpath /tmp/ --bind_ip 127.0.0.1
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] MongoDB starting : pid=2937 port=27017 dbpath=/tmp/ 64-bit host=pc
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] db version v3.6.0
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] git version: 9038d0a67ee578aa68ef8482b1fc98750d1007a6
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.0g-fips  2 Nov 2017
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] allocator: tcmalloc
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] modules: none
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] build environment:
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten]     distarch: x86_64
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten]     target_arch: x86_64
    2017-12-22T18:52:02.566+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "127.0.0.1" }, storage: { dbPath: "/tmp/" } }
    2017-12-22T18:52:02.566+0000 I STORAGE  [initandlisten] Detected data files in /tmp/ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
    2017-12-22T18:52:02.566+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=15542M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
    2017-12-22T18:52:02.637+0000 I STORAGE  [initandlisten] WiredTiger message [1513968722:637511][2937:0x7f87663729c0], txn-recover: Main recovery loop: starting at 6/4736
    2017-12-22T18:52:02.679+0000 I STORAGE  [initandlisten] WiredTiger message [1513968722:679111][2937:0x7f87663729c0], txn-recover: Recovering log 6 through 7
    2017-12-22T18:52:02.706+0000 I STORAGE  [initandlisten] WiredTiger message [1513968722:706386][2937:0x7f87663729c0], txn-recover: Recovering log 7 through 7
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 4096 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
    2017-12-22T18:52:02.730+0000 I CONTROL  [initandlisten] 
    2017-12-22T18:52:02.736+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/tmp/diagnostic.data'
    2017-12-22T18:52:02.737+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
    
    opened by bmanojlovic 15
  • SERVER-12007 add support for new ObjectId(Date|Number)

    See https://github.com/marcello3d/node-buffalo/pull/11, please. Then could you check the following shell script:

    var timespan = new ObjectId(Date.now() - 10*60*60*1000);
    db.myColl.find({_id: {'$gt': timespan}}).count();
    

    This would get the records inserted during the last 10 hours, based on their insertion date; it's useful.

    opened by yorkie 14
  • SERVER-9751 Force S2 geometry library to use Mongo's LOG instead of stderr

    Right now the S2 library just writes any debugging output and errors to std::cerr. This patch changes S2's logger to use Mongo's LOG macros.

    That way it is much easier to find errors in geometry primitives.

    opened by svetlyak40wt 14
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog post.

    If you have further questions you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 1
  • SERVER-71627: refresh incremental route info optimize(Performance increase 1000 times when chunks More than 2 million)

    In recent releases of MongoDB, a couple of optimizations have been made around refreshing incremental routing info; however, the performance issue was not rooted out for good, and big sharded clusters can still suffer slow queries because of it.

    The Tencent Cloud MongoDB team has (we hope) come up with an optimization that solves the problem by utilizing two-dimensional sorting & search. With the optimization, refreshing routing info introduces no noticeable latency: the refresh cost remains around 2 ms regardless of the data size of the sharded cluster.

    In the official release, refreshing routing info requires iterating the full ChunkInfo in the ChunkMap twice, plus iterating the ChunkVector once to free shared pointers. This can be very time- and resource-consuming once the number of chunks exceeds a certain threshold.

    The updated _chunkMap and algorithm in the proposed method require only one iteration over a very small portion of ChunkInfo, based on the changed chunks, to update the routing info: _chunkMap, _collectionVersion, and _shardVersions.

    opened by y123456yz 2
  • [Snyk] Upgrade http-server from 0.12.3 to 0.13.0

    Snyk has created this PR to upgrade http-server from 0.12.3 to 0.13.0.

    Merge advice: Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.


    • The recommended version is 1 version ahead of your current version.
    • The recommended version was released a year ago, on 2021-08-07.

    The recommended version fixes:

      Issue:              Denial of Service (DoS), SNYK-JS-ECSTATIC-540354
      Priority Score (*): 696/1000 (Proof of Concept exploit, has a fix available, CVSS 7.5)
      Exploit Maturity:   Proof of Concept

    (*) Note that the real score may have changed since the PR was raised.

    Release notes
    Package name: http-server
    • 0.13.0 - 2021-08-07

      A long time coming, the next major release for http-server! This will be the final release before a switch to actual semantic versioning. This release's major achievement is the internalization of the functionality of the now-abandoned ecstatic library, thus removing it as a dependency. Huge thanks to @zbynek for help on that front, as well as several other included changes.

      Breaking changes:

      • No longer sends the header server: http-server-${version} with every response

      New features:

      • All responses include Accept-Ranges: bytes to advertise support for partial requests

      Fixes

      • Removes dependency on the abandoned ecstatic library
      • Dependency upgrades to fix several security alerts
      • http-server -a 0.0.0.0 will now do what you told it to do, rather than overriding the address to 127.0.0.1
      • Will no longer serve binary files with a charset in the Content-Type, fixing serving WebAssembly files, among other issues
      • Support .mjs MimeType correctly

      Internal

      • Switched from Travis to GH Actions for CI
    • 0.12.3 - 2020-04-27

      Patch release to package man page

    from http-server GitHub release notes
    Commit messages
    Package name: http-server
    • 77243e7 0.13.0
    • a845834 Update dependency tree
    • f2c0dfb update milestone
    • aec3911 update security for release
    • 1f994c0 Merge pull request #591 from http-party/no_server_headers
    • c57654d Merge branch 'master' into no_server_headers
    • a4ec10b Merge pull request #713 from http-party/codeql-bye-bye
    • 6b87653 drop codeql
    • a7fdf0f remove server header
    • cd1afb7 Merge pull request #706 from zbynek/no-charset-binary
    • 46c0ce7 Merge pull request #705 from zbynek/patch-1
    • 9c51cb2 Merge branch 'master' into no_server_headers
    • cd84a85 revert
    • 7830ac2 Remove charset from header of binary files
    • b4991b8 Remove line break from LICENSE
    • fab3248 Merge pull request #704 from zbynek/patch-1
    • e9716d1 Account for CRLF in a test
    • 0f3e241 Merge pull request #642 from skyward-luke/master
    • 33fe714 Merge pull request #702 from http-party/replace-travis
    • e9ad269 Replace travis badge
    • f09c821 Update node.js.yml
    • 2c2ad02 Update node.js.yml
    • dad375d Update node.js.yml
    • 133a64c Update node.js.yml

    Compare


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open upgrade PRs.

    For more information:

    🧐 View latest project report

    🛠 Adjust upgrade PR settings

    🔕 Ignore this dependency or unsubscribe from future upgrade PRs

    opened by admin-token-bot 0
  • [Snyk] Upgrade bezier-js from 4.0.3 to 4.1.1

    Snyk has created this PR to upgrade bezier-js from 4.0.3 to 4.1.1.

    Merge advice: Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.


    • The recommended version is 2 versions ahead of your current version.
    • The recommended version was released a year ago, on 2021-04-30.
    Release notes
    Package name: bezier-js (from bezier-js GitHub release notes)
    Commit messages
    Package name: bezier-js
    • 707f7f5 4.1.1
    • 6af37cf Merge pull request #154 from pranavtotla/master
    • 9832af0 Fix: lerp() for 3D points where z is 0
    • cb4fe3e 4.1.0
    • 9448953 Merge pull request #150 from GrumpySailor/fix/commonjs
    • 094da68 Add Node Support Matrix
    • 43677c0 Switch to Conditional Exports
    • 4e5cd95 Fix CommonJS
    • c6a33e6 Merge pull request #147 from joostdecock/patch-1
    • b61c031 Fixed project name in README funding pitch
    • be81bfb Update package.json
    • fad9f76 Update FUNDING.md
    • a410da1 Create FUNDING.md
    • 70a8d79 Update package.json
    • 7e2ace1 Update README.md
    • 060cac5 Merge pull request #143 from ntamas/fix/derivative-3d
    • 1c34a81 derivative calculations now work for the 3D case as well

    Compare


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open upgrade PRs.

    For more information:

    🧐 View latest project report

    🛠 Adjust upgrade PR settings

    🔕 Ignore this dependency or unsubscribe from future upgrade PRs

    opened by admin-token-bot 0