ClickHouse® is a free analytics DBMS for big data

Overview

ClickHouse — open source distributed column-oriented DBMS

ClickHouse® is an open-source column-oriented database management system that allows generating analytical data reports in real time.

Useful Links

  • The official website has a quick high-level overview of ClickHouse on the main page.
  • The tutorial shows how to set up and query a small ClickHouse cluster.
  • The documentation provides more in-depth information.
  • The YouTube channel has a lot of content about ClickHouse in video format.
  • Slack and Telegram allow you to chat with ClickHouse users in real time.
  • The blog contains various ClickHouse-related articles, as well as announcements and reports about events.
  • The code browser offers syntax highlighting and navigation.
  • Contacts can help you get your questions answered if there are any.
Comments
  • Builtin skim

    Changelog category (leave one):

    • Build/Testing/Packaging Improvement

    Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

    Integrate skim into the client/local

    Note that it can fail the client if skim itself fails; however, I haven't seen it panic, so let's try.

    P.S. Regarding adding USE_SKIM to the configure header instead of just a compile option for the target: this is better because it avoids recompiling lots of C++ headers, since we have to add the skim library as PUBLIC.

    Blocked by: #43498

    pr-build 
    opened by azat 0
  • Fix incorrect exception message

    Changelog category (leave one):

    • Improvement

    Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

    When ClickHouse requests a remote HTTP server, and it returns an error, the numeric HTTP code was not displayed correctly in the exception message. Closes #43919.

    pr-improvement 
    opened by alexey-milovidov 0
  • Double whitespace in error message.

    Describe the issue

    Code: 86. DB::HTTPException: Received error from remote server /?endpoint=DataPartsExchange%3A%2Fclickhouse%2Ftables%2F091f5ec4-13f0-485a-9afe-92cefaf15518%2F1%2Freplicas%2Fus&part=all_14200_14486_63&client_protocol_version=7&compress=false. HTTP status code:  Internal Server Error, body: Code: 232. DB::Exception: No part all_14200_14486_63 in table. (NO_SUCH_DATA_PART), Stack trace (when copying this message, always include the lines below):
    
    0. ./build_docker/../src/Common/Exception.cpp:77: DB::Exception::Exception(DB::Exception::MessageMasked const&, int, bool) @ 0xe48155a in /usr/bin/clickhouse
    1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&>::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) @ 0x82ab740 in /usr/bin/clickhouse
    2. ./build_docker/../src/Storages/MergeTree/DataPartsExchange.cpp:421: DB::DataPartsExchange::Service::findPart(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) @ 0x14a32489 in /usr/bin/clickhouse
    3. ./build_docker/../contrib/llvm-project/libcxx/include/__utility/swap.h:37: DB::DataPartsExchange::Service::processQuery(DB::HTMLForm const&, DB::ReadBuffer&, DB::WriteBuffer&, DB::HTTPServerResponse&) @ 0x14a3071a in /usr/bin/clickhouse
    4. ./build_docker/../contrib/llvm-project/libcxx/include/shared_mutex:370: DB::InterserverIOHTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTTPServerResponse&, DB::InterserverIOHTTPHandler::Output&) @ 0x14ff2cde in /usr/bin/clickhouse
    5. ./build_docker/../contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:815: DB::InterserverIOHTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x14ff3613 in /usr/bin/clickhouse
    6. ./build_docker/../contrib/poco/Foundation/include/Poco/AutoPtr.h:215: DB::HTTPServerConnection::run() @ 0x1505fed4 in /usr/bin/clickhouse
    7. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x17ebced4 in /usr/bin/clickhouse
    8. ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x17ebe87b in /usr/bin/clickhouse
    9. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x1804d967 in /usr/bin/clickhouse
    10. ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:277: Poco::ThreadImpl::runnableEntry(void*) @ 0x1804b39d in /usr/bin/clickhouse
    11. ? @ 0x7f833a232609 in ?
    12. __clone @ 0x7f833a157163 in ?
     (version 22.12.1.1071 (official build)). (RECEIVED_ERROR_FROM_REMOTE_IO_SERVER) (version 22.12.1.1071 (official build))
    

    Here:

    HTTP status code:  Internal Server Error
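    The doubled space suggests the numeric status was rendered as an empty field. A minimal Python illustration of the mechanism (the message template here is an assumption for illustration, not ClickHouse's actual format string):

```python
def http_error_message(status_code, reason):
    # When status_code renders as an empty string instead of a number,
    # the space after "code:" plus the separator space between the two
    # fields produce the doubled whitespace seen in the report.
    return f"HTTP status code: {status_code} {reason}"

print(http_error_message(500, "Internal Server Error"))
# -> HTTP status code: 500 Internal Server Error
print(http_error_message("", "Internal Server Error"))
# -> HTTP status code:  Internal Server Error
```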
    
    unexpected behaviour 
    opened by alexey-milovidov 0
  • `system.replicas` table can be slow

    If I select all the columns from the system.replicas table, and clickhouse-keeper is located on another continent, and I have many tables, a SELECT query can take multiple seconds.

    It should be parallelized across tables: https://clickhouse.com/codebrowser/ClickHouse/src/Storages/System/StorageSystemReplicas.cpp.html#154
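    The proposed parallelization can be sketched outside of ClickHouse. In this hypothetical Python model, `fetch_replica_status` stands in for the per-table Keeper round-trip (the function name and the 100 ms latency are illustrative assumptions, not ClickHouse code); fanning the calls out over a thread pool overlaps the round-trips instead of paying for them one by one:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_replica_status(table):
    # Stand-in for one cross-continent Keeper round-trip (~100 ms).
    time.sleep(0.1)
    return (table, "OK")

tables = [f"table_{i}" for i in range(16)]

# Sequential: 16 round-trips back to back.
start = time.monotonic()
sequential = [fetch_replica_status(t) for t in tables]
seq_elapsed = time.monotonic() - start

# Parallel: the round-trips overlap, bounded by the pool size.
# pool.map preserves input order, so the results are identical.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=16) as pool:
    parallel = list(pool.map(fetch_replica_status, tables))
par_elapsed = time.monotonic() - start

assert sequential == parallel
print(f"sequential: {seq_elapsed:.2f}s, parallel: {par_elapsed:.2f}s")
```

    With 16 tables at ~100 ms each, the sequential loop costs roughly the sum of the round-trips, while the pooled version finishes in roughly the time of a single one.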

    easy task performance 
    opened by alexey-milovidov 0
  • I suspect that one incorrectly configured disk in the storage configuration can prevent server from starting up.

    It will display an error while loading a completely unrelated table.

    2022.12.03 22:27:42.352350 [ 1526948 ] {} <Error> Application: Code: 33. DB::Exception: Cannot read all data. Bytes read: 0. Bytes expected: 4.: While checking access for disk s3_plain: Cannot attach table `system`.`crash_log` from metadata file /mnt/raid/clickhouse/store/3ee/3eea614d-363c-42db-a00e-8225338a67ac/crash_log.sql from query ATTACH TABLE system.crash_log UUID '2a74c804-0f50-49e9-b575-88e803c55483' (`event_date` Date, `event_time` DateTime, `timestamp_ns` UInt64, `signal` Int32, `thread_id` UInt64, `query_id` String, `trace` Array(UInt64), `trace_full` Array(String), `version` String, `revision` UInt32, `build_id` String) ENGINE = MergeTree ORDER BY (event_date, event_time) SETTINGS index_granularity = 8192. (CANNOT_READ_ALL_DATA), Stack trace (when copying this message, always include the lines below):
    
    0. ./build_docker/../src/Common/Exception.cpp:77: DB::Exception::Exception(DB::Exception::MessageMasked const&, int, bool) @ 0xe48155a in /usr/bin/clickhouse
    1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int, bool) @ 0x7e75dad in /usr/bin/clickhouse
    2. ./build_docker/../src/IO/ReadBuffer.h:0: DB::ReadBuffer::readStrict(char*, unsigned long) @ 0xe4df50a in /usr/bin/clickhouse
    3. ./build_docker/../contrib/llvm-project/libcxx/include/string:1499: DB::IDisk::checkAccessImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) @ 0x134969b8 in /usr/bin/clickhouse
    4. ./build_docker/../contrib/llvm-project/libcxx/include/string:1499: DB::IDisk::checkAccess() @ 0x13496643 in /usr/bin/clickhouse
    5. ./build_docker/../src/Disks/IDisk.cpp:0: DB::IDisk::startup(std::__1::shared_ptr<DB::Context const>, bool) @ 0x134961d1 in /usr/bin/clickhouse
    6. ./build_docker/../contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:603: std::__1::shared_ptr<DB::IDisk> std::__1::__function::__policy_invoker<std::__1::shared_ptr<DB::IDisk> (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<DB::IDisk>, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, std::__1::shared_ptr<DB::IDisk>>>> const&)>::__call_impl<std::__1::__function::__default_alloc_func<DB::registerDiskS3(DB::DiskFactory&, bool)::$_0, std::__1::shared_ptr<DB::IDisk> (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<DB::IDisk>, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, std::__1::shared_ptr<DB::IDisk>>>> const&)>>(std::__1::__function::__policy_storage const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>&&, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<DB::IDisk>, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, std::__1::shared_ptr<DB::IDisk>>>> const&) @ 0x1350069d in /usr/bin/clickhouse
    7. ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:0: DB::DiskFactory::create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<DB::IDisk>, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, std::__1::shared_ptr<DB::IDisk>>>> const&) const @ 0x13499624 in /usr/bin/clickhouse
    8. ./build_docker/../src/Disks/DiskSelector.cpp:0: DB::DiskSelector::initialize(Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>) @ 0x13667bad in /usr/bin/clickhouse
    9. ./build_docker/../contrib/llvm-project/libcxx/include/string:1499: DB::Context::getDiskSelector(std::__1::lock_guard<std::__1::mutex>&) const @ 0x136343a6 in /usr/bin/clickhouse
    10. ./build_docker/../src/Interpreters/Context.cpp:2843: DB::Context::getStoragePolicySelector(std::__1::lock_guard<std::__1::mutex>&) const @ 0x13612ad7 in /usr/bin/clickhouse
    11. ./build_docker/../src/Interpreters/Context.cpp:2804: DB::Context::getStoragePolicy(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) const @ 0x13634620 in /usr/bin/clickhouse
    12. ./build_docker/../src/Storages/MergeTree/MergeTreeData.cpp:0: DB::MergeTreeData::MergeTreeData(DB::StorageID const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings>>, bool, bool, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)>) @ 0x14ad3aac in /usr/bin/clickhouse
    13. ./build_docker/../src/Storages/StorageMergeTree.cpp:92: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, DB::StorageInMemoryMetadata const&, bool, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings>>, bool) @ 0x14e8a6ee in /usr/bin/clickhouse
    14. ./build_docker/../contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:1460: DB::create(DB::StorageFactory::Arguments const&) @ 0x14e80ec4 in /usr/bin/clickhouse
    15. ./build_docker/../src/Storages/StorageFactory.cpp:229: DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x14661bca in /usr/bin/clickhouse
    16. ./build_docker/../contrib/llvm-project/libcxx/include/string:2067: DB::createTableFromAST(DB::ASTCreateQuery, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool) @ 0x1341db25 in /usr/bin/clickhouse
    17. ./build_docker/../contrib/llvm-project/libcxx/include/string:1499: DB::DatabaseOrdinary::loadTableFromMetadata(std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, DB::QualifiedTableName const&, std::__1::shared_ptr<DB::IAST> const&, DB::LoadingStrictnessLevel) @ 0x1343b1a7 in /usr/bin/clickhouse
    
    unexpected behaviour 
    opened by alexey-milovidov 0
Releases (v22.11.2.30-stable)
🥑 ArangoDB is a native multi-model database with flexible data models for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions.

ArangoDB 12.7k Nov 28, 2022
StarRocks is a next-gen sub-second MPP database for full analysis scenarios, including multi-dimensional analytics, real-time analytics and ad-hoc queries, formerly known as DorisDB.

StarRocks 3.6k Nov 26, 2022
TengineGst is a streaming media analytics framework, based on GStreamer multimedia framework, for creating varied complex media analytics pipelines.

TengineGst is a streaming media analytics framework, based on the GStreamer multimedia framework, for creating varied complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations using the Tengine Toolkit Inference Engine backend, across varied architectures: CPU, iGPU and VPU.

OAID 66 Nov 22, 2022
VERY simple cross-platform C++ analytics for games (using Google Analytics)

Tiniest Analytics is a very simple to use, cross-platform (tested on win/osx/linux/ios/android) and basically very tiny analytics system written in C++ (less than 100 lines of code), made specifically for games. It uses libcurl to post events to your Google Analytics account.

Mihai Gosa 96 Oct 11, 2022
Kunlun distributed DBMS is a NewSQL OLTP relational distributed database management system

Kunlun distributed DBMS is a NewSQL OLTP relational distributed database management system. Application developers can use Kunlun to build IT systems that handle terabytes of data, without any effort on their part to implement data sharding, distributed transaction processing, distributed query processing, crash safety, high availability, strong consistency, or horizontal scalability. All these powerful features are provided by Kunlun.

zettadb 112 Nov 23, 2022
Tuplex is a parallel big data processing framework that runs data science pipelines written in Python at the speed of compiled code

Tuplex is a parallel big data processing framework that runs data science pipelines written in Python at the speed of compiled code. Tuplex has similar Python APIs to Apache Spark or Dask, but rather than invoking the Python interpreter, Tuplex generates optimized LLVM bytecode for the given pipeline and input data set.

Tuplex 791 Nov 15, 2022
Cytopia is a free, open source retro pixel-art city building game with a big focus on mods.

Cytopia is a free, open source retro pixel-art city building game with a big focus on mods. It utilizes a custom isometric rendering engine based on SDL2.

CytopiaTeam 1.6k Nov 27, 2022
oneAPI Data Analytics Library (oneDAL)

Intel® oneAPI Data Analytics Library: Installation | Documentation | Support | Examples | Samples | How to Contribute

oneAPI-SRC 529 Nov 29, 2022
Scylla is the real-time big data database that is API-compatible with Apache Cassandra and Amazon DynamoDB

Scylla is the real-time big data database that is API-compatible with Apache Cassandra and Amazon DynamoDB. Scylla embraces a shared-nothing approach that increases throughput and storage capacity to realize order-of-magnitude performance improvements and reduce hardware costs.

ScyllaDB 8.8k Dec 4, 2022
PGSpider: High-Performance SQL Cluster Engine for distributed big data.

PGSpider 132 Sep 8, 2022
GridDB is a next-generation open source database that makes time series IoT and big data fast and easy.

Overview GridDB is a database for IoT with both a NoSQL interface and a SQL interface. Please refer to GridDB Features Reference for functionality. This rep

GridDB 1.9k Nov 24, 2022
An open-source big data platform designed and optimized for the Internet of Things (IoT).

null 20k Nov 28, 2022
Analytics In Real-time (AIR) is a light-weight system profiling tool

Analytics In Real-time Analytics In Real-time (AIR) is a light-weight system profiling tool that provides a set of APIs for profiling performance, lat

null 2 Mar 3, 2022
Axis video analytics example applications

Axis Camera Application Platform (ACAP) 4 example applications that provide developers with the tools and knowledge to build their own solutions based on the ACAP Computer Vision SDK

Axis Communications 21 Nov 15, 2022
SQL powered operating system instrumentation, monitoring, and analytics.

osquery is a SQL powered operating system instrumentation, monitoring, and analytics framework. Available for Linux, macOS, Windows, and FreeBSD.

osquery 19.6k Dec 1, 2022
Quick Look extension for Markdown files on macOS Catalina and Big Sur.

QLMarkdown is a macOS Quick Look extension to preview Markdown files. It can also preview textbundle packages and rmarkdown (.rmd) files.

sbarex 626 Nov 28, 2022
Not a big fan of git. May create a nicer repo in the future.

os My x86-64 hobby operating system. Cooperative multitasking system with no user-mode support, everything runs on ring 0 (for now). Packed with a rea

tiagoporsch 13 Sep 9, 2022
A collection of scripts written in many different programming languages and each developed independently to perform very specific tasks (big or small)

Script Collection A collection of scripts written in many different programming languages and each developed independently to perform very specific ta

Giovanni Rebouças 5 Aug 31, 2021
Sorting algorithms & Big O

[![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] Sorting algorith

Joseph Mahiuha 1 Nov 7, 2021