Overview

TimescaleDB

TimescaleDB is an open-source database designed to make SQL scalable for time-series data. It is engineered up from PostgreSQL and packaged as a PostgreSQL extension, providing automatic partitioning across time and space (partitioning key), as well as full SQL support.

Timescale Cloud is our fully managed, hosted version of TimescaleDB, available in the cloud of your choice (pay-as-you-go, with free trial credits to start). To determine which option is best for you, see Timescale Products for more information about our Apache-2 version, TimescaleDB Community (self-hosted) and Timescale Cloud (hosted), including: feature comparisons, FAQ, documentation, and support.

Below is an introduction to TimescaleDB. For more information, see the Resources section later in this document.

For reference and clarity, all code files in this repository reference licensing in their header (either Apache License, Version 2.0 or Timescale License (TSL)). Apache-2 licensed binaries can be built by passing -DAPACHE_ONLY=1 to bootstrap.

Contributors welcome.

(To build TimescaleDB from source, see instructions in Building from source.)

Using TimescaleDB

TimescaleDB scales PostgreSQL for time-series data via automatic partitioning across time and space (partitioning key), yet retains the standard PostgreSQL interface.

In other words, TimescaleDB exposes what look like regular tables, but are actually only an abstraction (or a virtual view) of many individual tables comprising the actual data. This single-table view, which we call a hypertable, is composed of many chunks, which are created by partitioning the hypertable's data in either one or two dimensions: by a time interval, and by an (optional) "partition key" such as device id, location, user id, etc. (Architecture discussion)

Virtually all user interactions with TimescaleDB are with hypertables. Creating tables and indexes, altering tables, inserting data, selecting data, etc., can (and should) all be executed on the hypertable.

From the perspective of both use and management, TimescaleDB just looks and feels like PostgreSQL, and can be managed and queried as such.

Before you start

PostgreSQL's out-of-the-box settings are typically too conservative for modern servers and TimescaleDB. You should make sure your postgresql.conf settings are tuned, either by using timescaledb-tune or doing it manually.
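
As a sketch only (the values below are illustrative assumptions, not recommendations; timescaledb-tune derives appropriate ones from your hardware), tuned settings can also be applied from SQL via ALTER SYSTEM:

-- Illustrative values only; size these to your machine, or let timescaledb-tune pick them
ALTER SYSTEM SET shared_buffers = '2GB';         -- often ~25% of RAM
ALTER SYSTEM SET effective_cache_size = '6GB';   -- often ~75% of RAM
ALTER SYSTEM SET max_worker_processes = 16;      -- leave headroom for background workers
SELECT pg_reload_conf();  -- applies reloadable settings; shared_buffers needs a restart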

Creating a hypertable

-- Do not forget to create timescaledb extension
CREATE EXTENSION timescaledb;

-- We start by creating a regular SQL table
CREATE TABLE conditions (
  time        TIMESTAMPTZ       NOT NULL,
  location    TEXT              NOT NULL,
  temperature DOUBLE PRECISION  NULL,
  humidity    DOUBLE PRECISION  NULL
);

-- Then we convert it into a hypertable that is partitioned by time
SELECT create_hypertable('conditions', 'time');
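
The time column is the required first dimension; a second "space" dimension is optional. A hedged sketch of the two-dimensional case described above (the partition count and chunk interval are illustrative choices, not recommendations):

-- Partition by time and additionally by location into 4 space partitions,
-- with an explicit chunk interval (both parameters are illustrative).
-- Shown for a fresh table; a table can only be converted once.
SELECT create_hypertable('conditions', 'time', 'location', 4,
  chunk_time_interval => INTERVAL '1 day');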

Inserting and querying data

Inserting data into the hypertable is done via normal SQL commands:

INSERT INTO conditions(time, location, temperature, humidity)
  VALUES (NOW(), 'office', 70.0, 50.0);

SELECT * FROM conditions ORDER BY time DESC LIMIT 100;

SELECT time_bucket('15 minutes', time) AS fifteen_min,
    location, COUNT(*),
    MAX(temperature) AS max_temp,
    MAX(humidity) AS max_hum
  FROM conditions
  WHERE time > NOW() - interval '3 hours'
  GROUP BY fifteen_min, location
  ORDER BY fifteen_min DESC, max_temp DESC;

TimescaleDB also includes functions for time-series analysis that are not present in vanilla PostgreSQL (for example, the time_bucket function used above).
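
Two more such functions are first() and last(), which return the value of one column ordered by another. A small sketch against the conditions table above:

-- Latest temperature reading per location, using TimescaleDB's last()
SELECT location, last(temperature, time) AS latest_temp
  FROM conditions
  GROUP BY location;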

Installation

TimescaleDB is available pre-packaged for several platforms.

Timescale Cloud (database-as-a-service) is available via free trial. You create database instances in the cloud of your choice and use TimescaleDB to power your queries, automating common operational tasks and reducing management overhead.

We recommend following our detailed installation instructions.

To build from source, see instructions here.

Resources

Useful tools

  • timescaledb-tune: Helps set your PostgreSQL configuration settings based on your system's resources.
  • timescaledb-parallel-copy: Parallelize your initial bulk loading by using PostgreSQL's COPY across multiple workers.
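
For context, timescaledb-parallel-copy splits the work of a plain COPY across several workers. A single-worker baseline might look like the sketch below (the file path and CSV format are assumptions for illustration):

-- Single-process bulk load that timescaledb-parallel-copy parallelizes
-- ('/tmp/conditions.csv' is a hypothetical file)
COPY conditions (time, location, temperature, humidity)
  FROM '/tmp/conditions.csv' WITH (FORMAT csv);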

Issues
  • Amazon RDS, GCP Cloud SQL, Azure, Heroku support?

    RDS comes with pre-packaged extensions... are there any talks with AWS to include timescale in their supported extensions, given that timescaledb uses the Apache 2.0 license?

    question feedback-wanted 
    opened by jqnatividad 87
  • Segfault when drop_chunks with 1.7.1

    Relevant system information:

    • OS: Centos 7.5
    • PostgreSQL version (output of postgres --version): 12.3
    • TimescaleDB version (output of \dx in psql): 1.7.1
    • Installation method: YUM

    Describe the bug After the upgrade of Postgresql (10.9 -> 12.3) and TimescaleDB extension (1.6.1 -> 1.7.1) we see repeated segfaults when executing drop_chunks:

    Under GDB the segfault yields:

    Program received signal SIGSEGV, Segmentation fault.
    0x00007fa710efc259 in cmp_slices_by_dimension_id () from /usr/pgsql-12/lib/timescaledb-1.7.1.so
    

    To Reproduce Not certain how to reproduce it on any other setup. This happens currently in our staging environment.

    Expected behavior Chunks dropped and disk space freed.

    Actual behavior Segfault

    bug segfault 
    opened by akamensky 62
  • Timescale hypertables don't support ON CONFLICT ON CONSTRAINT upsert statement (breaking Hasura compatibility)

    Relevant system information:

    • OS: Ubuntu 18.04
    • PostgreSQL version: PostgreSQL 11.1
    • TimescaleDB version: 1.2.1
    • Installation method: docker timescale/timescaledb:1.2.1-pg11

    Describe the bug Here is my schema:

    CREATE TABLE data (
        "timestamp" timestamp with time zone NOT NULL,
        "uuid" uuid NOT NULL
    );
    
    ALTER TABLE ONLY data
        ADD CONSTRAINT data_pkey PRIMARY KEY ("uuid", "timestamp");
    
    SELECT create_hypertable('data', 'timestamp');
    

    To Reproduce Run this SQL

    INSERT INTO "data" ( "timestamp", "uuid" ) VALUES ( "2019-03-06T16:46:46.988Z", "9db6dbe1-cd14-1d17-38c0-4d4a36679fd6" ) ON CONFLICT ON CONSTRAINT "data_pkey" DO NOTHING RETURNING *
    

    Expected behavior Expected to correctly insert/upsert data without error, as it is doing with PostgreSQL 10.6 and timescale 1.2.1 (docker timescale/timescaledb:1.2.1-pg10).

    Actual behavior Postgres throws the following error:

    ERROR:  hypertables do not support ON CONFLICT statements that reference constraints
    HINT:  Use column names to infer indexes instead.
    

    Note: mention of my error in this file: https://github.com/timescale/timescaledb/blob/master/test/expected/upsert.out
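
    As the error hint suggests, a hedged workaround sketch (assuming the schema above) is to infer the index from the primary-key columns instead of naming the constraint:

    INSERT INTO "data" ( "timestamp", "uuid" )
      VALUES ( '2019-03-06T16:46:46.988Z', '9db6dbe1-cd14-1d17-38c0-4d4a36679fd6' )
      ON CONFLICT ("uuid", "timestamp") DO NOTHING  -- column list instead of ON CONSTRAINT
      RETURNING *;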

    feature-request hypertable Query Experience Team 
    opened by waterdrop01 35
  • [Bug]: Segfault when querying continuous aggregate in docker container

    What type of bug is this?

    Crash

    What subsystems and features are affected?

    Continuous aggregate, Query executor

    What happened?

    I'm seeing an inconsistent crash when running timescale on my development machine within docker. This happens most commonly when I run my unit test suite.

    I'm using the timescale image: timescale/timescaledb-postgis:latest-pg12

    When a particular query runs, it causes a postgres segfault which restarts the database and crashes the test suite.

    TimescaleDB version affected

    2.5.0

    PostgreSQL version used

    12.9

    What operating system did you use?

    Debian GNU/Linux 10 (buster)

    What installation method did you use?

    Docker

    What platform did you run on?

    Other

    Relevant log output and stack trace

    2022-02-03 21:32:36.063 UTC [14201] LOG:  TimescaleDB background worker launcher connected to shared catalogs
    2022-02-03 21:32:36.345 UTC [1] LOG:  server process (PID 14206) was terminated by signal 11: Segmentation fault
    2022-02-03 21:32:36.345 UTC [1] DETAIL:  Failed process was running: SELECT "hopthru_tripreference"."trip_id", COALESCE(SUM("hopthru_ca_boarding_routes_trips"."ons"), 0) AS "sum_ons", COALESCE(SUM("hopthru_ca_boarding_routes_trips"."offs"), 0) AS "sum_offs", COUNT(DISTINCT "hopthru_ca_boarding_routes_trips"."service_date") AS "total_days", COALESCE(SUM("hopthru_ca_boarding_routes_trips"."passenger_distance_travelled"), 0) AS "sum_passenger_distance_travelled", COALESCE(SUM("hopthru_ca_boarding_routes_trips"."load"), 0) AS "sum_load", MAX("hopthru_ca_boarding_routes_trips"."maxload") AS "maxload", SUM("hopthru_ca_boarding_routes_trips"."load_count") AS "load_count" FROM "hopthru_ca_boarding_routes_trips" INNER JOIN "hopthru_tripreference" ON ("hopthru_ca_boarding_routes_trips"."trip_reference_id" = "hopthru_tripreference"."id") INNER JOIN "hopthru_routereference" ON ("hopthru_ca_boarding_routes_trips"."route_reference_id" = "hopthru_routereference"."id") INNER JOIN "hopthru_routemaster" ON ("hopthru_routereference"."route_id" = "hopthru_routemaster"."id") WHERE ("hopthru_ca_bo
    2022-02-03 21:32:36.345 UTC [1] LOG:  terminating any other active server processes
    2022-02-03 21:32:36.345 UTC [14199] WARNING:  terminating connection because of crash of another server process
    2022-02-03 21:32:36.345 UTC [14199] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
    2022-02-03 21:32:36.345 UTC [14199] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
    
    -----
    
    Note, here's some other relevant log when reproducing this:
    
    2022-02-03 22:56:59.866 UTC [243] ERROR:  invalid memory alloc request size 18446744070017253376
    
    -----
    
    Stack trace from a core dump:
    
    postgres@cfe2c1e7f013:~$ gdb postgres /tmp/core.cfe2c1e7f013.postgres.1643929681
    GNU gdb (Ubuntu 10.1-2ubuntu2) 10.1.90.20210411-git
    Copyright (C) 2021 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    Type "show copying" and "show warranty" for details.
    This GDB was configured as "x86_64-linux-gnu".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <https://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
        <http://www.gnu.org/software/gdb/documentation/>.
    
    For help, type "help".
    Type "apropos word" to search for commands related to "word"...
    Reading symbols from postgres...
    Reading symbols from /usr/lib/debug/.build-id/e4/51d1e56e9d52f8e6a98c797a34d02043aa279e.debug...
    
    warning: Can't open file /dev/shm/PostgreSQL.1664165300 during file-backed mapping note processing
    
    warning: Can't open file /SYSV0052e2c1 (deleted) during file-backed mapping note processing
    
    warning: Can't open file /dev/zero (deleted) during file-backed mapping note processing
    [New LWP 206]
    Warning: couldn't activate thread debugging using libthread_db: Cannot find new threads: generic error
    
    warning: File "/usr/lib/x86_64-linux-gnu/libthread_db-1.0.so" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load".
    To enable execution of this file add
    	add-auto-load-safe-path /usr/lib/x86_64-linux-gnu/libthread_db-1.0.so
    line to your configuration file "/home/postgres/.gdbinit".
    To completely disable this security protection add
    	set auto-load safe-path /
    line to your configuration file "/home/postgres/.gdbinit".
    For more information about this security protection see the
    "Auto-loading safe path" section in the GDB manual.  E.g., run from the shell:
    	info "(gdb)Auto-loading safe path"
    
    warning: Unable to find libthread_db matching inferior's thread library, thread debugging will not be available.
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    Core was generated by `postgres:'.
    Program terminated with signal SIGSEGV, Segmentation fault.
    #0  0x000055d9533f5a2a in accum_sum_final (accum=0x55d9552157e0, result=result@entry=0x7ffc044a6e30) at utils/adt/./build/../src/backend/utils/adt/numeric.c:9473
    9473	utils/adt/./build/../src/backend/utils/adt/numeric.c: No such file or directory.
    (gdb) backtrace
    #0  0x000055d9533f5a2a in accum_sum_final (accum=0x55d9552157e0, result=result@entry=0x7ffc044a6e30) at utils/adt/./build/../src/backend/utils/adt/numeric.c:9473
    #1  0x000055d9533f7a9b in numeric_sum (fcinfo=<optimized out>) at utils/adt/./build/../src/backend/utils/adt/numeric.c:4948
    #2  0x00007fdb2bf86f88 in tsl_finalize_agg_ffunc (fcinfo=0x7ffc044a6f20) at /build/timescaledb/tsl/src/partialize_finalize.c:550
    #3  0x000055d9531c467f in finalize_aggregate (aggstate=aggstate@entry=0x55d954498408, peragg=peragg@entry=0x55d95506c3e0, pergroupstate=0x55d95519f988, resultVal=resultVal@entry=0x55d9542e19a8,
        resultIsNull=resultIsNull@entry=0x55d9542e19d5) at executor/./build/../src/backend/executor/nodeAgg.c:954
    #4  0x000055d9531c7a37 in finalize_aggregates (aggstate=aggstate@entry=0x55d954498408, peraggs=peraggs@entry=0x55d95506c228, pergroup=pergroup@entry=0x55d95519f938) at executor/./build/../src/backend/executor/nodeAgg.c:1163
    #5  0x000055d9531c8188 in agg_retrieve_hash_table (aggstate=aggstate@entry=0x55d954498408) at executor/./build/../src/backend/executor/nodeAgg.c:2067
    #6  0x000055d9531c8878 in agg_retrieve_direct (aggstate=<optimized out>) at executor/./build/../src/backend/executor/nodeAgg.c:1682
    #7  ExecAgg (pstate=0x55d954498408) at executor/./build/../src/backend/executor/nodeAgg.c:1563
    #8  0x000055d9531ba08a in ExecScan (node=0x55d9544982d8, accessMtd=0x55d9531e0380 <SubqueryNext>, recheckMtd=0x55d9531dd950 <SubqueryRecheck>) at executor/./build/../src/backend/executor/execScan.c:200
    #9  0x000055d9531c6aa9 in ExecProcNode (node=0x55d9544982d8) at executor/./build/../src/include/executor/executor.h:242
    #10 ExecAppend (pstate=0x55d954497868) at executor/./build/../src/backend/executor/nodeAppend.c:292
    #11 0x000055d9531e1ff9 in ExecProcNode (node=0x55d954497868) at executor/./build/../src/include/executor/executor.h:242
    #12 ExecNestLoop (pstate=0x55d9543e4748) at executor/./build/../src/backend/executor/nodeNestloop.c:160
    #13 0x000055d9531e20dd in ExecProcNode (node=0x55d9543e4748) at executor/./build/../src/include/executor/executor.h:242
    #14 ExecNestLoop (pstate=0x55d9543e4588) at executor/./build/../src/backend/executor/nodeNestloop.c:109
    #15 0x000055d9531e20dd in ExecProcNode (node=0x55d9543e4588) at executor/./build/../src/include/executor/executor.h:242
    #16 ExecNestLoop (pstate=0x55d9543e43c8) at executor/./build/../src/backend/executor/nodeNestloop.c:109
    #17 0x000055d9531c7394 in ExecProcNode (node=0x55d9543e43c8) at executor/./build/../src/include/executor/executor.h:242
    #18 fetch_input_tuple (aggstate=aggstate@entry=0x55d9543e3f60) at executor/./build/../src/backend/executor/nodeAgg.c:406
    #19 0x000055d9531c8a5d in agg_retrieve_direct (aggstate=0x55d9543e3f60) at executor/./build/../src/backend/executor/nodeAgg.c:1748
    #20 ExecAgg (pstate=0x55d9543e3f60) at executor/./build/../src/backend/executor/nodeAgg.c:1563
    #21 0x000055d9531b1b8d in ExecProcNode (node=0x55d9543e3f60) at executor/./build/../src/include/executor/executor.h:242
    #22 ExecutePlan (execute_once=<optimized out>, dest=0x55d9550c8678, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x55d9543e3f60,
        estate=0x55d9543e3c30) at executor/./build/../src/backend/executor/execMain.c:1632
    #23 standard_ExecutorRun (queryDesc=0x55d9543e92c0, direction=<optimized out>, count=0, execute_once=<optimized out>) at executor/./build/../src/backend/executor/execMain.c:350
    #24 0x000055d953349065 in ExecutorRun (execute_once=<optimized out>, count=0, direction=ForwardScanDirection, queryDesc=0x55d9543e92c0) at executor/./build/../src/backend/executor/execMain.c:294
    #25 PortalRunSelect (portal=0x55d95406cdd0, forward=<optimized out>, count=0, dest=<optimized out>) at tcop/./build/../src/backend/tcop/pquery.c:938
    #26 0x000055d95334acb0 in PortalRun (portal=0x55d95406cdd0, count=9223372036854775807, isTopLevel=<optimized out>, run_once=<optimized out>, dest=0x55d9550c8678, altdest=0x55d9550c8678, completionTag=0x7ffc044a7d90 "")
        at tcop/./build/../src/backend/tcop/pquery.c:779
    #27 0x000055d95334534b in exec_simple_query (
        query_string=0x55d954003b20 "SELECT \"hopthru_tripreference\".\"trip_id\", COALESCE(SUM(\"hopthru_ca_boarding_routes_trips\".\"ons\"), 0) AS \"sum_ons\", COALESCE(SUM(\"hopthru_ca_boarding_routes_trips\".\"offs\"), 0) AS \"sum_offs\", COUNT(DIST"...) at tcop/./build/../src/backend/tcop/postgres.c:1215
    #28 0x000055d95334d4ad in PostgresMain (argc=<optimized out>, argv=<optimized out>, dbname=<optimized out>, username=<optimized out>) at tcop/./build/../src/backend/tcop/postgres.c:4271
    #29 0x000055d9532c14ea in BackendRun (port=0x55d95402be30) at postmaster/./build/../src/backend/postmaster/postmaster.c:4510
    #30 BackendStartup (port=0x55d95402be30) at postmaster/./build/../src/backend/postmaster/postmaster.c:4193
    #31 ServerLoop () at postmaster/./build/../src/backend/postmaster/postmaster.c:1725
    #32 0x000055d9532c2470 in PostmasterMain (argc=1, argv=<optimized out>) at postmaster/./build/../src/backend/postmaster/postmaster.c:1398
    #33 0x000055d952fd47e7 in main (argc=1, argv=0x55d953ff6d20) at main/./build/../src/backend/main/main.c:228
    

    How can we reproduce the bug?

    No response

    bug continuous_aggregate segfault docker-build 2.5.0 Core Database Team 
    opened by brockhaywood 34
  • TimescaleDB 2.x Continuous Aggregation long recalculation

    We have an issue with the Continuous Aggregation feature after the TimescaleDB 2.x upgrade. We store data with 1-second granularity in a hypertable and do rollups to 10m and 1h intervals with continuous aggregation views. Before 2.0 we had no issues with refreshes, but with 2.0 rollups get stuck after inserting a bunch of historical data. We've looked through the code and were able to tell that the issue arises in the invalidation logic.

    Intervals

    Our system does separate inserts of data points rounded to a 1-second grid (shown in red on the diagram), and this action leads to creating invalidation records in the _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log table. Because all of these records are 0-length intervals, the aggregation process cannot merge them into bigger intervals. This leads to adding dozens of records into _timescaledb_catalog.continuous_aggs_materialization_invalidation_log, and the continuous aggregation job invalidates the same 10-minute interval again and again for each changed record. Maybe the invalidation process could be more clever and join intervals that fall into the same bucket, to avoid recalculating the same intervals over and over again? Another thought is to recalculate the bucket once, taking into account all intervals currently invalidating it, rather than once per invalidation log record.

    I hope this problem is not ours alone. Please help us understand and investigate the problem properly.

    postgres=# SELECT version();
                                                     version
    ---------------------------------------------------------------------------------------------------------
     PostgreSQL 12.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit
    (1 row)
    postgres=# \dx
                                          List of installed extensions
        Name     | Version |   Schema   |                            Description
    -------------+---------+------------+-------------------------------------------------------------------
     plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
     timescaledb | 2.0.0   | public     | Enables scalable inserts and complex queries for time-series data
    
    bug continuous_aggregate 
    opened by Antiarchitect 33
  • POLL: For what additional platforms should we offer binary releases?

    We often get various requests to offer binary releases, with clear package management, for different platforms. It would be useful to get feedback from the community as to what platforms they'd like a native binary package for.

    So, please signify your interest with a thumbs-up in the comments below. (Or suggest a new platform if we missed it.)

    feedback-wanted packaging 
    opened by mfreed 32
  • Segfault returning text values from hypertable insert

    Relevant system information:

    • OS: [e.g. Ubuntu 16.04, Windows 10 x64, etc] Linux 67c644fdfcfc 4.20.1-arch1-1-ARCH #1 SMP PREEMPT Wed Jan 9 20:25:43 UTC 2019 x86_64 GNU/Linux
    • PostgreSQL version (output of postgres --version): PostgreSQL 10.6 (Debian 10.6-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
    • TimescaleDB version (output of \dx in psql): 1.2.1
    • Installation method: [e.g., "using Docker", "apt install", "source"] source build in docker container (from postgres 10)

    Describe the bug Seeing segfaults when returning a lot of text columns from an insert on a hypertable. The same table without the hypertable applied seems to work fine.

    INSERT INTO events (
    
             organization_id
            , conversation_id
            , content
            , related_id
            , related_type
            , event_type
            , medium
            , is_staff
            , created_by
            , correlation_id
            , persona
    
            )
            VALUES
            (
    
              'be42e487-ce61-4306-b914-8071b3696c63'
            , '9c43092b-9e12-47e0-87ac-a1bcc37eb727'
            , 'hello'
            , null
            , 'member'
            , 'conversation_created'
            , 'messenger'
            , true
            , 'f6a8bf32-9b81-4814-87e6-c38210fc4d06'
            , NULL
            , 'agent'
    
            )
    
            RETURNING
            id, event_type, content, related_id, related_type, conversation_id, organization_id, medium, created_at, created_by, redacted, is_staff
    
    backtrace:
    /usr/lib/postgresql/10/lib/pipelinedb.so(debug_segfault+0x33)[0x7f0ce609ca53]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x110e0)[0x7f0cee4dc0e0]
    postgres: help help 172.22.0.1(48228) INSERT(+0xbc642)[0x561d197b2642]
    postgres: help help 172.22.0.1(48228) INSERT(slot_getallattrs+0x45)[0x561d197b4185]
    postgres: help help 172.22.0.1(48228) INSERT(+0xbf8d1)[0x561d197b58d1]
    postgres: help help 172.22.0.1(48228) INSERT(+0x3814ae)[0x561d19a774ae]
    postgres: help help 172.22.0.1(48228) INSERT(+0x381573)[0x561d19a77573]
    postgres: help help 172.22.0.1(48228) INSERT(PortalRun+0x300)[0x561d19a78c30]
    postgres: help help 172.22.0.1(48228) INSERT(+0x37e94a)[0x561d19a7494a]
    postgres: help help 172.22.0.1(48228) INSERT(PostgresMain+0x1a4f)[0x561d19a7674f]
    postgres: help help 172.22.0.1(48228) INSERT(+0xb399d)[0x561d197a999d]
    postgres: help help 172.22.0.1(48228) INSERT(PostmasterMain+0xfea)[0x561d19a0534a]
    postgres: help help 172.22.0.1(48228) INSERT(main+0x854)[0x561d197aaf74]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f0cec0b02e1]
    postgres: help help 172.22.0.1(48228) INSERT(_start+0x2a)[0x561d197ab02a]
    
    'utf-8' codec can't decode byte 0x86 in position 0: invalid start byte
    
    ERROR:  invalid memory alloc request size 18446744073709551613
    

    To Reproduce Steps to reproduce the behavior:

    CREATE TABLE events
    (
        id uuid NOT NULL DEFAULT gen_random_uuid(),
        conversation_id uuid NOT NULL,
        organization_id uuid NOT NULL,
        created_by uuid NOT NULL,
        redacted_by uuid,
        related_id uuid,
        sorter bigint NOT NULL,
        created_at timestamp with time zone NOT NULL DEFAULT (now())::timestamp(6) with time zone,
        redacted boolean NOT NULL DEFAULT false,
        is_staff boolean NOT NULL DEFAULT false,
        persona persona, -- DOMAIN AS TEXT
        medium medium NOT NULL,  -- DOMAIN AS TEXT
        related_type event_related_type NOT NULL,  -- DOMAIN AS TEXT
        event_type text NOT NULL,
        correlation_id text,
        page_cursor text,
        content text
    );
    
    SELECT create_hypertable(
      'events'
    , 'created_at'
    , chunk_time_interval => interval '4 weeks'
    , if_not_exists => TRUE
    );
    

    inserting with return values will almost always cause a segfault

    INSERT INTO events (
             organization_id
            , conversation_id
            , content
            , related_id
            , related_type
            , event_type
            , medium
            , is_staff
            , created_by
            , correlation_id
            , persona
            )
    VALUES
            (
              'be42e487-ce61-4306-b914-8071b3696c63'
            , '9c43092b-9e12-47e0-87ac-a1bcc37eb727'
            , 'hello'
            , null
            , 'member'
            , 'conversation_created'
            , 'messenger'
            , true
            , 'f6a8bf32-9b81-4814-87e6-c38210fc4d06'
            , NULL
            , 'agent'
            )
    RETURNING
            id, event_type, content, related_id, related_type, conversation_id, organization_id, medium, created_at, created_by, redacted, is_staff, page_cursor, sorter
    

    If I remove the returning clause it will be just fine. I can also select * from events just fine. But inserting with the return statement will always crash. If I remove a number of the text columns from the return statement it will also get better. We've had to downgrade from 1.2.1 back to 1.0.0

    It seems like there is maybe an issue with toasting / detoasting?

    page_cursor is a base64 encoded string

    NTJkZTRiMWMtNTJlMC00MmQyLWE0ZmUtOTVmMjMzMGRkNzVkfDljNDMwOTJiLTllMTItNDdlMC04N2FjLWExYmNjMzdlYjcyN3wyMDE5LTAyLTI3IDIyOjE2OjMwLjQxMTYzMiswMHwyMDE5LTAyLTI3IDIyOjIyOjM1LjcwNDY1OCswMHwyMA==
    

    Expected behavior I would expect it to not segfault

    Actual behavior postgres is segfaulting

    bug 
    opened by esatterwhite 29
  • Trouble finding extension on Windows

    Bug report on Windows 10 x64, PostgreSQL 11.6, timescaledb 1.5.1

    Relevant system information:

    • OS: [e.g. Ubuntu 16.04, Windows 10 x64, etc]
    • PostgreSQL version (output of postgres --version): [e.g. 10.5, 9.6.10]
    • TimescaleDB version (output of \dx in psql): [e.g. 1.0.0]
    • Installation method: [e.g., "using Docker", "apt install", "source"]

    Describe the bug A clear and concise description of what the bug is.

    To Reproduce Steps to reproduce the behavior:

    1. Go to '...'
    2. Click on '....'
    3. Scroll down to '....'
    4. See error

    Expected behavior A clear and concise description of what you expected to happen.

    Actual behavior A clear and concise description of what actually happened.

    Screenshots If applicable, add screenshots to help explain your problem.

    Additional context Add any other context about the problem here.

    build windows unsupported-pg 
    opened by yebidaxiong 27
  • could not access file "$libdir/timescaledb-1.4.0": No such file or directory

    Relevant system information:

    • OS: Ubuntu 18.04
    • PostgreSQL version: postgres (PostgreSQL) 11.5
    • TimescaleDB version: 1.4.1
    • Installation method: Docker

    Describe the bug PostgreSQL shows the error message could not access file "$libdir/timescaledb-1.4.0": No such file or directory on any psql command like SELECT *.

    I did run:

    docker exec -it timescaledb psql -U postgres -X
    
    # within the PostgreSQL instance
    ALTER EXTENSION timescaledb UPDATE;
    

    It says the extension is up to date...I don't understand what's going on.

    From inside docker image, here is the output of:

    bash-5.0# pg_config --pkglibdir
    /usr/local/lib/postgresql
    
    bash-5.0# ls /usr/local/lib/postgresql
    _int.so                   latin_and_mic.so          timescaledb-tsl-1.3.0.so
    adminpack.so              libpqwalreceiver.so       timescaledb-tsl-1.3.1.so
    amcheck.so                lo.so                     timescaledb-tsl-1.3.2.so
    ascii_and_mic.so          ltree.so                  timescaledb-tsl-1.4.0.so
    auth_delay.so             moddatetime.so            timescaledb-tsl-1.4.1.so
    auto_explain.so           pageinspect.so            timescaledb.so
    autoinc.so                passwordcheck.so          timetravel.so
    bloom.so                  pg_buffercache.so         tsm_system_rows.so
    btree_gin.so              pg_freespacemap.so        tsm_system_time.so
    btree_gist.so             pg_prewarm.so             unaccent.so
    citext.so                 pg_stat_statements.so     utf8_and_ascii.so
    cube.so                   pg_trgm.so                utf8_and_big5.so
    cyrillic_and_mic.so       pg_visibility.so          utf8_and_cyrillic.so
    dblink.so                 pgcrypto.so               utf8_and_euc2004.so
    dict_int.so               pgoutput.so               utf8_and_euc_cn.so
    dict_snowball.so          pgrowlocks.so             utf8_and_euc_jp.so
    dict_xsyn.so              pgstattuple.so            utf8_and_euc_kr.so
    earthdistance.so          pgxml.so                  utf8_and_euc_tw.so
    euc2004_sjis2004.so       pgxs                      utf8_and_gb18030.so
    euc_cn_and_mic.so         plpgsql.so                utf8_and_gbk.so
    euc_jp_and_sjis.so        postgres_fdw.so           utf8_and_iso8859.so
    euc_kr_and_mic.so         refint.so                 utf8_and_iso8859_1.so
    euc_tw_and_big5.so        seg.so                    utf8_and_johab.so
    file_fdw.so               sslinfo.so                utf8_and_sjis.so
    fuzzystrmatch.so          tablefunc.so              utf8_and_sjis2004.so
    hstore.so                 tcn.so                    utf8_and_uhc.so
    insert_username.so        test_decoding.so          utf8_and_win.so
    isn.so                    timescaledb-1.4.1.so      uuid-ossp.so
    latin2_and_win1250.so     timescaledb-tsl-1.2.2.so
    

    Thanks for your help!

    opened by waterdrop01 26
  • Performance issues when using 10,000s of chunks

    Hello timescale!

    As per our conf call, here are the problems we encounter regarding response time as the number of partitions increases. We are leveraging the opportunity of using Postgres + Timescale as an alternative to Oracle Database for managing big data in an energy application (time series, energy business measures).

    Methodology:

    • Create a new database with Timescale extension
    • Create a big table, insert data into it
    • Run queries
    • Add new empty partitions (or chunks)
    • Run the same queries

    Note: for Postgres it is easy to create empty partitions; with Timescale, it looks like we have to actually insert some data to create more chunks… so the new partitions are not exactly empty.

    One of the queries is:

    BEGIN;
    	insert into collfact (time, pointid, value, mesureid, version, insertiondate, iscurrent)
    	values (timestamp '2018-04-11 00:00:00', 10, trunc(random() * 99999),
    		1, 2, now(), '1');
    	UPDATE collfact set iscurrent = '0'
    	where time = timestamp '2018-04-11 00:00:00'
    		and pointid = 10 and mesureid = 1 and version = 1;
    COMMIT;
    

    Which should INSERT one row and UPDATE a single row…

    Except for the first test dataset, the three others crashed:

    ERROR:  out of memory
    DETAIL:  Failed on request of size 200.
    

    In fact this is the message from Windows; under Linux we get something slightly different, which seems to be due to memory management / protection under Linux (see https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT):

    server closed the connection unexpectedly
            This probably means the server terminated abnormally
            before or while processing the request.
    The connection to the server was lost. Attempting reset: Failed.
    Time: 543010.711 ms (09:03.011)
    !> COMMIT;
    You are currently not connected to a database.
    !>
    

    But the issue (statement failure) is still the same even if the system does not react the same way.

    Other unexpected observations: the same SQL SELECT query's response time increases while adding chunks:

    Chunks #      |    5840   |    7301   |    11319  |   14971   
    Time (sec.)   |      13   |      57   |       70  |      76   
    

    See attached files for the SQL statements we used.

    So our concern here is that we expected response-time stability; the dataset and number of chunks here are quite small (or even really small), whereas Timescale is supposed to address these kinds of requirements very well… So are we doing something the wrong way?

    Thanks in advance for your help, Regards, Sebastien

    PS1: tests are running on a Linux server, 8 GB, 4 CPU, RAID 5 disks; Postgres has been set up using pgTune: see attached file "PG_CONF.txt"

    PS 2: file "PGTS_INITDATA.TXT" to build the database, table structure and test data file "PGTS_SOMEQUERIES.TXT" contains the queries we run file "PGTS_MOREDATA.TXT" to add some more chunks PGTS_INITDATA.TXT PGTS_SOMEQUERIES.TXT PGTS_MOREDATA.TXT PG_CONF.txt

    question limitation 
    opened by sspieser 26
  • "Telemetry Reporter" job running & failing even when telemetry disabled

    • timescaledb.telemetry_level is off
    • timescaledb.max_background_workers is set to 8
    • I have 7 databases (including postgres)
    • TimescaleDB 1.6.1, PostgreSQL 11.7

    _timescaledb_config.bgw_job 1 "Telemetry Reporter" "telemetry_and_version_check_if_enabled" "24:00:00" "00:01:40" -1 "01:00:00"

    _timescaledb_internal.bgw_job_stat 1 "2020-03-26 17:28:01.392372+00" "2020-03-26 17:28:01.40367+00" "2020-03-27 17:28:01.40367+00" "2020-03-26 17:28:01.40367+00" true 21 "00:00:00.226514" 10 11 0 0 0

    I get a steady stream of this warning message (10 per second): WARNING: failed to launch job 1 "Telemetry Reporter": out of background workers

    Seems to me like the Telemetry Reporter should NOT run when telemetry is disabled! I've confirmed that it spits out empty JSON.

    bug 2.0-proposed bgw 
    opened by jflambert 25
  • [Bug]: decompress_backfill fails with staging table not a hypertable error

    What type of bug is this?

    Unexpected error

    What subsystems and features are affected?

    Distributed hypertable

    What happened?

    TL;DR

    decompress_backfill with a distributed hypertable as the destination_hypertable fails with an unexpected error requiring staging table to be a hypertable: ERROR: table "staging_table" is not a hypertable (Full error details below).

    Reproduction Steps

    Create a staging table and sample data

    CREATE TABLE IF NOT exists staging_table (
    	start_time timestamptz NOT NULL,
    	request_id text NULL,
    	account_uuid uuid NULL
    );
    
    INSERT INTO public.staging_table (start_time,request_id,account_uuid) VALUES
    	 ('2022-03-05 05:47:35.449779-05','0011c42c-bace-410e-9194-574f893024a9','b384793a-16c3-4a4e-a5c3-0da0a587a913'),
    	 ('2022-03-05 05:28:11.874011-05','001c55e8-f8ab-4b6b-8103-95bdbd1e653a','b384793a-16c3-4a4e-a5c3-0da0a587a913'),
    	 ('2022-03-05 05:23:26.300802-05','00009f14-24d9-41e8-a14b-4c26e8749d61','b0fbcdb1-422c-42fc-ba50-a6d4bbef0782'),
    	 ('2022-03-05 05:59:34.353057-05','001185c6-b331-4e5b-8d6f-bad2b028403d','b0fbcdb1-422c-42fc-ba50-a6d4bbef0782'),
    	 ('2022-03-05 05:04:05.589386-05','000db254-161c-4c90-b8e2-b8e26fc42ae4','b0fbcdb1-422c-42fc-ba50-a6d4bbef0782'),
    	 ('2022-03-05 05:52:20.956663-05','0018e5c4-d458-407c-9795-75a746aa842c','b0fbcdb1-422c-42fc-ba50-a6d4bbef0782'),
    	 ('2022-03-05 05:25:28.621477-05','00122310-8947-4cfd-844a-5d9245744a80','b0fbcdb1-422c-42fc-ba50-a6d4bbef0782'),
    	 ('2022-03-05 05:13:47.468799-05','001c4bdd-0cab-44f0-8ce8-7ee65c036fa0','b0fbcdb1-422c-42fc-ba50-a6d4bbef0782'),
    	 ('2022-03-05 05:41:35.080609-05','001a3f4c-067b-4061-b48b-eaa2b74b0935','b0fbcdb1-422c-42fc-ba50-a6d4bbef0782'),
    	 ('2022-03-05 05:28:02.181944-05','000fba7c-52e2-4670-9b2c-a2df277504d3','b0fbcdb1-422c-42fc-ba50-a6d4bbef0782');
    

    Create non-distributed hypertable and run decompress backfill (success case)

    CREATE TABLE dest_hypertable_non_dist (
    	start_time timestamptz NOT NULL,
    	request_id text NULL,
    	account_uuid uuid NULL
    );
    
    select create_hypertable('dest_hypertable_non_dist', 'start_time');
    CALL decompress_backfill(staging_table=>'staging_table', destination_hypertable=>'dest_hypertable_non_dist');
    
    • Runs successfully as expected with the following output:
      • 10 rows moved in range TIMESTAMPTZ '2022-03-05 05:04:05.589386-05' to TIMESTAMPTZ '2022-03-05 05:59:34.353058-05'

    Create distributed hypertable and run decompress backfill (FAILURE CASE)

    CREATE TABLE dest_hypertable_dist (
    	start_time timestamptz NOT NULL,
    	request_id text NULL,
    	account_uuid uuid NULL
    );
    
    select create_distributed_hypertable('dest_hypertable_dist', 'start_time');
    
    CALL decompress_backfill(staging_table=>'staging_table', destination_hypertable=>'dest_hypertable_dist');
    
    • decompress_backfill call fails with below error:
    SQL Error [TS001]: ERROR: table "staging_table" is not a hypertable
      Where: SQL statement "  
                WITH to_insert AS (DELETE 
                FROM staging_table --source table
                WHERE start_time >= NULL -- time column >= range start
                AND start_time < NULL -- time column < range end
                RETURNING * )
                INSERT INTO dest_hypertable_dist 
                SELECT * FROM to_insert
                ON CONFLICT DO NOTHING -- ON CONFLICT CLAUSE if it exists
                "
    PL/pgSQL function decompress_backfill(regclass,regclass,text,boolean,interval,text[],boolean) line 149 at EXECUTE
    

    TimescaleDB version affected

    2.7.0

    PostgreSQL version used

    psql (PostgreSQL) 14.4 (Ubuntu 14.4-1.pgdg20.04+1)

    What operating system did you use?

    Ubuntu 20.04.4 LTS

    What installation method did you use?

    Deb/Ubuntu

    What platform did you run on?

    Other

    Relevant log output and stack trace

    SQL Error [TS001]: ERROR: table "staging_table" is not a hypertable
      Where: SQL statement "  
                WITH to_insert AS (DELETE 
                FROM staging_table --source table
                WHERE start_time >= NULL -- time column >= range start
                AND start_time < NULL -- time column < range end
                RETURNING * )
                INSERT INTO dest_hypertable_dist 
                SELECT * FROM to_insert
                ON CONFLICT DO NOTHING -- ON CONFLICT CLAUSE if it exists
                "
    PL/pgSQL function decompress_backfill(regclass,regclass,text,boolean,interval,text[],boolean) line 149 at EXECUTE
    

    How can we reproduce the bug?

    Repro steps included in the description
    
    bug 
    opened by murali89 0
  • Block drop chunk if chunk is in frozen state

    A chunk in frozen state cannot be dropped. drop_chunks will skip over frozen chunks without erroring. The internal API drop_chunk will error if you attempt to drop a chunk without unfreezing it first.

    This PR also adds a new internal API to unfreeze a chunk.

    opened by gayyappan 1
  • Better superuser handling for move_chunk

    The current code assumed the bootstrap superuser for the actual move-chunk operation. However, we can make it more flexible by using the logged-in credentials if those happen to have superuser privileges.

    opened by nikkhils 1
  • [Enhancement]: include additional debug information for SQLCA when decompressing chunks

    What type of enhancement is this?

    User experience

    What subsystems and features will be improved?

    Compression

    What does the enhancement do?

    User is testing chunk decompression using python3 and the psycopg2 library.

    Please include more useful/actionable information for troubleshooting compression issues:

    Implementation challenges

    #1 It seems the SQLCA is only partially filled out (many DEBUG fields are "None"). Here are the results of the error.diag list:

    [06/09/22 09:21:30] App msg: decompressing
    [06/09/22 09:21:30] SQL Error [55000] Severity: ERROR
    DEBUG: column_name = None
    DEBUG: constraint_name = None
    DEBUG: context = None
    DEBUG: datatype_name = None
    DEBUG: internal_position = None
    DEBUG: internal_query = None
    DEBUG: message_detail = None
    DEBUG: message_hint = None
    DEBUG: message_primary = chunk "_hyper_449_2940_chunk" is not compressed
    DEBUG: schema_name = None
    DEBUG: severity = ERROR
    DEBUG: severity_nonlocalized = ERROR
    DEBUG: source_file = compress_utils.c
    DEBUG: source_function = decompress_chunk_impl
    DEBUG: source_line = 315
    DEBUG: sqlstate = 55000
    DEBUG: statement_position = None
    DEBUG: table_name = None

    enhancement 
    opened by spaceshipoperator 0
  • [Bug]: SkipScan not used with LIKE 'abc%def'

    What type of bug is this?

    Performance issue

    What subsystems and features are affected?

    SkipScan

    What happened?

    Timescaledb seems to have some trouble deciding to use SkipScan when using LIKE with characters after %.

                                  Table "public.ltest"
           Column        |           Type           | Collation | Nullable | Default
    ---------------------+--------------------------+-----------+----------+---------
     time                | timestamp with time zone |           | not null |
     entity_id           | text                     |           | not null |
     device_class        | text                     |           |          |
     state               | text                     |           | not null |
     state_class         | text                     |           |          |
     attributes          | jsonb                    |           | not null |
     location            | geometry(Point,4326)     |           |          |
     friendly_name       | text                     |           |          |
     unit_of_measurement | text                     |           |          |
    Indexes:
        "ltest_entity_id_time_idx" btree (entity_id, "time" DESC)
    
    select distinct on (entity_id) * from ltest
    where time > now() - interval '7 day' and entity_id like PATTERN
    order by entity_id, time desc;
    

    Examples of entity_id:

     sensor.atmos_salon_pressure
     sensor.atmos_salon_temperature
     sensor.atmos_utilitaire_battery
     sensor.atmos_utilitaire_humidity
     sensor.atmos_utilitaire_pressure
     sensor.chauffe_eau_tx
     sensor.chauffe_eau_uptime_2
     sensor.cloud_key_rx
     sensor.cloud_key_tx
     sensor.cloud_key_uptime
     sensor.desktop_de_mikael_eth_rx
    

    When % is the last part of PATTERN, I get SkipScan as expected. It also happens for sensor%_ for some reason.

     Unique  (cost=209.57..99599.11 rows=209 width=175) (actual time=0.233..7.100 rows=118 loops=1)
       ->  Custom Scan (ConstraintAwareAppend)  (cost=209.57..88980.05 rows=4247627 width=175) (actual time=0.231..6.912 rows=235 loops=1)
             Hypertable: ltest
             Chunks left after exclusion: 2
             ->  Merge Append  (cost=209.57..88980.05 rows=4247627 width=175) (actual time=0.230..6.843 rows=235 loops=1)
                   Sort Key: _hyper_4_61_chunk.entity_id, _hyper_4_61_chunk."time" DESC
                   ->  Custom Scan (SkipScan) on _hyper_4_61_chunk  (cost=0.43..110.24 rows=209 width=180) (actual time=0.130..3.630 rows=118 loops=1)
                         ->  Index Scan using _hyper_4_61_chunk_ltest_entity_id_time_idx on _hyper_4_61_chunk  (cost=0.43..57995.66 rows=594921 width=180) (actual time=0.128..3.527 rows=118 loops=1)
                               Index Cond: ((entity_id >= 'sensor'::text) AND (entity_id < 'sensos'::text) AND ("time" > (now() - '7 days'::interval)))
                               Filter: (entity_id ~~ 'sensor%'::text)
                   ->  Custom Scan (SkipScan) on _hyper_4_81_chunk  (cost=0.42..99.63 rows=209 width=133) (actual time=0.097..3.082 rows=117 loops=1)
                         ->  Index Scan using _hyper_4_81_chunk_ltest_entity_id_time_idx on _hyper_4_81_chunk  (cost=0.42..3949.11 rows=72878 width=133) (actual time=0.096..2.983 rows=117 loops=1)
                               Index Cond: ((entity_id >= 'sensor'::text) AND (entity_id < 'sensos'::text) AND ("time" > (now() - '7 days'::interval)))
                               Filter: (entity_id ~~ 'sensor%'::text)
     Planning Time: 22.176 ms
     Execution Time: 8.324 ms
    

    However, depending on the end of PATTERN, I get other plans:

    • 'sensor%tem'
     Unique  (cost=26140.41..26845.77 rows=9 width=226) (actual time=341.684..356.574 rows=0 loops=1)
       ->  Gather Merge  (cost=26140.41..26830.76 rows=6003 width=226) (actual time=341.682..356.571 rows=0 loops=1)
             Workers Planned: 1
             Workers Launched: 1
             ->  Sort  (cost=25140.40..25155.41 rows=6003 width=225) (actual time=251.228..251.230 rows=0 loops=2)
                   Sort Key: ltest.entity_id, ltest."time" DESC
                   Sort Method: quicksort  Memory: 25kB
                   Worker 0:  Sort Method: quicksort  Memory: 25kB
                   ->  Parallel Custom Scan (ChunkAppend) on ltest  (cost=34.09..24378.27 rows=6003 width=225) (actual time=250.938..250.939 rows=0 loops=2)
                         Chunks excluded during startup: 7
                         ->  Parallel Seq Scan on _hyper_4_61_chunk  (cost=0.00..22111.56 rows=1 width=180) (actual time=228.952..228.953 rows=0 loops=2)
                               Filter: ((entity_id ~~ 'sensor%tem'::text) AND ("time" > (now() - '7 days'::interval)))
                               Rows Removed by Filter: 346612
                         ->  Parallel Seq Scan on _hyper_4_81_chunk  (cost=0.00..2081.14 rows=1 width=133) (actual time=43.945..43.945 rows=0 loops=1)
                               Filter: ((entity_id ~~ 'sensor%tem'::text) AND ("time" > (now() - '7 days'::interval)))
                               Rows Removed by Filter: 76928
     Planning Time: 23.440 ms
     Execution Time: 357.331 ms
    
    • sensor%temperature
     Unique  (cost=62357.72..63226.71 rows=209 width=175) (actual time=930.933..952.883 rows=14 loops=1)
       ->  Sort  (cost=62357.72..62792.22 rows=173799 width=175) (actual time=930.929..946.553 rows=18367 loops=1)
             Sort Key: ltest.entity_id, ltest."time" DESC
             Sort Method: external merge  Disk: 4144kB
             ->  Custom Scan (ChunkAppend) on ltest  (cost=0.42..29830.67 rows=173799 width=175) (actual time=0.243..802.951 rows=18367 loops=1)
                   Chunks excluded during startup: 7
                   ->  Index Scan using _hyper_4_61_chunk_ltest_time_idx on _hyper_4_61_chunk  (cost=0.43..26905.21 rows=16672 width=180) (actual time=0.241..736.887 rows=16270 loops=1)
                         Index Cond: ("time" > (now() - '7 days'::interval))
                         Filter: (entity_id ~~ 'sensor%temperature'::text)
                         Rows Removed by Filter: 600380
                   ->  Seq Scan on _hyper_4_81_chunk  (cost=0.00..2714.74 rows=2126 width=133) (actual time=0.097..59.923 rows=2097 loops=1)
                         Filter: ((entity_id ~~ 'sensor%temperature'::text) AND ("time" > (now() - '7 days'::interval)))
                         Rows Removed by Filter: 74831
     Planning Time: 21.909 ms
     Execution Time: 957.779 ms
    

    In the last case, the number of expected rows is accurate, but the index is not used even though it's a subset of sensor%. If I force the filter to happen after using a materialized CTE, I get the expected plan:

    homeassistant=# explain analyze with a as materialized ( select distinct on (entity_id) * from ltest where time > now() - interval '7 day' order by entity_id, time desc) select * from a  where entity_id like 'sensor%temperature';
                                                                                                     QUERY PLAN
    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     CTE Scan on a  (cost=114527.07..114531.77 rows=1 width=264) (actual time=32.072..39.422 rows=14 loops=1)
       Filter: (entity_id ~~ 'sensor%temperature'::text)
       Rows Removed by Filter: 199
       CTE a
         ->  Unique  (cost=197.35..114527.07 rows=209 width=175) (actual time=29.256..39.028 rows=213 loops=1)
               ->  Custom Scan (ConstraintAwareAppend)  (cost=197.35..102308.00 rows=4887627 width=175) (actual time=29.253..38.736 rows=407 loops=1)
                     Hypertable: ltest
                     Chunks left after exclusion: 2
                     ->  Merge Append  (cost=197.35..102308.00 rows=4887627 width=175) (actual time=29.251..38.626 rows=407 loops=1)
                           Sort Key: _hyper_4_61_chunk.entity_id, _hyper_4_61_chunk."time" DESC
                           ->  Custom Scan (SkipScan) on _hyper_4_61_chunk  (cost=0.43..108.41 rows=209 width=180) (actual time=29.140..34.011 rows=213 loops=1)
                                 ->  Index Scan using _hyper_4_61_chunk_ltest_entity_id_time_idx on _hyper_4_61_chunk  (cost=0.43..54693.98 rows=616535 width=180) (actual time=29.133..33.803 rows=213 loops=1)
                                       Index Cond: ("time" > (now() - '7 days'::interval))
                           ->  Custom Scan (SkipScan) on _hyper_4_81_chunk  (cost=0.42..97.87 rows=209 width=133) (actual time=0.105..4.384 rows=194 loops=1)
                                 ->  Index Scan using _hyper_4_81_chunk_ltest_entity_id_time_idx on _hyper_4_81_chunk  (cost=0.42..3528.89 rows=77068 width=133) (actual time=0.102..4.187 rows=194 loops=1)
                                       Index Cond: ("time" > (now() - '7 days'::interval))
     Planning Time: 17.991 ms
     JIT:
       Functions: 9
       Options: Inlining false, Optimization false, Expressions true, Deforming true
       Timing: Generation 4.205 ms, Inlining 0.000 ms, Optimization 1.082 ms, Emission 27.975 ms, Total 33.263 ms
     Execution Time: 45.225 ms
    

    TimescaleDB version affected

    2.6.0

    PostgreSQL version used

    14

    What operating system did you use?

    Home assistant OS on rpi 4

    What installation method did you use?

    Other

    What platform did you run on?

    On prem/Self-hosted

    Relevant log output and stack trace

    No response

    How can we reproduce the bug?

    I have 700k rows in the concerned chunk. If needed, I'll try to create a script for reproduction.
    
    bug skip-scan 
    opened by Yamakaky 1
  • Error when joining chunk

    What type of bug is this?

    Unexpected error

    What subsystems and features are affected?

    Query executor

    What happened?

    ERROR: attribute 1 of type _timescaledb_internal.compress_hyper_11_145_chunk has wrong type

    TimescaleDB version affected

    2.7.0

    PostgreSQL version used

    13.6

    What operating system did you use?

    Ubuntu 18.04

    What installation method did you use?

    Docker, Other

    What platform did you run on?

    Microsoft Azure Cloud

    Relevant log output and stack trace

    This query works - 
    SELECT c.*  FROM  _timescaledb_internal._hyper_1_98_chunk c 
    where (ts,uid,key_id ) in (select ts,uid,key_id from 
    score_archive_lookup sal 
    ) limit 1 ;
    
    Moving the limit to the inner query makes it crash
    
    SELECT c.*  FROM  _timescaledb_internal._hyper_1_98_chunk c 
    where (ts,uid,key_id ) in (select ts,uid,key_id from 
    score_archive_lookup sal limit 1
    )  ;
    
    ERROR:  attribute 1 of type _timescaledb_internal.compress_hyper_11_145_chunk has wrong type
    DETAIL:  Table has type _timescaledb_internal.compressed_data, but query expects timestamp with time zone.
    
    
    The function with limit in the inner query did work for a lot of data. I suspect it is in cases where there are duplicate records for ts,uid,key_id that the query crashes. Additional info - the chunk is compressed.
    

    How can we reproduce the bug?

    No response

    bug need-more-info compression 
    opened by jayadevanm 1
Releases (2.7.0)
  • 2.7.0 (May 24, 2022)

    This release adds major new features since the 2.6.1 release. We deem it moderate priority for upgrading.

    This release includes these noteworthy features:

    • Optimize continuous aggregate query performance and storage
    • The following query clauses and functions can now be used in a continuous aggregate: FILTER, DISTINCT, ORDER BY as well as Ordered-Set Aggregate and Hypothetical-Set Aggregate
    • Optimize now() query planning time
    • Improve COPY insert performance
    • Improve performance of UPDATE/DELETE on PG14 by excluding chunks

    This release also includes several bug fixes.

    If you are upgrading from a previous version and were using compression with a non-default collation on a segmentby-column you should recompress those hypertables.
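
    One way to recompress is sketched below (a hedged example assuming compression is enabled on a hypertable named conditions; each compressed chunk is decompressed and then compressed again):

    -- Illustrative sketch; 'conditions' is a stand-in hypertable name
    SELECT decompress_chunk(c, if_compressed => true) FROM show_chunks('conditions') c;
    SELECT compress_chunk(c, if_not_compressed => true) FROM show_chunks('conditions') c;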

    Features

    • #4045 Custom origin's support in CAGGs
    • #4120 Add logging for retention policy
    • #4158 Allow ANALYZE command on a data node directly
    • #4169 Add support for chunk exclusion on DELETE to PG14
    • #4209 Add support for chunk exclusion on UPDATE to PG14
    • #4269 Continuous Aggregates finals form
    • #4301 Add support for bulk inserts in COPY operator
    • #4311 Support non-superuser move chunk operations
    • #4330 Add GUC "bgw_launcher_poll_time"
    • #4340 Enable now() usage in plan-time chunk exclusion

    Bugfixes

    • #3899 Fix segfault in Continuous Aggregates
    • #4225 Fix TRUNCATE error as non-owner on hypertable
    • #4236 Fix potential wrong order of results for compressed hypertable with a non-default collation
    • #4249 Fix option "timescaledb.create_group_indexes"
    • #4251 Fix INSERT into compressed chunks with dropped columns
    • #4255 Fix option "timescaledb.create_group_indexes"
    • #4259 Fix logic bug in extension update script
    • #4269 Fix bad Continuous Aggregate view definition reported in #4233
    • #4289 Support moving compressed chunks between data nodes
    • #4300 Fix refresh window cap for cagg refresh policy
    • #4315 Fix memory leak in scheduler
    • #4323 Remove printouts from signal handlers
    • #4342 Fix move chunk cleanup logic
    • #4349 Fix crashes in functions using AlterTableInternal
    • #4358 Fix crash and other issues in telemetry reporter

    Thanks

    • @abrownsword for reporting a bug in the telemetry reporter and testing the fix
    • @jsoref for fixing various misspellings in code, comments and documentation
    • @yalon for reporting an error with ALTER TABLE RENAME on distributed hypertables
    • @zhuizhuhaomeng for reporting and fixing a memory leak in our scheduler
    Source code(tar.gz)
    Source code(zip)
  • 2.6.1(Apr 12, 2022)

    This release is a patch release. We recommend that you upgrade at the next available opportunity.

    Bugfixes

    • #4121 Fix RENAME TO/SET SCHEMA on distributed hypertable
    • #4122 Fix segfault on INSERT into distributed hypertable
    • #4142 Ignore invalid relid when deleting hypertable
    • #4159 Fix ADD COLUMN IF NOT EXISTS error on compressed hypertable
    • #4161 Fix memory handling during scans
    • #4176 Fix remote EXPLAIN with parameterized queries
    • #4181 Fix spelling errors and omissions
    • #4186 Fix owner change for distributed hypertable
    • #4192 Abort sessions after extension reload
    • #4193 Fix relcache callback handling causing crashes
    • #4199 Remove signal-unsafe calls from signal handlers
    • #4219 Do not modify aggregation state in finalize

    Thanks

    • @abrownsword for reporting a crash in the telemetry reporter and testing the fix (fixed with #4161)
    • @daydayup863 for reporting an issue with remote explain

    Source code(tar.gz)
    Source code(zip)
  • 2.6.0(Feb 23, 2022)

    This release is a medium priority for upgrade. We recommend that you upgrade at the next available opportunity.

    This release adds major new features since the 2.5.2 release, including:

    • Continuous aggregates with compression
    • time_bucket_ng support for N months and timezones on continuous aggregates
    You can read more about this release on our blog post. This release also contains bug fixes since the 2.5.0 release.

    The experimental features in the 2.6 release are:

    • The time_bucket_ng function, a newer version of time_bucket. This function supports years, months, days, hours, minutes, seconds, and timezones (see the sketch after this list).
    • time_bucket_ng support for N months and timezones on continuous aggregates.
    • APIs for chunk manipulation across data nodes in a distributed hypertable setup. This includes the ability to add a data node and move chunks to the new data node for cluster rebalancing.
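
    As an illustration, a minimal sketch of the experimental time_bucket_ng with a timezone argument (the conditions table is hypothetical; the function lives in the timescaledb_experimental schema):

    -- Monthly buckets computed in a named timezone
    SELECT timescaledb_experimental.time_bucket_ng('1 month', time, timezone => 'Europe/Berlin') AS bucket,
           avg(temperature) AS avg_temp
      FROM conditions
     GROUP BY bucket;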

    Features

    • #3768 Allow ALTER TABLE ADD COLUMN with DEFAULT on compressed hypertable
    • #3769 Allow ALTER TABLE DROP COLUMN on compressed hypertable
    • #3873 Enable compression on continuous aggregates
    • #3943 Optimize first/last
    • #3945 Add support for ALTER SCHEMA on multi-node
    • #3949 Add support for DROP SCHEMA on multi-node
    • #3977 Timezones support in CAGGs

    Bugfixes

    • #3808 Properly handle max_retries option
    • #3863 Fix remote transaction heal logic
    • #3869 Fix ALTER SET/DROP NULL constraint on distributed hypertable
    • #3944 Fix segfault in add_compression_policy
    • #3961 Fix crash in EXPLAIN VERBOSE on distributed hypertable
    • #4015 Eliminate float rounding instabilities in interpolate
    • #4019 Update ts_extension_oid in transitioning state
    • #4073 Fix buffer overflow in partition scheme

    Improvements

    Query planning performance is improved for hypertables with a large number of chunks.

    Thanks

    • @fvannee for reporting a first/last memory leak
    • @mmouterde for reporting an issue with floats and interpolate
    Source code(tar.gz)
    Source code(zip)
  • 2.5.2(Feb 10, 2022)

    This release contains bug fixes since the 2.5.1 release. This release is high priority for upgrade. We strongly recommend that you upgrade as soon as possible.

    Bugfixes

    • https://github.com/timescale/timescaledb/pull/3900 Improve custom scan node registration
    • https://github.com/timescale/timescaledb/pull/3911 Fix role type deparsing for GRANT command
    • https://github.com/timescale/timescaledb/pull/3918 Fix DataNodeScan plans with one-time filter
    • https://github.com/timescale/timescaledb/pull/3921 Fix segfault on insert into internal compressed table
    • https://github.com/timescale/timescaledb/pull/3938 Fix subtract_integer_from_now on 32-bit platforms and improve error handling
    • https://github.com/timescale/timescaledb/pull/3939 Fix projection handling in time_bucket_gapfill
    • https://github.com/timescale/timescaledb/pull/3948 Avoid double PGclear() in data fetchers
    • https://github.com/timescale/timescaledb/pull/3979 Fix deparsing of index predicates
    • https://github.com/timescale/timescaledb/pull/4015 Eliminate float rounding instabilities in interpolate
    • https://github.com/timescale/timescaledb/pull/4020 Fix ALTER TABLE EventTrigger initialization
    • https://github.com/timescale/timescaledb/pull/4024 Fix premature cache release call
    • https://github.com/timescale/timescaledb/pull/4037 Fix status for dropped chunks that have catalog entries
    • https://github.com/timescale/timescaledb/pull/4069 Fix riinfo NULL handling in ANY construct
    • https://github.com/timescale/timescaledb/pull/4071 Fix extension installation privilege escalation

    Thanks

    • @carlocperez for reporting crash with NULL handling in ANY construct
    • @erikhh for reporting an issue with time_bucket_gapfill
    • @fvannee for reporting a first/last memory leak
    • @kancsuki for reporting drop column and partial index creation not working
    • @mmouterde for reporting an issue with floats and interpolate
    • Pedro Gallegos for reporting a possible privilege escalation during extension installation
    Source code(tar.gz)
    Source code(zip)
  • 2.5.1(Dec 3, 2021)

    This release contains bug fixes since the 2.5.0 release. We deem it medium priority to upgrade.

    Bugfixes

    • #3706 Test enabling dist compression within a procedure
    • #3734 Rework distributed DDL processing logic
    • #3737 Fix flaky pg_dump
    • #3739 Fix compression policy on tables using INTEGER
    • #3766 Fix segfault in ts_hist_sfunc
    • #3779 Support GRANT/REVOKE on distributed database
    • #3789 Fix time_bucket comparison transformation
    • #3797 Fix DISTINCT ON queries for distributed hypertables
    • #3799 Fix error printout on correct security label
    • #3801 Fail size utility functions when data nodes do not respond
    • #3809 Fix NULL pointer evaluation in fill_result_error()
    • #3811 Fix INSERT..SELECT involving dist hypertables
    • #3819 Fix reading garbage value from TSConnectionError
    • #3824 Remove pointers from CAGG lists for 64-bit archs
    • #3846 Eliminate deadlock in recompress chunk policy
    • #3881 Fix SkipScan crash due to pruned unique path
    • #3884 Fix create_distributed_restore_point memory issue

    Thanks

    • @cbisnett for reporting and fixing a typo in an error message
    • @CaptainCuddleCube for reporting bug on compression policy procedure on tables using INTEGER on time dimension
    • @phemmer for reporting bugs on multi-node
    Source code(tar.gz)
    Source code(zip)
  • 2.5.0(Oct 28, 2021)

    This release adds major new features since the 2.4.2 release. We deem it moderate priority for upgrading.

    This release includes these noteworthy features:

    • Continuous Aggregates for Distributed Hypertables
    • Support for PostgreSQL 14
    • Experimental: Support for timezones in time_bucket_ng(), including the origin argument

    This release also includes several bug fixes.

    Features

    • #3034 Add support for PostgreSQL 14
    • #3435 Add continuous aggregates for distributed hypertables
    • #3505 Add support for timezones in time_bucket_ng()

    Bugfixes

    • #3580 Fix memory context bug executing TRUNCATE
    • #3592 Allow alter column type on distributed hypertable
    • #3598 Improve evaluation of stable functions such as now() on access node
    • #3618 Fix execution of refresh_caggs from user actions
    • #3625 Add shared dependencies when creating chunk
    • #3626 Fix memory context bug executing TRUNCATE
    • #3627 Schema qualify UDTs in multi-node
    • #3638 Allow owner change of a data node
    • #3654 Fix index attnum mapping in reorder_chunk
    • #3661 Fix SkipScan path generation with constant DISTINCT column
    • #3667 Fix compress_policy for multi txn handling
    • #3673 Fix distributed hypertable DROP within a procedure
    • #3701 Allow anyone to use size utilities on distributed hypertables
    • #3708 Fix crash in get_aggsplit
    • #3709 Fix ordered append pathkey check
    • #3712 Fix GRANT/REVOKE ALL IN SCHEMA handling
    • #3717 Support transparent decompression on individual chunks
    • #3724 Fix inserts into compressed chunks on hypertables with caggs
    • #3727 Fix DirectFunctionCall crash in distributed_exec
    • #3728 Fix SkipScan with varchar column
    • #3733 Fix ANALYZE crash with custom statistics for custom types
    • #3747 Always reset expr context in DecompressChunk

    Thanks

    • @binakot and @sebvett for reporting an issue with DISTINCT queries
    • @hardikm10, @DavidPavlicek and @pafiti for reporting bugs on TRUNCATE
    • @mjf for reporting an issue with ordered append and JOINs
    • @phemmer for reporting the issues on multinode with aggregate queries and evaluation of now()
    • @abolognino for reporting an issue with INSERTs into compressed hypertables that have cagg
    • @tanglebones for reporting the ANALYZE crash with custom types on multinode
    • @amadeubarbosa and @felipenogueirajack for reporting crash using JSONB column in compressed chunks
    Source code(tar.gz)
    Source code(zip)
  • 2.4.2(Sep 21, 2021)

    This release contains bug fixes since the 2.4.1 release. We deem it high priority to upgrade.

    Bugfixes

    • #3437 Rename on all continuous aggregate objects
    • #3469 Use signal-safe functions in signal handler
    • #3520 Modify compression job processing logic
    • #3527 Fix time_bucket_ng behaviour with origin argument
    • #3532 Fix bootstrap with regresschecks disabled
    • #3574 Fix failure on job execution by background worker
    • #3590 Call cleanup functions on backend exit

    Thanks

    • @jankatins for reporting a crash with background workers
    • @LutzWeischerFujitsu for reporting an issue with bootstrap

    Source code(tar.gz)
    Source code(zip)
  • 2.4.1(Aug 24, 2021)

    This release contains bug fixes since the 2.4.0 release. We deem it high priority to upgrade.

    The release fixes continuous aggregate refresh for postgres 12.8 and 13.4, a crash with ALTER TABLE commands and a crash with continuous aggregates with HAVING clause.

    Bugfixes

    • #3430 Fix havingqual processing for continuous aggregates
    • #3468 Disable tests by default if tools are not found
    • #3462 Fix crash while tracking alter table commands
    • #3489 Fix continuous agg bgw job failure for PG 12.8 and 13.4
    • #3494 Improve error message when adding data nodes

    Thanks

    • @brianbenns for reporting a segfault with continuous aggregates
    • @usego for reporting an issue with continuous aggregate refresh on PG 13.4
    Source code(tar.gz)
    Source code(zip)
  • 2.4.0(Aug 3, 2021)

    This release adds new experimental features since the 2.3.1 release.

    The experimental features in this release are:

    • APIs for chunk manipulation across data nodes in a distributed hypertable setup. This includes the ability to add a data node and move chunks to the new data node for cluster rebalancing (see the sketch after this list).
    • The time_bucket_ng function, a newer version of time_bucket. This function supports years, months, days, hours, minutes, and seconds.
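
    A minimal sketch of the experimental chunk-move API, with hypothetical chunk and data node names:

    -- Rebalance by moving one chunk from one data node to another
    CALL timescaledb_experimental.move_chunk(
        chunk => '_timescaledb_internal._dist_hyper_1_1_chunk',
        source_node => 'data_node_1',
        destination_node => 'data_node_2');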

    We’re committed to developing these experiments, giving the community a chance to provide early feedback and influence the direction of TimescaleDB’s development. We’ll travel faster with your input!

    Please create your feedback as a GitHub issue (using the experimental-schema label), describe what you found, and tell us the steps or share a code snippet to recreate it.

    This release also includes several bug fixes.

    PostgreSQL 11 deprecation announcement

    Timescale is working hard on our next exciting features. To make that possible, we require functionality that is available in Postgres 12 and above. Postgres 11 is not supported with TimescaleDB 2.4.

    Experimental Features

    • #3293 Add timescaledb_experimental schema
    • #3302 Add block_new_chunks and allow_new_chunks API to experimental schema. Add chunk based refresh_continuous_aggregate.
    • #3211 Introduce experimental time_bucket_ng function
    • #3366 Allow use of experimental time_bucket_ng function in continuous aggregates
    • #3408 Support for seconds, minutes and hours in time_bucket_ng
    • #3446 Implement cleanup for chunk copy/move.

    Bugfixes

    • #3401 Fix segfault for RelOptInfo without fdw_private
    • #3411 Verify compressed chunk validity for compressed path
    • #3416 Fix targetlist names for continuous aggregate views
    • #3434 Remove extension check from relcache invalidation callback
    • #3440 Fix remote_tx_heal_data_node to work with only current database

    Thanks

    • @fvannee for reporting an issue with hypertable expansion in functions
    • @amalek215 for reporting an issue with cache invalidation during pg_class vacuum full
    • @hardikm10 for reporting an issue with inserting into compressed chunks
    • @dberardo-com and @iancmcc for reporting an issue with extension updates after renaming columns of continuous aggregates.
    Source code(tar.gz)
    Source code(zip)
  • 2.3.1(Jul 7, 2021)

    This maintenance release contains bugfixes since the 2.3.0 release. We deem it moderate priority for upgrading. The release introduces the possibility of generating downgrade scripts, improves trigger handling for distributed hypertables, adds more randomness to chunk assignment to avoid thundering-herd issues, and fixes some issues in update handling as well as some other bugs.

    Bugfixes

    • #3279 Add some more randomness to chunk assignment
    • #3288 Fix failed update with parallel workers
    • #3300 Improve trigger handling on distributed hypertables
    • #3304 Remove paths that reference parent relids for compressed chunks
    • #3305 Fix pull_varnos miscomputation of relids set
    • #3310 Generate downgrade script
    • #3314 Fix heap buffer overflow in hypertable expansion
    • #3317 Fix heap buffer overflow in remote connection cache.
    • #3327 Make aggregates in caggs fully qualified
    • #3336 Fix pg_init_privs objsubid handling
    • #3345 Fix SkipScan distinct column identification
    • #3355 Fix heap buffer overflow when renaming compressed hypertable columns.
    • #3367 Improve DecompressChunk qual pushdown
    • #3377 Fix bad use of repalloc

    Thanks

    • @db-adrian for reporting an issue when accessing cagg view through postgres_fdw
    • @fncaldas and @pgwhalen for reporting an issue accessing caggs when public is not in search_path
    • @fvannee, @mglonnro and @ebreijo for reporting an issue with the upgrade script
    • @fvannee for reporting a performance regression with SkipScan

    The music for this release was Izzy Stradlin's On Down the Road.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.3.1.tar.lzma(1.19 MB)
  • 2.3.0(May 26, 2021)

    This release adds major new features since the 2.2.1 release. We deem it moderate priority for upgrading.

    This release adds support for inserting data into compressed chunks and improves performance when inserting data into distributed hypertables. Distributed hypertables now also support triggers and compression policies.

    The bug fixes in this release address issues related to the handling of privileges on compressed hypertables, locking, and triggers with transition tables.

    Features

    • #3116 Add distributed hypertable compression policies
    • #3162 Use COPY when executing distributed INSERTs
    • #3199 Add GENERATED column support on distributed hypertables
    • #3210 Add trigger support on distributed hypertables
    • #3230 Support for inserts into compressed chunks

    Bugfixes

    • #3213 Propagate grants to compressed hypertables
    • #3229 Use correct lock mode when updating chunk
    • #3243 Fix assertion failure in decompress_chunk_plan_create
    • #3250 Fix constraint triggers on hypertables
    • #3251 Fix segmentation fault due to incorrect call to chunk_scan_internal
    • #3252 Fix blocking triggers with transition tables

    Thanks

    • @yyjdelete for reporting a crash with decompress_chunk and identifying the bug in the code
    • @fabriziomello for documenting the prerequisites when compiling against PostgreSQL 13

    The music for this release was Queen's The Works.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.3.0.tar.lzma(1.18 MB)
  • 2.2.1(May 6, 2021)

    This maintenance release contains bugfixes since the 2.2.0 release. We deem it high priority for upgrading.

    This release extends Skip Scan to multinode by enabling the pushdown of DISTINCT to data nodes. It also fixes a number of bugs in the implementation of Skip Scan, in distributed hypertables, in creation of indexes, in compression, and in policies.

    Features

    • #3113 Pushdown "SELECT DISTINCT" in multi-node to allow use of Skip Scan

    Bugfixes

    • #3101 Use commit date in get_git_commit()
    • #3102 Fix REINDEX TABLE for distributed hypertables
    • #3104 Fix use after free in add_reorder_policy
    • #3106 Fix use after free in chunk_api_get_chunk_stats
    • #3109 Copy recreated object permissions on update
    • #3111 Fix CMAKE_BUILD_TYPE check
    • #3112 Use %u to format Oid instead of %d
    • #3118 Fix use after free in cache
    • #3123 Fix crash while using REINDEX TABLE CONCURRENTLY
    • #3135 Fix SkipScan path generation in DISTINCT queries with expressions
    • #3146 Fix SkipScan for IndexPaths without pathkeys
    • #3147 Skip ChunkAppend if AppendPath has no children
    • #3148 Make SELECT DISTINCT handle non-var targetlists
    • #3151 Fix fdw_relinfo_get assertion failure on DELETE
    • #3155 Inherit CFLAGS from PostgreSQL
    • #3169 Fix incorrect type cast in compression policy
    • #3183 Fix segfault in calculate_chunk_interval
    • #3185 Fix wrong datatype for integer based retention policy

    Thanks

    • @Dead2, @dv8472 and @einsibjarni for reporting an issue with multinode queries and views
    • @aelg for reporting an issue with policies on integer-based hypertables
    • @hperez75 for reporting an issue with Skip Scan
    • @nathanloisel for reporting an issue with compression on hypertables with integer-based timestamps
    • @xin-hedera for fixing an issue with compression on hypertables with integer-based timestamps

    The music for this release was Justin Timberlake's The 20/20 Experience – 2 of 2.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.2.1.tar.lzma(1.16 MB)
  • 2.2.0(Apr 14, 2021)

    This release adds major new features since the 2.1.1 release. We deem it moderate priority for upgrading.

    This release adds the Skip Scan optimization, which significantly improves the performance of queries with DISTINCT ON. This optimization is not yet available for queries on distributed hypertables.
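
    For illustration, a minimal sketch of the kind of query that benefits, assuming a hypothetical conditions hypertable with an index on (location, time DESC):

    -- Latest reading per location; Skip Scan avoids scanning every row
    SELECT DISTINCT ON (location) *
      FROM conditions
     ORDER BY location, time DESC;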

    This release also adds a function to create a distributed restore point, which allows performing a consistent restore of a multi-node cluster from a backup.

    The bug fixes in this release address issues with size and stats functions, high memory usage in distributed inserts, slow distributed ORDER BY queries, indexes involving INCLUDE, and single chunk query planning.

    PostgreSQL 11 deprecation announcement

    Timescale is working hard on our next exciting features. To make that possible, we require functionality that is unfortunately absent in PostgreSQL 11. For this reason, we will continue supporting PostgreSQL 11 only until mid-June 2021. Closer to that time, we will announce the specific version of TimescaleDB in which PostgreSQL 11 support will no longer be included.

    Major Features

    • #2843 Add distributed restore point functionality
    • #3000 SkipScan to speed up SELECT DISTINCT

    Bugfixes

    • #2989 Refactor and harden size and stats functions
    • #3058 Reduce memory usage for distributed inserts
    • #3067 Fix extremely slow multi-node order by queries
    • #3082 Fix chunk index column name mapping
    • #3083 Keep Append pathkeys in ChunkAppend

    Thanks

    • @BowenGG for reporting an issue with indexes with INCLUDE
    • @fvannee for reporting an issue with ChunkAppend pathkeys
    • @pedrokost and @RobAtticus for reporting an issue with size functions on empty hypertables
    • @phemmer and @ryanbooz for reporting issues with slow multi-node order by queries
    • @stephane-moreau for reporting an issue with high memory usage during single-transaction inserts on a distributed hypertable.

    The music for this release was Michael Jackson's Thriller.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.2.0.tar.lzma(1.15 MB)
  • 2.1.1(Mar 31, 2021)

    This maintenance release contains bugfixes since the 2.1.0 release. We deem it high priority for upgrading.

    The bug fixes in this release address issues with CREATE INDEX and UPSERT for hypertables, custom jobs, and gapfill queries.

    This release marks TimescaleDB as a trusted extension in PG13, so that superuser privileges are no longer required to install the extension.

    Minor features

    • #2998 Mark timescaledb as trusted extension

    Bugfixes

    • #2948 Fix off by 4 error in histogram deserialize
    • #2974 Fix index creation for hypertables with dropped columns
    • #2990 Fix segfault in job_config_check for cagg
    • #2987 Fix crash due to txns in emit_log_hook_callback
    • #3042 Commit end transaction for CREATE INDEX
    • #3053 Fix gapfill/hashagg planner interaction
    • #3059 Fix UPSERT on hypertables with columns with defaults

    Thanks

    • @eloyekunle and @kitwestneat for reporting an issue with UPSERT
    • @jocrau for reporting an issue with index creation
    • @kev009 for fixing a compilation issue
    • @majozv and @pehlert for reporting an issue with time_bucket_gapfill

    The music for this release was David Bowie's Let's Dance.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.1.1.tar.lzma(1.12 MB)
  • 2.1.0(Feb 23, 2021)

    This release adds major new features since the 2.0.2 release. We deem it moderate priority for upgrading.

    This release adds the long-awaited support for PostgreSQL 13 to TimescaleDB. The minimum required PostgreSQL 13 version is 13.2 due to a security vulnerability affecting TimescaleDB functionality present in earlier versions of PostgreSQL 13.

    This release also relaxes some restrictions for compressed hypertables; namely, TimescaleDB now supports adding columns to compressed hypertables and renaming columns of compressed hypertables.
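
    For example, both of the following statements are now allowed while compression is enabled (the conditions hypertable and column names are hypothetical):

    ALTER TABLE conditions ADD COLUMN battery_level DOUBLE PRECISION;
    ALTER TABLE conditions RENAME COLUMN battery_level TO battery_pct;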

    Major Features

    • #2779 Add support for PostgreSQL 13

    Minor features

    • #2736 Support adding columns to hypertables with compression enabled
    • #2909 Support renaming columns of hypertables with compression enabled

    The music for this release was Rush's 2112.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.1.0.tar.lzma(1.12 MB)
  • 2.0.2(Feb 22, 2021)

    This maintenance release contains bugfixes since the 2.0.1 release. We deem it high priority for upgrading.

    The bug fixes in this release address issues with joins, the status of background jobs, and disabling compression. It also includes enhancements to continuous aggregates, including improved validation of policies and optimizations for faster refreshes when there are a lot of invalidations.

    Minor features

    • #2926 Optimize cagg refresh for small invalidations

    Bugfixes

    • #2850 Set status for backend in background jobs
    • #2883 Fix join qual propagation for nested joins
    • #2884 Add GUC to control join qual propagation
    • #2885 Fix compressed chunk check when disabling compression
    • #2908 Fix changing column type of clustered hypertables
    • #2942 Validate continuous aggregate policy

    Thanks

    • @zeeshanshabbir93 for reporting an issue with joins
    • @Antiarchitect for reporting the issue with slow refreshes of continuous aggregates.
    • @diego-hermida for reporting the issue about being unable to disable compression
    • @mtin for reporting the issue about wrong job status

    The music for this release was Justin Timberlake's The 20/20 Experience.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.0.2.tar.lzma(1.07 MB)
  • 2.0.1(Jan 28, 2021)

    This maintenance release contains bugfixes since the 2.0.0 release. We deem it high priority for upgrading.

    In particular the fixes contained in this maintenance release address issues in continuous aggregates, compression, JOINs with hypertables and when upgrading from previous versions.

    Bugfixes

    • #2772 Always validate existing database and extension
    • #2780 Fix config enum entries for remote data fetcher
    • #2806 Add check for dropped chunk on update
    • #2828 Improve cagg watermark caching
    • #2838 Fix catalog repair in update script
    • #2842 Do not mark job as started when setting next_start field
    • #2845 Fix continuous aggregate privileges during upgrade
    • #2851 Fix nested loop joins that involve compressed chunks
    • #2860 Fix projection in ChunkAppend nodes
    • #2861 Remove compression stat update from update script
    • #2865 Apply volatile function quals at decompresschunk node
    • #2866 Avoid partitionwise planning of partialize_agg
    • #2868 Fix corruption in gapfill plan
    • #2874 Fix partitionwise agg crash due to uninitialized memory

    Thanks

    • @alex88 for reporting an issue with joined hypertables
    • @brian-from-quantrocket for reporting an issue with extension update and dropped chunks
    • @dhodyn for reporting an issue when joining compressed chunks
    • @markatosi for reporting a segfault with partitionwise aggregates enabled
    • @PhilippJust for reporting an issue with add_job and initial_start
    • @sgorsh for reporting an issue when using pgAdmin on windows
    • @WarriorOfWire for reporting the bug with gapfill queries not being able to find pathkey item to sort

    The music for this release was Buckethead's Electric Tears.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.0.1.tar.lzma(1.06 MB)
  • 2.0.0(Dec 21, 2020)

    With this release, we are officially moving TimescaleDB 2.0 to GA, concluding several release candidates.

    TimescaleDB 2.0 adds the much-anticipated support for distributed hypertables (multi-node TimescaleDB), as well as new features and enhancements to core functionality to give users better clarity and more control and flexibility over their data.

    Multi-node architecture: In particular, with TimescaleDB 2.0, users can now create distributed hypertables across multiple instances of TimescaleDB, configured so that one instance serves as an access node and multiple others as data nodes. All queries for a distributed hypertable are issued to the access node, but inserted data and queries are pushed down across data nodes for greater scale and performance.

    Multi-node TimescaleDB can be self managed or, for easier operation, launched within Timescale's fully-managed cloud services.
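
    A minimal sketch of setting up multi-node from the access node, with hypothetical node names and a conditions table:

    -- Register two data nodes on the access node
    SELECT add_data_node('data_node_1', host => 'dn1.example.com');
    SELECT add_data_node('data_node_2', host => 'dn2.example.com');

    -- Create a distributed hypertable partitioned by time and location
    SELECT create_distributed_hypertable('conditions', 'time', 'location');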

    This release also adds:

    • Support for user-defined actions, allowing users to define, customize, and schedule automated tasks, which can be run by the built-in jobs scheduling framework now exposed to users.
    • Significant changes to continuous aggregates, which now separate view creation from the refresh policy. Users can now refresh individual regions of the continuous aggregate materialized view, or schedule automated refreshing via a policy (see the sketches after this list).
    • Redesigned informational views, including new (and more general) views for information about hypertable's dimensions and chunks, policies and user-defined actions, as well as support for multi-node TimescaleDB.
    • Moving all formerly enterprise features into our Community Edition, and updating Timescale License, which now provides additional (more permissive) rights to users and developers.
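
    Minimal sketches of the new APIs, assuming a hypothetical conditions_daily continuous aggregate and a hypothetical procedure for the user-defined action:

    -- Refresh one region of a continuous aggregate manually...
    CALL refresh_continuous_aggregate('conditions_daily', '2020-11-01', '2020-12-01');

    -- ...or schedule automated refreshes via a policy
    SELECT add_continuous_aggregate_policy('conditions_daily',
        start_offset      => INTERVAL '1 month',
        end_offset        => INTERVAL '1 hour',
        schedule_interval => INTERVAL '1 hour');

    -- A user-defined action run daily by the built-in job scheduler
    CREATE PROCEDURE note_job_run(job_id INT, config JSONB)
    LANGUAGE plpgsql AS $$ BEGIN RAISE NOTICE 'job % ran', job_id; END $$;
    SELECT add_job('note_job_run', '1 day');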

    Some of the changes above (e.g., continuous aggregates, updated informational views) do introduce breaking changes to APIs and are not backwards compatible. While the update scripts in TimescaleDB 2.0 will upgrade databases running TimescaleDB 1.x automatically, some of these API and feature changes may require changes to clients and/or upstream scripts that rely on the previous APIs. Before upgrading, we recommend reviewing upgrade documentation at docs.timescale.com for more details.

    Major Features

    TimescaleDB 2.0 moves the following major features to GA:

    • #1923 Add support for distributed hypertables
    • #2006 Add support for user-defined actions
    • #2125 #2221 Improve Continuous Aggregate API
    • #2084 #2089 #2098 #2417 Redesign informational views
    • #2435 Move enterprise features to community
    • #2437 Update Timescale License

    Previous Release Candidates

    • #2702 Release Candidate 4 (December 2, 2020)
    • #2637 Release Candidate 3 (November 12, 2020)
    • #2554 Release Candidate 2 (October 20, 2020)
    • #2478 Release Candidate 1 (October 1, 2020)

    Minor Features

    Since release candidate 4, there are several minor improvements:

    • #2746 Optimize locking for create chunk API
    • #2705 Block tableoid access on distributed hypertable
    • #2730 Do not allow unique index on compressed hypertables
    • #2764 Bootstrap data nodes with versioned extension

    Bugfixes

    Since release candidate 4, there are several bugfixes:

    • #2719 Support disabling compression on distributed hypertables
    • #2742 Fix compression status in chunks view for distributed chunks
    • #2751 Fix crash and cancel when adding data node
    • #2763 Fix check constraint on hypertable metadata table

    Thanks

    Thanks to all contributors for the TimescaleDB 2.0 release:

    • @airton-neto for reporting a bug in executing some queries with UNION
    • @nshah14285 for reporting an issue with propagating privileges
    • @kalman5 for reporting an issue with renaming constraints
    • @LbaNeXte for reporting a bug in decompression for queries with subqueries
    • @semtexzv for reporting an issue with continuous aggregates on int-based hypertables
    • @mr-ns for reporting an issue with privileges for creating chunks
    • @cloud-rocket for reporting an issue with setting an owner on continuous aggregate
    • @jocrau for reporting a bug during creating an index with transaction per chunk
    • @fvannee for reporting an issue with custom time types
    • @ArtificialPB for reporting a bug in executing queries with conditional ordering on compressed hypertable
    • @dutchgecko for reporting an issue with continuous aggregate datatype handling
    • @lambdaq for suggesting to improve error message in continuous aggregate creation
    • @francesco11112 for reporting memory issue on COPY
    • @Netskeh for reporting bug on time_bucket problem in continuous aggregates
    • @mr-ns for reporting the issue with CTEs on distributed hypertables
    • @akamensky for reporting an issue with recursive cache invalidation
    • @ryanbooz for reporting slow queries with real-time aggregation on continuous aggregates
    • @cevian for reporting an issue with disabling compression on distributed hypertables

    The music for this release was Synchronicity by The Police.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.0.0.tar.lzma(1.06 MB)
  • 2.0.0-rc4(Dec 21, 2020)

    This release candidate contains bugfixes since the previous release candidate, as well as additional minor features. It improves validation of configuration changes for background jobs, adds support for gapfill on distributed tables, improves memory handling for large COPY, and improves compression for distributed hypertables.

    Minor Features

    • #2689 Check configuration in alter_job and add_job
    • #2696 Support gapfill on distributed hypertable
    • #2468 Show more information in get_git_commit
    • #2678 Include user actions into job stats view
    • #2664 Fix support for complex aggregate expression
    • #2672 Add hypertable to continuous aggregates view
    • #2662 Save compression metadata settings on access node
    • #2707 Introduce additional db for data node bootstrapping

    Bugfixes

    • #2688 Fix crash for concurrent drop and compress chunk
    • #2666 Fix timeout handling in async library
    • #2683 Fix crash in add_job when given NULL interval
    • #2698 Improve memory handling for remote COPY
    • #2555 Set metadata for chunks compressed before 2.0
    Source code(tar.gz)
    Source code(zip)
  • 2.0.0-rc3(Dec 21, 2020)

    This release candidate contains bugfixes since the previous release candidate, as well as additional minor features including support for "user-mapping" authentication between access/data nodes and an experimental API for refreshing continuous aggregates on individual chunks.

    Minor Features

    • #2627 Add optional user mappings support
    • #2635 Add API to refresh continuous aggregate on chunk

    Bugfixes

    • #2560 Fix SCHEMA DROP CASCADE with continuous aggregates
    • #2593 Set explicitly all lock parameters in alter_job
    • #2604 Fix chunk creation on hypertables with foreign key constraints
    • #2610 Support analyze of internal compression table
    • #2612 Optimize internal cagg_watermark function
    • #2613 Refresh correct partial during refresh on drop
    • #2617 Fix validation of available extensions on data node
    • #2619 Fix segfault in decompress_chunk for chunks with dropped columns
    • #2620 Fix DROP CASCADE for continuous aggregate
    • #2625 Fix subquery errors when using AsyncAppend
    • #2626 Fix incorrect total_table_pages setting for compressed scan
    • #2628 Stop recursion in cache invalidation
    Source code(tar.gz)
    Source code(zip)
  • 2.0.0-rc2(Dec 21, 2020)

    This release candidate contains bugfixes since the previous release candidate.

    Minor Features

    • #2520 Support non-transactional distributed_exec

    Bugfixes

    • #2307 Overflow handling for refresh policy with integer time
    • #2503 Remove error for correct bootstrap of data node
    • #2507 Fix validation logic when adding a new data node
    • #2510 Fix outer join qual propagation
    • #2514 Lock dimension slices when creating new chunk
    • #2515 Add if_attached argument to detach_data_node()
    • #2517 Fix member access within misaligned address in chunk_update_colstats
    • #2525 Fix index creation on hypertables with dropped columns
    • #2543 Pass correct status to lock_job
    • #2544 Assume custom time type range is same as bigint
    • #2563 Fix DecompressChunk path generation
    • #2564 Improve continuous aggregate datatype handling
    • #2568 Change use of ssl_dir GUC
    • #2571 Make errors and messages conform to style guide
    • #2577 Exclude compressed chunks from ANALYZE/VACUUM
    Source code(tar.gz)
    Source code(zip)
  • 2.0.0-rc1(Dec 21, 2020)

    This release adds major new features and bugfixes since the 1.7.4 release. We deem it moderate priority for upgrading.

    This release adds the long-awaited support for distributed hypertables to TimescaleDB. With 2.0, users can create distributed hypertables across multiple instances of TimescaleDB, configured so that one instance serves as an access node and multiple others as data nodes. All queries for a distributed hypertable are issued to the access node, but inserted data and queries are pushed down across data nodes for greater scale and performance.

    This release also adds support for user-defined actions allowing users to define actions that are run by the TimescaleDB automation framework.

    In addition to these major new features, the 2.0 branch introduces breaking changes to APIs and existing features, such as continuous aggregates. These changes are not backwards compatible and might require changes to clients and/or scripts that rely on the previous APIs. Please review our updated documentation and do proper testing to ensure compatibility with your existing applications.

    The noticeable breaking changes in APIs are:

    • Redefined functions for policies
    • A continuous aggregate is now created with CREATE MATERIALIZED VIEW instead of CREATE VIEW and automated refreshing requires adding a policy via add_continuous_aggregate_policy
    • Redesign of informational views, including new (and more general) views for information about policies and user-defined actions

    This release candidate is upgradable, so if you are on a previous release (e.g., 1.7.4) you can upgrade to the release candidate and later expect to be able to upgrade to the final 2.0 release. However, please carefully consider your compatibility requirements before upgrading.

    Major Features

    • #1923 Add support for distributed hypertables
    • #2006 Add support for user-defined actions
    • #2435 Move enterprise features to community
    • #2437 Update Timescale License

    Minor Features

    • #2011 Constify TIMESTAMPTZ OP INTERVAL in constraints
    • #2105 Support moving compressed chunks

    Bugfixes

    • #1843 Improve handling of "dropped" chunks
    • #1886 Change ChunkAppend leader to use worker subplan
    • #2116 Propagate privileges from hypertables to chunks
    • #2263 Fix timestamp overflow in time_bucket optimization
    • #2270 Fix handling of non-reference counted TupleDescs in gapfill
    • #2325 Fix rename constraint/rename index
    • #2370 Fix detection of hypertables in subqueries
    • #2376 Fix caggs width expression handling on int based hypertables
    • #2416 Check insert privileges to create chunk
    • #2428 Allow owner change of continuous aggregate
    • #2436 Propagate grants in continuous aggregates
    Source code(tar.gz)
    Source code(zip)
  • 1.7.4(Sep 8, 2020)

    This maintenance release contains bugfixes since the 1.7.3 release. We deem it high priority for upgrading if TimescaleDB is deployed with replicas (synchronous or asynchronous).

    In particular, the fixes contained in this maintenance release address an issue with running queries on compressed hypertables on standby nodes.

    Bugfixes

    • #2336 Remove tuple lock on select path

    The music for this release was Rufus and Chaka Khan's 1974 classic Rags to Rufus.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.7.4.tar.lzma(834.48 KB)
  • 1.7.3(Aug 27, 2020)

    This maintenance release contains bugfixes since the 1.7.2 release. We deem it high priority for upgrading.

    In particular the fixes contained in this maintenance release address issues in compression, drop_chunks and the background worker scheduler.

    Bugfixes

    • #2059 Improve inferring start and stop arguments from gapfill query
    • #2067 Support moving compressed chunks
    • #2068 Apply SET TABLESPACE for compressed chunks
    • #2090 Fix index creation with IF NOT EXISTS for existing indexes
    • #2092 Fix delete on tables involving hypertables with compression
    • #2164 Fix telemetry installed_time format
    • #2184 Fix background worker scheduler memory consumption
    • #2222 Fix negative bitmapset member not allowed in decompression
    • #2255 Propagate privileges from hypertables to chunks
    • #2256 Fix segfault in chunk_append with space partitioning
    • #2259 Fix recursion in cache processing
    • #2261 Lock dimension slice tuple when scanning

    Thanks

    • @akamensky for reporting an issue with drop_chunks and ChunkAppend with space partitioning
    • @dewetburger430 for reporting an issue with setting tablespace for compressed chunks
    • @fvannee for reporting an issue with cache invalidation
    • @nexces for reporting an issue with ChunkAppend on space-partitioned hypertables
    • @PichetGoulu for reporting an issue with index creation and IF NOT EXISTS
    • @prathamesh-sonpatki for contributing a typo fix
    • @sezaru for reporting an issue with background worker scheduler memory consumption

    The music for this release was Bob Dylan's Blonde on Blonde.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.7.3.tar.lzma(834.18 KB)
  • 1.7.2(Jul 7, 2020)

    This maintenance release contains bugfixes since the 1.7.1 release. We deem it medium priority for upgrading.

    In particular the fixes contained in this maintenance release address bugs in continuous aggregates, drop_chunks and compression.

    Features

    • #1877 Add support for fast pruning of inlined functions

    Bugfixes

    • #1908 Fix drop_chunks with unique constraints when cascade_to_materializations is false
    • #1915 Check for database in extension_current_state
    • #1918 Unify chunk index creation
    • #1932 Change compression locking order
    • #1938 Fix gapfill locf treat_null_as_missing
    • #1982 Check for disabled telemetry earlier
    • #1984 Fix compression bit array left shift count
    • #1997 Add checks for read-only transactions
    • #2002 Reset restoring gucs rather than explicitly setting 'off'
    • #2028 Fix locking in drop_chunks
    • #2031 Enable compression for tables with compound foreign key
    • #2039 Fix segfault in create_trigger_handler
    • #2043 Fix segfault in cagg_update_view_definition
    • #2046 Use index tablespace during chunk creation
    • #2047 Better handling of chunk insert state destruction
    • #2049 Fix handling of PlaceHolderVar in DecompressChunk
    • #2051 Fix tuple concurrently deleted error with multiple continuous aggregates

    Thanks

    • @akamensky for reporting an issue with telemetry and an issue with drop_chunks
    • @darko408 for reporting an issue with decompression
    • @dmitri191 for reporting an issue with failing background workers
    • @eduardotsj for reporting an issue with indexes not inheriting tablespace settings
    • @fourseventy for reporting an issue with multiple continuous aggregrates
    • @fvannee for contributing optimizations for pruning inlined functions
    • @jflambert for reporting an issue with failing telemetry jobs
    • @nbouscal for reporting an issue with compression jobs locking referenced tables
    • @nicolai6120 for reporting an issue with locf and treat_null_as_missing
    • @nomanor for reporting an issue with expression index with table references
    • @olernov for contributing a fix for compressing tables with compound foreign keys
    • @werjo for reporting an issue with drop_chunks and unique constraints

    The music for this release was David Bowie's The Rise and Fall of Ziggy Stardust and the Spiders From Mars.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.7.2.tar.lzma(828.02 KB)
  • 1.7.1(May 18, 2020)

    This maintenance release contains bugfixes since the 1.7.0 release. We deem it medium priority for upgrading and high priority for users with multiple continuous aggregates.

    In particular the fixes contained in this maintenance release address bugs in continuous aggregates with real-time aggregation and PostgreSQL 12 support.

    Bugfixes

    • #1834 Define strerror() for Windows
    • #1846 Fix segfault on COPY to hypertable
    • #1850 Fix scheduler failure due to bad next_start_time for jobs
    • #1851 Fix hypertable expansion for UNION ALL
    • #1854 Fix reorder policy job to skip compressed chunks
    • #1861 Fix qual pushdown for compressed hypertables where quals have casts
    • #1864 Fix issue with subplan selection in parallel ChunkAppend
    • #1868 Add support for WHERE, HAVING clauses with real time aggregates
    • #1869 Fix real time aggregate support for multiple continuous aggregates
    • #1871 Don't rely on timescaledb.restoring for upgrade
    • #1875 Fix hypertable detection in subqueries
    • #1884 Fix crash on SELECT WHERE NOT with empty table

    Thanks

    • @airton-neto for reporting an issue with queries over UNIONs of hypertables
    • @dhodyn for reporting an issue with UNION ALL queries
    • @frostwind for reporting an issue with casts in where clauses on compressed hypertables
    • @fvannee for reporting an issue with hypertable detection in inlined SQL functions and an issue with COPY
    • @hgiasac for reporting missing where clause with real time aggregates
    • @louisth for reporting an issue with real-time aggregation and multiple continuous aggregates
    • @michael-sayapin for reporting an issue with INSERTs and WHERE NOT EXISTS
    • @olernov for reporting and fixing an issue with compressed chunks in the reorder policy
    • @pehlert for reporting an issue with pg_upgrade

    The music for this release was The Rolling Stones' Sticky Fingers.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.7.1.tar.lzma(819.38 KB)
  • 1.7.0(Apr 16, 2020)

    This release adds major new features and bugfixes since the 1.6.1 release. We deem it moderate priority for upgrading.

    This release adds the long-awaited support for PostgreSQL 12 to TimescaleDB.

    This release also adds a new default behavior when querying continuous aggregates that we call real-time aggregation. A query on a continuous aggregate will now combine materialized data with recent data that has yet to be materialized.

    Note that only newly created continuous aggregates will have this real-time query behavior, although it can be enabled on existing continuous aggregates with a configuration setting as follows:

    ALTER VIEW continuous_view_name SET (timescaledb.materialized_only=false);

    This release also moves several data management lifecycle features to the Community version of TimescaleDB (from Enterprise), including data reordering and data retention policies.
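
    For reference, minimal sketches of the 1.7-era policy APIs now available in Community, assuming a hypothetical conditions hypertable and index:

    -- Reorder chunks according to an index
    SELECT add_reorder_policy('conditions', 'conditions_time_idx');

    -- Drop raw data older than six months
    SELECT add_drop_chunks_policy('conditions', INTERVAL '6 months');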

    Deprecation Notice: Please note that with the release of Timescale 1.7, we are deprecating support for PostgreSQL 9.6.x and 10.x. The current plan is that the Timescale 2.0 release later this year will only support PostgreSQL major versions 11.x, 12.x, or newer.

    Major Features

    • #1807 Add support for PostgreSQL 12
    • #1685 Add support for real-time aggregation on continuous aggregates

    Bugfixes

    • #1665 Add ignore_invalidation_older_than to timescaledb_information.continuous_aggregates view
    • #1750 Handle undefined ignore_invalidation_older_than
    • #1757 Restrict watermark to max for continuous aggregates
    • #1769 Add rescan function to CompressChunkDml CustomScan node
    • #1785 Fix last_run_success value in continuous_aggregate_stats view
    • #1801 Include parallel leader in plan execution
    • #1808 Fix ts_hypertable_get_all for compressed tables
    • #1828 Ignore dropped chunks in compressed_chunk_stats

    Licensing changes

    • Reorder and policies around reorder and drop chunks are now accessible to community users, not just enterprise
    • Gapfill functionality no longer warns about expired license

    Thanks

    • @t0k4rt for reporting an issue with parallel chunk append plans
    • @alxndrdude for reporting an issue when trying to insert into compressed chunks
    • @Olernov for reporting and fixing an issue with show_chunks and drop_chunks for compressed hypertables
    • @mjb512 for reporting an issue with INSERTs in CTEs in cached plans
    • @dmarsh19 for reporting and fixing an issue with dropped chunks in compressed_chunk_stats

    The music for this release was Bon Jovi's Slippery When Wet.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.7.0.tar.lzma(810.38 KB)
  • 1.6.1(Mar 18, 2020)

    This maintenance release contains bugfixes since the 1.6.0 release. We deem it medium priority for upgrading.

    In particular the fixes contained in this maintenance release address bugs in continuous aggregates, time_bucket_gapfill, partial index handling and drop_chunks.

    For this release only, you will need to restart the database after upgrade before restoring a backup.

    Minor Features

    • #1666 Support drop_chunks API for continuous aggregates
    • #1711 Change log level for continuous aggregate materialization messages

    Bugfixes

    • #1630 Print notice for COPY TO on hypertable
    • #1648 Drop chunks from materialized hypertable
    • #1668 Cannot add dimension if hypertable has empty chunks
    • #1673 Fix crash when interrupting create_hypertable
    • #1674 Fix time_bucket_gapfill's interaction with GROUP BY
    • #1686 Fix order by queries on compressed hypertables that have char segment by column
    • #1687 Fix issue with disabling compression when foreign keys are present
    • #1688 Handle many BGW jobs better
    • #1698 Add logic to ignore dropped chunks in hypertable_relation_size
    • #1704 Fix bad plan for continuous aggregate materialization
    • #1709 Prevent starting background workers with NOLOGIN
    • #1713 Fix miscellaneous background worker issues
    • #1715 Fix issue with overly aggressive chunk exclusion in outer joins
    • #1719 Fix restoring/scheduler entrypoint to avoid BGW death
    • #1720 Add scheduler cache invalidations
    • #1727 Fix compressing INTERVAL columns
    • #1728 Handle Sort nodes in ConstraintAwareAppend
    • #1730 Fix partial index handling on hypertables
    • #1739 Use release OpenSSL DLLs for debug builds on Windows
    • #1740 Fix invalidation entries from multiple caggs on same hypertable
    • #1743 Fix continuous aggregate materialization timezone handling
    • #1748 Fix remove_drop_chunks_policy for continuous aggregates

    Thanks

    • @RJPhillips01 for reporting an issue with drop chunks.
    • @b4eEx for reporting an issue with disabling compression.
    • @darko408 for reporting an issue with order by on compressed hypertables
    • @mrechte for reporting an issue with compressing INTERVAL columns
    • @tstaehli for reporting an issue with ConstraintAwareAppend
    • @chadshowalter for reporting an issue with partial index on hypertables
    • @geoffreybennett for reporting an issue with create_hypertable when interrupting operations
    • @alxndrdude for reporting an issue with background workers during restore
    • @zcavaliero for reporting and fixing an issue with dropped columns in hypertable_relation_size
    • @ismailakpolat for reporting an issue with cagg materialization on hypertables with TIMESTAMP column

    The music for this release was Grateful Dead's In The Dark.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.6.1.tar.lzma(766.69 KB)
  • 1.6.0(Jan 15, 2020)

    This release adds major new features and bugfixes since the 1.5.1 release. We deem it moderate priority for upgrading.

    The major new feature in this release allows users to keep the aggregated data in a continuous aggregate while dropping the raw data with drop_chunks. This allows users to save storage by keeping only the aggregates.

    The semantics of the refresh_lag parameter for continuous aggregates have been changed to be relative to the current timestamp instead of the maximum value in the table. This change requires that an integer_now function be set on hypertables with integer-based time columns in order to use continuous aggregates on those tables.

    We added a timescaledb.ignore_invalidation_older_than parameter for continuous aggregates. This parameter accepts a time interval (e.g. 1 month). If set, it limits the amount of time for which invalidations are processed. Thus, if timescaledb.ignore_invalidation_older_than = '1 month', then any modifications to data older than 1 month from the current timestamp at modification time may not cause the continuous aggregate to be updated. This limits the amount of work that a backfill can trigger. By default, all invalidations are processed.
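
    Minimal sketches using the 1.6-era APIs, assuming a hypothetical conditions hypertable with a conditions_daily continuous aggregate:

    -- Keep the aggregate while dropping raw chunks older than three months
    SELECT drop_chunks(INTERVAL '3 months', 'conditions',
                       cascade_to_materializations => FALSE);

    -- Only process invalidations from the last month of modifications
    ALTER VIEW conditions_daily SET (timescaledb.ignore_invalidation_older_than = '1 month');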

    Major Features

    • #1589 Allow drop_chunks while keeping continuous aggregates

    Minor Features

    • #1568 Add ignore_invalidation_older_than option to continuous aggs
    • #1575 Reorder group-by clause for continuous aggregates
    • #1592 Improve continuous agg user messages

    Bugfixes

    • #1565 Fix partial select query for continuous aggregate
    • #1591 Fix locf treat_null_as_missing option
    • #1594 Fix error in compression constraint check
    • #1603 Add join info to compressed chunk
    • #1606 Fix constify params during runtime exclusion
    • #1607 Delete compression policy when drop hypertable
    • #1608 Add jobs to timescaledb_information.policy_stats
    • #1609 Fix bug with parent table in decompression
    • #1624 Fix drop_chunks for ApacheOnly
    • #1632 Check for NULL before dereferencing variable

    Thanks

    • @optijon for reporting an issue with locf treat_null_as_missing option
    • @acarrera42 for reporting an issue with constify params during runtime exclusion
    • @ChristopherZellermann for reporting an issue with the compression constraint check
    • @SimonDelamare for reporting an issue with joining hypertables with compression

    The music for this release was DJ Shadow's Endtroducing....

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.6.0.tar.lzma(757.67 KB)
  • 1.5.1(Nov 12, 2019)

    This maintenance release contains bugfixes since the 1.5.0 release. We deem it low priority for upgrading.

    In particular, the fixes contained in this maintenance release address potential segfaults; no other security vulnerabilities are involved. The bugfixes are related to bloom indexes and updates from previous versions.

    Bugfixes

    • #1523 Fix bad SQL updates from previous updates
    • #1526 Fix hypertable model
    • #1530 Set active snapshots in multi-xact index create

    Thanks

    • @84660320 for reporting an issue with bloom indexes
    • @gumshoes, @perhamm, @jermudgeon, and @gmisagm for reporting the issue with updates

    The music for this release was Van Halen's Van Halen.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.5.1.tar.lzma(741.42 KB)