TimescaleDB

TimescaleDB is an open-source database designed to make SQL scalable for time-series data. It is built on PostgreSQL and packaged as a PostgreSQL extension, providing automatic partitioning across time and space (via a partitioning key), as well as full SQL support.

Timescale Cloud is our fully managed, hosted version of TimescaleDB, available in the cloud of your choice (pay-as-you-go, with free trial credits to start). To determine which option is best for you, see Timescale Products for more information about our Apache-2 version, TimescaleDB Community (self-hosted), and Timescale Cloud (hosted), including feature comparisons, FAQs, documentation, and support.

Below is an introduction to TimescaleDB. For more information, see the Resources section further down.

For reference and clarity, all code files in this repository reference licensing in their header (either Apache License, Version 2.0 or Timescale License (TSL)). Apache-2 licensed binaries can be built by passing -DAPACHE_ONLY=1 to bootstrap.

Contributors welcome.

(To build TimescaleDB from source, see instructions in Building from source.)

Using TimescaleDB

TimescaleDB scales PostgreSQL for time-series data via automatic partitioning across time and space (partitioning key), yet retains the standard PostgreSQL interface.

In other words, TimescaleDB exposes what look like regular tables, but they are actually an abstraction (or a virtual view) over many individual tables that hold the actual data. This single-table view, which we call a hypertable, is composed of many chunks, created by partitioning the hypertable's data in one or two dimensions: by a time interval, and by an (optional) "partition key" such as device id, location, user id, etc. (Architecture discussion)
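
The one-dimensional case appears in the quick start below; as a sketch of the two-dimensional case, a space partition can be added as a third argument to create_hypertable (the table name, column names, and partition count here are purely illustrative):

-- Partition by time, and additionally hash-partition by device_id
-- into 4 space partitions.
SELECT create_hypertable('measurements', 'time', 'device_id', 4);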

Virtually all user interactions with TimescaleDB are with hypertables. Creating tables and indexes, altering tables, inserting data, selecting data, etc., can (and should) all be executed on the hypertable.

From the perspective of both use and management, TimescaleDB just looks and feels like PostgreSQL, and can be managed and queried as such.

Before you start

PostgreSQL's out-of-the-box settings are typically too conservative for modern servers and TimescaleDB. You should make sure your postgresql.conf settings are tuned, either by running timescaledb-tune or by editing them manually.
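
For the manual route, individual settings can be changed with stock PostgreSQL commands; the values below are placeholders, not tuning recommendations:

-- ALTER SYSTEM writes settings into postgresql.auto.conf.
-- Memory settings such as shared_buffers only take effect after a restart.
ALTER SYSTEM SET shared_buffers = '2GB';
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();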

Creating a hypertable

-- Do not forget to create timescaledb extension
CREATE EXTENSION timescaledb;

-- We start by creating a regular SQL table
CREATE TABLE conditions (
  time        TIMESTAMPTZ       NOT NULL,
  location    TEXT              NOT NULL,
  temperature DOUBLE PRECISION  NULL,
  humidity    DOUBLE PRECISION  NULL
);

-- Then we convert it into a hypertable that is partitioned by time
SELECT create_hypertable('conditions', 'time');
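
create_hypertable also accepts optional arguments; for instance, a chunk interval other than the default can be set at conversion time (the one-day value is illustrative):

-- Same conversion, but with explicit one-day chunks.
SELECT create_hypertable('conditions', 'time',
                         chunk_time_interval => INTERVAL '1 day');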

Inserting and querying data

Inserting data into the hypertable is done via normal SQL commands:

INSERT INTO conditions(time, location, temperature, humidity)
  VALUES (NOW(), 'office', 70.0, 50.0);

SELECT * FROM conditions ORDER BY time DESC LIMIT 100;

SELECT time_bucket('15 minutes', time) AS fifteen_min,
    location, COUNT(*),
    MAX(temperature) AS max_temp,
    MAX(humidity) AS max_hum
  FROM conditions
  WHERE time > NOW() - interval '3 hours'
  GROUP BY fifteen_min, location
  ORDER BY fifteen_min DESC, max_temp DESC;

In addition, TimescaleDB includes functions for time-series analysis that are not present in vanilla PostgreSQL, such as the time_bucket function used above.
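
Two more such functions are the first and last aggregates, which return the value of one column ordered by another; a sketch against the conditions table above:

-- Earliest and latest temperature reading per location.
SELECT location,
       first(temperature, time) AS first_temp,
       last(temperature, time)  AS last_temp
  FROM conditions
  GROUP BY location;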

Installation

TimescaleDB is available pre-packaged for several platforms:

Timescale Cloud (database-as-a-service) is available via free trial. You create database instances in the cloud of your choice and use TimescaleDB to power your queries, automating common operational tasks and reducing management overhead.

We recommend following our detailed installation instructions.

To build from source, see instructions here.

Resources

Useful tools

  • timescaledb-tune: Helps set your PostgreSQL configuration settings based on your system's resources.
  • timescaledb-parallel-copy: Parallelize your initial bulk loading by using PostgreSQL's COPY across multiple workers.
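
For context, timescaledb-parallel-copy spreads across multiple workers what a single COPY would otherwise do serially; the single-worker baseline looks like this (the file path is hypothetical):

COPY conditions FROM '/tmp/conditions.csv' WITH (FORMAT csv, HEADER true);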

Additional documentation

Community & help

Releases & updates

Contributing

Comments
  • Amazon RDS, GCP Cloud SQL, Azure, Heroku support?

    RDS comes with pre-packaged extensions... are there any talks with AWS to include timescale in their supported extensions, given that timescaledb uses the Apache 2.0 license?

    question feedback-wanted 
    opened by jqnatividad 87
  • Support for month, year and time zones in time_bucket

    Time_bucket feature works great but why can't we use SELECT time_bucket('1 month', date)...

    I would like to know how to get month bucket and even yearly buckets?
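
    A common workaround until calendar-aware buckets are supported is PostgreSQL's own date_trunc, which does understand months and years (the table and column names here are illustrative, following the question):

    SELECT date_trunc('month', date) AS month_bucket, COUNT(*)
      FROM my_table
      GROUP BY month_bucket;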

    time_bucket feature-request 
    opened by jean-francois-labbe 76
  • Segfault when drop_chunks with 1.7.1

    Relevant system information:

    • OS: Centos 7.5
    • PostgreSQL version (output of postgres --version): 12.3
    • TimescaleDB version (output of \dx in psql): 1.7.1
    • Installation method: YUM

    Describe the bug After the upgrade of Postgresql (10.9 -> 12.3) and TimescaleDB extension (1.6.1 -> 1.7.1) we see repeated segfaults when executing drop_chunks:

    Under GDB the segfault yields:

    Program received signal SIGSEGV, Segmentation fault.
    0x00007fa710efc259 in cmp_slices_by_dimension_id () from /usr/pgsql-12/lib/timescaledb-1.7.1.so
    

    To Reproduce Not certain how to reproduce it on any other setup. This happens currently in our staging environment.

    Expected behavior Chunks dropped and disk space freed.

    Actual behavior Segfault

    bug segfault 
    opened by akamensky 62
  • Timescale hypertables don't support ON CONFLICT ON CONSTRAINT upsert statement (breaking Hasura compatibility)

    Relevant system information:

    • OS: Ubuntu 18.04
    • PostgreSQL version: PostgreSQL 11.1
    • TimescaleDB version: 1.2.1
    • Installation method: docker timescale/timescaledb:1.2.1-pg11

    Describe the bug Here is my schema:

    CREATE TABLE data (
        "timestamp" timestamp with time zone NOT NULL,
        "uuid" uuid NOT NULL
    );
    
    ALTER TABLE ONLY data
        ADD CONSTRAINT data_pkey PRIMARY KEY ("uuid", "timestamp");
    
    SELECT create_hypertable('data', 'timestamp');
    

    To Reproduce Run this SQL

    INSERT INTO "data" ( "timestamp", "uuid" ) VALUES ( '2019-03-06T16:46:46.988Z', '9db6dbe1-cd14-1d17-38c0-4d4a36679fd6' ) ON CONFLICT ON CONSTRAINT "data_pkey" DO NOTHING RETURNING *
    

    Expected behavior Expected to correctly insert/upsert data without error, as it is doing with PostgreSQL 10.6 and timescale 1.2.1 (docker timescale/timescaledb:1.2.1-pg10).

    Actual behavior Postgres throws the following error:

    ERROR:  hypertables do not support ON CONFLICT statements that reference constraints
    HINT:  Use column names to infer indexes instead.
    

    Note: mention of my error in this file: https://github.com/timescale/timescaledb/blob/master/test/expected/upsert.out
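
    As the HINT says, the insert does work on hypertables when the unique index is inferred from a column list instead of named via the constraint; the statement from the repro, rewritten that way:

    INSERT INTO "data" ( "timestamp", "uuid" )
      VALUES ( '2019-03-06T16:46:46.988Z', '9db6dbe1-cd14-1d17-38c0-4d4a36679fd6' )
      ON CONFLICT ( "uuid", "timestamp" ) DO NOTHING RETURNING *;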

    feature-request hypertable Query Experience Team 
    opened by waterdrop01 37
  • [Bug]: Segfault when querying continuous aggregate in docker container

    What type of bug is this?

    Crash

    What subsystems and features are affected?

    Continuous aggregate, Query executor

    What happened?

    I'm seeing an inconsistent crash when running timescale on my development machine within docker. This happens most commonly when I run my unit test suite.

    I'm using the timescale image: timescale/timescaledb-postgis:latest-pg12

    When a particular query runs, it causes a postgres segfault which restarts the database and crashes the test suite.

    TimescaleDB version affected

    2.5.0

    PostgreSQL version used

    12.9

    What operating system did you use?

    Debian GNU/Linux 10 (buster)

    What installation method did you use?

    Docker

    What platform did you run on?

    Other

    Relevant log output and stack trace

    2022-02-03 21:32:36.063 UTC [14201] LOG:  TimescaleDB background worker launcher connected to shared catalogs
    2022-02-03 21:32:36.345 UTC [1] LOG:  server process (PID 14206) was terminated by signal 11: Segmentation fault
    2022-02-03 21:32:36.345 UTC [1] DETAIL:  Failed process was running: SELECT "hopthru_tripreference"."trip_id", COALESCE(SUM("hopthru_ca_boarding_routes_trips"."ons"), 0) AS "sum_ons", COALESCE(SUM("hopthru_ca_boarding_routes_trips"."offs"), 0) AS "sum_offs", COUNT(DISTINCT "hopthru_ca_boarding_routes_trips"."service_date") AS "total_days", COALESCE(SUM("hopthru_ca_boarding_routes_trips"."passenger_distance_travelled"), 0) AS "sum_passenger_distance_travelled", COALESCE(SUM("hopthru_ca_boarding_routes_trips"."load"), 0) AS "sum_load", MAX("hopthru_ca_boarding_routes_trips"."maxload") AS "maxload", SUM("hopthru_ca_boarding_routes_trips"."load_count") AS "load_count" FROM "hopthru_ca_boarding_routes_trips" INNER JOIN "hopthru_tripreference" ON ("hopthru_ca_boarding_routes_trips"."trip_reference_id" = "hopthru_tripreference"."id") INNER JOIN "hopthru_routereference" ON ("hopthru_ca_boarding_routes_trips"."route_reference_id" = "hopthru_routereference"."id") INNER JOIN "hopthru_routemaster" ON ("hopthru_routereference"."route_id" = "hopthru_routemaster"."id") WHERE ("hopthru_ca_bo
    2022-02-03 21:32:36.345 UTC [1] LOG:  terminating any other active server processes
    2022-02-03 21:32:36.345 UTC [14199] WARNING:  terminating connection because of crash of another server process
    2022-02-03 21:32:36.345 UTC [14199] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
    2022-02-03 21:32:36.345 UTC [14199] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
    
    -----
    
    Note, here's some other relevant log when reproducing this:
    
    2022-02-03 22:56:59.866 UTC [243] ERROR:  invalid memory alloc request size 18446744070017253376
    
    -----
    
    Stack trace from a core dump:
    
    postgres@cfe2c1e7f013:~$ gdb postgres /tmp/core.cfe2c1e7f013.postgres.1643929681
    GNU gdb (Ubuntu 10.1-2ubuntu2) 10.1.90.20210411-git
    Copyright (C) 2021 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    Type "show copying" and "show warranty" for details.
    This GDB was configured as "x86_64-linux-gnu".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <https://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
        <http://www.gnu.org/software/gdb/documentation/>.
    
    For help, type "help".
    Type "apropos word" to search for commands related to "word"...
    Reading symbols from postgres...
    Reading symbols from /usr/lib/debug/.build-id/e4/51d1e56e9d52f8e6a98c797a34d02043aa279e.debug...
    
    warning: Can't open file /dev/shm/PostgreSQL.1664165300 during file-backed mapping note processing
    
    warning: Can't open file /SYSV0052e2c1 (deleted) during file-backed mapping note processing
    
    warning: Can't open file /dev/zero (deleted) during file-backed mapping note processing
    [New LWP 206]
    Warning: couldn't activate thread debugging using libthread_db: Cannot find new threads: generic error
    
    warning: File "/usr/lib/x86_64-linux-gnu/libthread_db-1.0.so" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load".
    To enable execution of this file add
    	add-auto-load-safe-path /usr/lib/x86_64-linux-gnu/libthread_db-1.0.so
    line to your configuration file "/home/postgres/.gdbinit".
    To completely disable this security protection add
    	set auto-load safe-path /
    line to your configuration file "/home/postgres/.gdbinit".
    For more information about this security protection see the
    "Auto-loading safe path" section in the GDB manual.  E.g., run from the shell:
    	info "(gdb)Auto-loading safe path"
    
    warning: Unable to find libthread_db matching inferior's thread library, thread debugging will not be available.
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    Core was generated by `postgres:'.
    Program terminated with signal SIGSEGV, Segmentation fault.
    #0  0x000055d9533f5a2a in accum_sum_final (accum=0x55d9552157e0, result=result@entry=0x7ffc044a6e30) at utils/adt/./build/../src/backend/utils/adt/numeric.c:9473
    9473	utils/adt/./build/../src/backend/utils/adt/numeric.c: No such file or directory.
    (gdb) backtrace
    #0  0x000055d9533f5a2a in accum_sum_final (accum=0x55d9552157e0, result=result@entry=0x7ffc044a6e30) at utils/adt/./build/../src/backend/utils/adt/numeric.c:9473
    #1  0x000055d9533f7a9b in numeric_sum (fcinfo=<optimized out>) at utils/adt/./build/../src/backend/utils/adt/numeric.c:4948
    #2  0x00007fdb2bf86f88 in tsl_finalize_agg_ffunc (fcinfo=0x7ffc044a6f20) at /build/timescaledb/tsl/src/partialize_finalize.c:550
    #3  0x000055d9531c467f in finalize_aggregate (aggstate=aggstate@entry=0x55d954498408, peragg=peragg@entry=0x55d95506c3e0, pergroupstate=0x55d95519f988, resultVal=resultVal@entry=0x55d9542e19a8,
        resultIsNull=resultIsNull@entry=0x55d9542e19d5) at executor/./build/../src/backend/executor/nodeAgg.c:954
    #4  0x000055d9531c7a37 in finalize_aggregates (aggstate=aggstate@entry=0x55d954498408, peraggs=peraggs@entry=0x55d95506c228, pergroup=pergroup@entry=0x55d95519f938) at executor/./build/../src/backend/executor/nodeAgg.c:1163
    #5  0x000055d9531c8188 in agg_retrieve_hash_table (aggstate=aggstate@entry=0x55d954498408) at executor/./build/../src/backend/executor/nodeAgg.c:2067
    #6  0x000055d9531c8878 in agg_retrieve_direct (aggstate=<optimized out>) at executor/./build/../src/backend/executor/nodeAgg.c:1682
    #7  ExecAgg (pstate=0x55d954498408) at executor/./build/../src/backend/executor/nodeAgg.c:1563
    #8  0x000055d9531ba08a in ExecScan (node=0x55d9544982d8, accessMtd=0x55d9531e0380 <SubqueryNext>, recheckMtd=0x55d9531dd950 <SubqueryRecheck>) at executor/./build/../src/backend/executor/execScan.c:200
    #9  0x000055d9531c6aa9 in ExecProcNode (node=0x55d9544982d8) at executor/./build/../src/include/executor/executor.h:242
    #10 ExecAppend (pstate=0x55d954497868) at executor/./build/../src/backend/executor/nodeAppend.c:292
    #11 0x000055d9531e1ff9 in ExecProcNode (node=0x55d954497868) at executor/./build/../src/include/executor/executor.h:242
    #12 ExecNestLoop (pstate=0x55d9543e4748) at executor/./build/../src/backend/executor/nodeNestloop.c:160
    #13 0x000055d9531e20dd in ExecProcNode (node=0x55d9543e4748) at executor/./build/../src/include/executor/executor.h:242
    #14 ExecNestLoop (pstate=0x55d9543e4588) at executor/./build/../src/backend/executor/nodeNestloop.c:109
    #15 0x000055d9531e20dd in ExecProcNode (node=0x55d9543e4588) at executor/./build/../src/include/executor/executor.h:242
    #16 ExecNestLoop (pstate=0x55d9543e43c8) at executor/./build/../src/backend/executor/nodeNestloop.c:109
    #17 0x000055d9531c7394 in ExecProcNode (node=0x55d9543e43c8) at executor/./build/../src/include/executor/executor.h:242
    #18 fetch_input_tuple (aggstate=aggstate@entry=0x55d9543e3f60) at executor/./build/../src/backend/executor/nodeAgg.c:406
    #19 0x000055d9531c8a5d in agg_retrieve_direct (aggstate=0x55d9543e3f60) at executor/./build/../src/backend/executor/nodeAgg.c:1748
    #20 ExecAgg (pstate=0x55d9543e3f60) at executor/./build/../src/backend/executor/nodeAgg.c:1563
    #21 0x000055d9531b1b8d in ExecProcNode (node=0x55d9543e3f60) at executor/./build/../src/include/executor/executor.h:242
    #22 ExecutePlan (execute_once=<optimized out>, dest=0x55d9550c8678, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x55d9543e3f60,
        estate=0x55d9543e3c30) at executor/./build/../src/backend/executor/execMain.c:1632
    #23 standard_ExecutorRun (queryDesc=0x55d9543e92c0, direction=<optimized out>, count=0, execute_once=<optimized out>) at executor/./build/../src/backend/executor/execMain.c:350
    #24 0x000055d953349065 in ExecutorRun (execute_once=<optimized out>, count=0, direction=ForwardScanDirection, queryDesc=0x55d9543e92c0) at executor/./build/../src/backend/executor/execMain.c:294
    #25 PortalRunSelect (portal=0x55d95406cdd0, forward=<optimized out>, count=0, dest=<optimized out>) at tcop/./build/../src/backend/tcop/pquery.c:938
    #26 0x000055d95334acb0 in PortalRun (portal=0x55d95406cdd0, count=9223372036854775807, isTopLevel=<optimized out>, run_once=<optimized out>, dest=0x55d9550c8678, altdest=0x55d9550c8678, completionTag=0x7ffc044a7d90 "")
        at tcop/./build/../src/backend/tcop/pquery.c:779
    #27 0x000055d95334534b in exec_simple_query (
        query_string=0x55d954003b20 "SELECT \"hopthru_tripreference\".\"trip_id\", COALESCE(SUM(\"hopthru_ca_boarding_routes_trips\".\"ons\"), 0) AS \"sum_ons\", COALESCE(SUM(\"hopthru_ca_boarding_routes_trips\".\"offs\"), 0) AS \"sum_offs\", COUNT(DIST"...) at tcop/./build/../src/backend/tcop/postgres.c:1215
    #28 0x000055d95334d4ad in PostgresMain (argc=<optimized out>, argv=<optimized out>, dbname=<optimized out>, username=<optimized out>) at tcop/./build/../src/backend/tcop/postgres.c:4271
    #29 0x000055d9532c14ea in BackendRun (port=0x55d95402be30) at postmaster/./build/../src/backend/postmaster/postmaster.c:4510
    #30 BackendStartup (port=0x55d95402be30) at postmaster/./build/../src/backend/postmaster/postmaster.c:4193
    #31 ServerLoop () at postmaster/./build/../src/backend/postmaster/postmaster.c:1725
    #32 0x000055d9532c2470 in PostmasterMain (argc=1, argv=<optimized out>) at postmaster/./build/../src/backend/postmaster/postmaster.c:1398
    #33 0x000055d952fd47e7 in main (argc=1, argv=0x55d953ff6d20) at main/./build/../src/backend/main/main.c:228
    

    How can we reproduce the bug?

    No response

    bug continuous_aggregate segfault docker-build 2.5.0 Core Database Team 
    opened by brockhaywood 34
  • TimescaleDB 2.x Continuous Aggregation long recalculation

    We have an issue with the Continuous Aggregation feature after upgrading to TimescaleDB 2.x. We store data at 1-second granularity in a hypertable and roll it up to 10m and 1h intervals with continuous aggregation views. Before 2.0 we had no issues with refreshes, but with 2.0 the rollups get stuck after inserting a bunch of historical data. We've looked through the code and were able to tell that the issue arises in the invalidation logic.

    Intervals

    Our system does separate inserts of data points rounded to a 1-second grid (shown in red on the diagram), and this leads to invalidation records in the _timescaledb_catalog.continuous_aggs_hypertable_invalidation_log table. Because all of these records are zero-length intervals, the aggregation process cannot merge them into bigger intervals. This leads to adding dozens of records to _timescaledb_catalog.continuous_aggs_materialization_invalidation_log, and the continuous aggregation job invalidates the same 10-minute interval again and again for each changed record. Maybe the invalidation process could be more clever and join intervals that fall into the same bucket, to avoid recalculating the same intervals over and over again? Another thought is to recalculate the bucket once, taking into account all intervals currently invalidating it, rather than once per invalidation log record.

    I hope this problem is not ours only. Please help us to understand and investigate the problem properly.

    postgres=# SELECT version();
                                                     version
    ---------------------------------------------------------------------------------------------------------
     PostgreSQL 12.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit
    (1 row)
    postgres=# \dx
                                          List of installed extensions
        Name     | Version |   Schema   |                            Description
    -------------+---------+------------+-------------------------------------------------------------------
     plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
     timescaledb | 2.0.0   | public     | Enables scalable inserts and complex queries for time-series data
    
    bug continuous_aggregate 
    opened by Antiarchitect 33
  • POLL: For what additional platforms should we offer binary releases?

    We often get requests to offer binary releases, with clear package management, for different platforms. It would be useful to get feedback from the community as to which platforms they'd like a native binary package for.

    So, please signify your interest with a thumbs-up in the comments below. (Or suggest a new platform if we missed it.)

    feedback-wanted packaging 
    opened by mfreed 32
  • Segfault returning text values from hypertable insert

    Relevant system information:

    • OS: [e.g. Ubuntu 16.04, Windows 10 x64, etc] Linux 67c644fdfcfc 4.20.1-arch1-1-ARCH #1 SMP PREEMPT Wed Jan 9 20:25:43 UTC 2019 x86_64 GNU/Linux
    • PostgreSQL version (output of postgres --version): PostgreSQL 10.6 (Debian 10.6-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
    • TimescaleDB version (output of \dx in psql): 1.2.1
    • Installation method: [e.g., "using Docker", "apt install", "source"] source build in docker container (from postgres 10)

    Describe the bug Seeing segfaults when returning a lot of text columns from an INSERT on a hypertable. The same table without the hypertable conversion seems to work fine.

    INSERT INTO events (
    
             organization_id
            , conversation_id
            , content
            , related_id
            , related_type
            , event_type
            , medium
            , is_staff
            , created_by
            , correlation_id
            , persona
    
            )
            VALUES
            (
    
              'be42e487-ce61-4306-b914-8071b3696c63'
            , '9c43092b-9e12-47e0-87ac-a1bcc37eb727'
            , 'hello'
            , null
            , 'member'
            , 'conversation_created'
            , 'messenger'
            , true
            , 'f6a8bf32-9b81-4814-87e6-c38210fc4d06'
            , NULL
            , 'agent'
    
            )
    
            RETURNING
            id, event_type, content, related_id, related_type, conversation_id, organization_id, medium, created_at, created_by, redacted, is_staff
    
    backtrace:
    /usr/lib/postgresql/10/lib/pipelinedb.so(debug_segfault+0x33)[0x7f0ce609ca53]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x110e0)[0x7f0cee4dc0e0]
    postgres: help help 172.22.0.1(48228) INSERT(+0xbc642)[0x561d197b2642]
    postgres: help help 172.22.0.1(48228) INSERT(slot_getallattrs+0x45)[0x561d197b4185]
    postgres: help help 172.22.0.1(48228) INSERT(+0xbf8d1)[0x561d197b58d1]
    postgres: help help 172.22.0.1(48228) INSERT(+0x3814ae)[0x561d19a774ae]
    postgres: help help 172.22.0.1(48228) INSERT(+0x381573)[0x561d19a77573]
    postgres: help help 172.22.0.1(48228) INSERT(PortalRun+0x300)[0x561d19a78c30]
    postgres: help help 172.22.0.1(48228) INSERT(+0x37e94a)[0x561d19a7494a]
    postgres: help help 172.22.0.1(48228) INSERT(PostgresMain+0x1a4f)[0x561d19a7674f]
    postgres: help help 172.22.0.1(48228) INSERT(+0xb399d)[0x561d197a999d]
    postgres: help help 172.22.0.1(48228) INSERT(PostmasterMain+0xfea)[0x561d19a0534a]
    postgres: help help 172.22.0.1(48228) INSERT(main+0x854)[0x561d197aaf74]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f0cec0b02e1]
    postgres: help help 172.22.0.1(48228) INSERT(_start+0x2a)[0x561d197ab02a]
    
    'utf-8' codec can't decode byte 0x86 in position 0: invalid start byte
    
    ERROR:  invalid memory alloc request size 18446744073709551613
    

    To Reproduce Steps to reproduce the behavior:

    CREATE TABLE events
    (
        id uuid NOT NULL DEFAULT gen_random_uuid(),
        conversation_id uuid NOT NULL,
        organization_id uuid NOT NULL,
        created_by uuid NOT NULL,
        redacted_by uuid,
        related_id uuid,
        sorter bigint NOT NULL,
        created_at timestamp with time zone NOT NULL DEFAULT (now())::timestamp(6) with time zone,
        redacted boolean NOT NULL DEFAULT false,
        is_staff boolean NOT NULL DEFAULT false,
        persona persona, -- DOMAIN AS TEXT
        medium medium NOT NULL,  -- DOMAIN AS TEXT
        related_type event_related_type NOT NULL, -- DOMAIN AS TEXT
        event_type text NOT NULL,
        correlation_id text,
        page_cursor text,
        content text
    );
    
    SELECT create_hypertable(
      'events'
    , 'created_at'
    , chunk_time_interval => interval '4 weeks'
    , if_not_exists => TRUE
    );
    

    inserting with return values will almost always cause a segfault

    INSERT INTO events (
             organization_id
            , conversation_id
            , content
            , related_id
            , related_type
            , event_type
            , medium
            , is_staff
            , created_by
            , correlation_id
            , persona
            )
    VALUES
            (
              'be42e487-ce61-4306-b914-8071b3696c63'
            , '9c43092b-9e12-47e0-87ac-a1bcc37eb727'
            , 'hello'
            , null
            , 'member'
            , 'conversation_created'
            , 'messenger'
            , true
            , 'f6a8bf32-9b81-4814-87e6-c38210fc4d06'
            , NULL
            , 'agent'
            )
    RETURNING
            id, event_type, content, related_id, related_type, conversation_id, organization_id, medium, created_at, created_by, redacted, is_staff, page_cursor, sorter
    

    If I remove the RETURNING clause it is just fine. I can also SELECT * FROM events just fine. But inserting with the RETURNING clause will always crash. If I remove a number of the text columns from the RETURNING list it also gets better. We've had to downgrade from 1.2.1 back to 1.0.0.

    It seems like there is maybe an issue with toasting / detoasting?

    page_cursor is a base64 encoded string

    NTJkZTRiMWMtNTJlMC00MmQyLWE0ZmUtOTVmMjMzMGRkNzVkfDljNDMwOTJiLTllMTItNDdlMC04N2FjLWExYmNjMzdlYjcyN3wyMDE5LTAyLTI3IDIyOjE2OjMwLjQxMTYzMiswMHwyMDE5LTAyLTI3IDIyOjIyOjM1LjcwNDY1OCswMHwyMA==
    

    Expected behavior I would expect it to not segfault

    Actual behavior postgres is segfaulting

    bug 
    opened by esatterwhite 29
  • Trouble finding extension on Windows

    Bug report on Windows 10 x64, PostgreSQL 11.6, TimescaleDB 1.5.1.


    build windows unsupported-pg 
    opened by yebidaxiong 27
  • [Bug]: PostgreSQL killed by OOM killer after upgrading TimescaleDB to 2.7.1

    What type of bug is this?

    Other

    What subsystems and features are affected?

    Query executor

    What happened?

    After I upgraded to v2.7.1, PostgreSQL gets an out-of-memory error and is killed by the OOM killer. This never happened with TimescaleDB v2.7.0 and earlier.

    TimescaleDB version affected

    2.7.1

    PostgreSQL version used

    14.4

    What operating system did you use?

    Ubuntu 22.04 x64

    What installation method did you use?

    Deb/Apt

    What platform did you run on?

    On prem/Self-hosted

    Relevant log output and stack trace

    No response

    How can we reproduce the bug?

    - Upgrade TimescaleDB with `apt upgrade`.
    - Run the `ALTER EXTENSION timescaledb UPDATE;` query.
    - Wait for some hours.
    
    bug telemetry 
    opened by hongquan 26
  • could not access file "$libdir/timescaledb-1.4.0": No such file or directory

    Relevant system information:

    • OS: Ubuntu 18.04
    • PostgreSQL version: postgres (PostgreSQL) 11.5
    • TimescaleDB version: 1.4.1
    • Installation method: Docker

    Describe the bug PostgreSQL shows the error message could not access file "$libdir/timescaledb-1.4.0": No such file or directory on any psql command, such as SELECT *.

    I did run:

    docker exec -it timescaledb psql -U postgres -X
    
    # within the PostgreSQL instance
    ALTER EXTENSION timescaledb UPDATE;
    

    It says the extension is up to date...I don't understand what's going on.

    From inside docker image, here is the output of:

    bash-5.0# pg_config --pkglibdir
    /usr/local/lib/postgresql
    
    bash-5.0# ls /usr/local/lib/postgresql
    _int.so                   latin_and_mic.so          timescaledb-tsl-1.3.0.so
    adminpack.so              libpqwalreceiver.so       timescaledb-tsl-1.3.1.so
    amcheck.so                lo.so                     timescaledb-tsl-1.3.2.so
    ascii_and_mic.so          ltree.so                  timescaledb-tsl-1.4.0.so
    auth_delay.so             moddatetime.so            timescaledb-tsl-1.4.1.so
    auto_explain.so           pageinspect.so            timescaledb.so
    autoinc.so                passwordcheck.so          timetravel.so
    bloom.so                  pg_buffercache.so         tsm_system_rows.so
    btree_gin.so              pg_freespacemap.so        tsm_system_time.so
    btree_gist.so             pg_prewarm.so             unaccent.so
    citext.so                 pg_stat_statements.so     utf8_and_ascii.so
    cube.so                   pg_trgm.so                utf8_and_big5.so
    cyrillic_and_mic.so       pg_visibility.so          utf8_and_cyrillic.so
    dblink.so                 pgcrypto.so               utf8_and_euc2004.so
    dict_int.so               pgoutput.so               utf8_and_euc_cn.so
    dict_snowball.so          pgrowlocks.so             utf8_and_euc_jp.so
    dict_xsyn.so              pgstattuple.so            utf8_and_euc_kr.so
    earthdistance.so          pgxml.so                  utf8_and_euc_tw.so
    euc2004_sjis2004.so       pgxs                      utf8_and_gb18030.so
    euc_cn_and_mic.so         plpgsql.so                utf8_and_gbk.so
    euc_jp_and_sjis.so        postgres_fdw.so           utf8_and_iso8859.so
    euc_kr_and_mic.so         refint.so                 utf8_and_iso8859_1.so
    euc_tw_and_big5.so        seg.so                    utf8_and_johab.so
    file_fdw.so               sslinfo.so                utf8_and_sjis.so
    fuzzystrmatch.so          tablefunc.so              utf8_and_sjis2004.so
    hstore.so                 tcn.so                    utf8_and_uhc.so
    insert_username.so        test_decoding.so          utf8_and_win.so
    isn.so                    timescaledb-1.4.1.so      uuid-ossp.so
    latin2_and_win1250.so     timescaledb-tsl-1.2.2.so
    

    Thanks for your help!
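A diagnostic sketch for this class of problem (assumes a psql session inside the container): the loader resolves `$libdir/timescaledb-<installed_version>.so`, so comparing the catalog's idea of the installed version against the `.so` files on disk usually pinpoints the mismatch.

```sql
-- Which version does this database think is installed vs. available?
-- The installed_version must have a matching timescaledb-<version>.so on disk.
SELECT default_version, installed_version
FROM pg_available_extensions
WHERE name = 'timescaledb';
```

If installed_version still reports 1.4.0 while only timescaledb-1.4.1.so exists on disk, the extension update has not taken effect in this particular database; ALTER EXTENSION timescaledb UPDATE generally needs to run in each affected database in a fresh session with the new library loaded.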

    opened by waterdrop01 26
  • SUM of int4s becomes int8 in cagg

    What type of bug is this?

    Incorrect result

    What subsystems and features are affected?

    Continuous aggregate

    What happened?

    The following statements create a simple cagg for the events table's running_time_delta column:

    CREATE TABLE "public"."events" (
        "timestamp" timestamp NOT NULL,
        "running_time_delta" int4 NOT NULL
    );
    
    SELECT create_hypertable('events', 'timestamp');
    
    CREATE MATERIALIZED VIEW continuous_running_time WITH (timescaledb.continuous) AS
        SELECT time_bucket(INTERVAL '1 day', timestamp) AS bucket,
            SUM(running_time_delta) AS running_time
        FROM events
        GROUP BY bucket;
    

    Interestingly in the cagg view, running_time is represented as int8, unlike the originally aggregated column running_time_delta which is int4.

    This may or may not be a bug, but as I couldn't find anything on the matter in the documentation, I wanted to double-check. This is causing trouble for us because the Node.js postgres driver does not automatically convert int8 to Number, while it does so for int4, so the types are not fully interchangeable even below the maximum value for 4-byte integers.
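For context, stock PostgreSQL defines sum(int4) as returning bigint (int8) to avoid overflow, and the continuous aggregate inherits that type. If int4 is genuinely required and overflow cannot occur, one possible workaround (an untested sketch) is an explicit cast in the view definition:

```sql
CREATE MATERIALIZED VIEW continuous_running_time WITH (timescaledb.continuous) AS
    SELECT time_bucket(INTERVAL '1 day', timestamp) AS bucket,
           -- cast back to int4; unsafe if the sum can exceed 2^31 - 1
           SUM(running_time_delta)::int4 AS running_time
    FROM events
    GROUP BY bucket;
```

Alternatively, the conversion can be handled on the client side: node-postgres allows registering a custom type parser for int8.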

    TimescaleDB version affected

    2.9.1

    PostgreSQL version used

    14

    What operating system did you use?

    Multiple

    What installation method did you use?

    Docker

    What platform did you run on?

    On prem/Self-hosted

    Relevant log output and stack trace

    No response

    How can we reproduce the bug?

    see above
    
    bug 
    opened by pehlert 0
  • Insert into index during chunk compression

    If there is an index on the compressed chunk, insert into it while inserting the heap data rather than reindexing the relation at the end. This reduces the amount of locking on the compressed chunk's index, which caused issues when merging chunks, and should help with future updates of compressed data.

    opened by antekresic 0
  • Improve ASSERT_IS_VALID_CHUNK macro

    Currently, when ASSERT_IS_VALID_CHUNK fails, it is impossible to tell which of the conditions failed without opening the coredump in a debugger, because all the conditions are ANDed in a single Assert. This patch splits the conditions into individual Asserts, so the failing condition is immediately visible from the stack trace.

    opened by svenklemm 0
  • Fix CAgg on CAgg different column order

    Creating a CAgg on a CAgg where the time column is in a different position than in the original hypertable was raising the following exception:

    ERROR: time bucket function must reference a hypertable dimension column

    The problem was that, during validation, we initialized an internal data structure with the wrong hypertable metadata.

    Fixes #5131

    bug continuous_aggregate time_bucket 
    opened by fabriziomello 1
  • [Feature]: Add new strategy for data retention

    What problem does the new feature solve?

    Hello. I ran into a problem using TimescaleDB. Our data source can stop sending data for a long time, but other users still expect to see data in the database.

    Example

    1. Create a table:

           CREATE TABLE test_table (
               event_time TIMESTAMPTZ NOT NULL,
               location TEXT NOT NULL
           );

           SELECT create_hypertable('test_table', 'event_time');

    2. Add a retention policy:

           SELECT add_retention_policy('test_table', INTERVAL '1 year');

    If we stop sending data, we end up with an empty table after one year.

    If it matches your vision of the product I can try to make a pull request for this feature.

    What does the feature do?

    Would it be possible to add a retention policy strategy based on max(event_time) (or whichever column is used for partitioning the data)?

    I am proposing a feature like this: when creating a retention policy, give the user the option to choose a retention strategy for the table, e.g. SELECT add_retention_policy('test_table', INTERVAL '1 year', MAX_DATE_MODE); With this strategy, data would be deleted only if it is older than the given interval measured from the newest record in the table.
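Until a built-in strategy like the proposed MAX_DATE_MODE exists, roughly this behavior can be approximated with a custom job that anchors drop_chunks to the newest row instead of now(). A sketch under those assumptions (the procedure name and schedule are illustrative; uses TimescaleDB 2.x job APIs):

```sql
CREATE OR REPLACE PROCEDURE retention_from_max(job_id INT, config JSONB)
LANGUAGE plpgsql AS $$
DECLARE
    newest TIMESTAMPTZ;
BEGIN
    -- Anchor the retention cutoff to the newest record, not to now(),
    -- so a stalled data source never empties the table.
    SELECT max(event_time) INTO newest FROM test_table;
    IF newest IS NOT NULL THEN
        PERFORM drop_chunks('test_table', older_than => newest - INTERVAL '1 year');
    END IF;
END
$$;

-- Schedule the custom job in place of add_retention_policy().
SELECT add_job('retention_from_max', '1 day');
```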

    Implementation challenges

    No response

    feature-request 
    opened by MrLamberg 0
  • [Bug]: Time bucket error while creating a CAGG on another one

    What type of bug is this?

    Unexpected error

    What subsystems and features are affected?

    Continuous aggregate

    What happened?

    When trying to create a CAGG on top of another one, we encountered the following error:
    ERROR: time bucket function must reference a hypertable dimension column

    While investigating the error, we found that it is thrown if the position of the time partitioning column differs between the first materialized view and the hypertable. For example, if the time partitioning column is the second column in the table but the time bucket column is the first in the CAGG, creating another CAGG on top will cause this error.

    As a workaround, we use the same position for the time bucket as the time column in the table; but in some cases we end up with the time column in the sixth position and only four columns in the CAGG (recreating the CAGG with empty columns did the trick).

    TimescaleDB version affected

    2.9.1

    PostgreSQL version used

    13

    What operating system did you use?

    timescale/timescaledb:2.9.1-pg13

    What installation method did you use?

    Docker

    What platform did you run on?

    On prem/Self-hosted

    Relevant log output and stack trace

    ERROR:  time bucket function must reference a hypertable dimension column
    

    How can we reproduce the bug?

    CREATE TABLE data(
        temperature INT8,
        date TIMESTAMPTZ NOT NULL
    );
    
    SELECT create_hypertable('data', 'date');
    
    CREATE MATERIALIZED VIEW cagg
    WITH(timescaledb.continuous) AS
    SELECT time_bucket('1hour', date) AS hour,
      min(temperature),  
      avg(temperature),
      max(temperature)
    FROM data
    GROUP BY hour;
    
    CREATE MATERIALIZED VIEW cagg_on_cagg
    WITH(timescaledb.continuous) AS
    SELECT time_bucket('1day', hour) AS day,
      min(min),
      avg(avg),
      max(max)
    FROM cagg
    GROUP BY day;
    
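Applying the workaround described above to this repro, a sketch of a first-level CAGG whose time_bucket column occupies the same position as date does in the table (the second column):

```sql
CREATE MATERIALIZED VIEW cagg
WITH (timescaledb.continuous) AS
SELECT min(temperature),
       time_bucket('1hour', date) AS hour,  -- 2nd column, matching date's position
       avg(temperature),
       max(temperature)
FROM data
GROUP BY hour;
```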
    bug 
    opened by salquier-appvizer 2
Releases (2.9.1)
  • 2.9.1(Dec 23, 2022)

    This release contains bug fixes since the 2.9.0 release. This release is high priority for upgrade. We strongly recommend that you upgrade as soon as possible.

    Bugfixes

    • #5072 Fix CAgg on CAgg bucket size validation
    • #5101 Fix enabling compression on caggs with renamed columns
    • #5106 Fix building against PG15 on Windows
    • #5117 Fix postgres server restart on background worker exit
    • #5121 Fix privileges for job_errors in update script
    Source code(tar.gz)
    Source code(zip)
  • 2.9.0(Dec 19, 2022)

    This release adds major new features since the 2.8.1 release. We deem it moderate priority for upgrading.

    This release includes these noteworthy features:

    • Hierarchical Continuous Aggregates (aka Continuous Aggregate on top of another Continuous Aggregate)
    • Improve time_bucket_gapfill function to allow specifying the timezone to bucket
    • Introduce fixed schedules for background jobs and the ability to check job errors.
    • Use alter_data_node() to change the data node configuration. This function introduces the option to configure the availability of the data node.

    This release also includes several bug fixes.
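For example, the new alter_data_node() function can flag a node as unavailable (a sketch; the node name dn1 is hypothetical):

```sql
-- Mark data node "dn1" unavailable so the access node stops routing to it
SELECT alter_data_node('dn1', available => false);
```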

    Features

    • #4476 Batch rows on access node for distributed COPY
    • #4567 Exponentially backoff when out of background workers
    • #4650 Show warnings when not following best practices
    • #4664 Introduce fixed schedules for background jobs
    • #4668 Hierarchical Continuous Aggregates
    • #4670 Add timezone support to time_bucket_gapfill
    • #4678 Add interface for troubleshooting job failures
    • #4718 Add ability to merge chunks while compressing
    • #4786 Extend the now() optimization to also apply to CURRENT_TIMESTAMP
    • #4820 Support parameterized data node scans in joins
    • #4830 Add function to change configuration of a data node
    • #4966 Handle DML activity when datanode is not available
    • #4971 Add function to drop stale chunks on a datanode

    Bugfixes

    • #4663 Don't error when compression metadata is missing
    • #4673 Fix now() constification for VIEWs
    • #4681 Fix compression_chunk_size primary key
    • #4696 Report warning when enabling compression on hypertable
    • #4745 Fix FK constraint violation error while insert into hypertable which references partitioned table
    • #4756 Improve compression job IO performance
    • #4770 Continue compressing other chunks after an error
    • #4794 Fix degraded performance seen on timescaledb_internal.hypertable_local_size() function
    • #4807 Fix segmentation fault during INSERT into compressed hypertable
    • #4822 Fix missing segmentby compression option in CAGGs
    • #4823 Fix a crash that could occur when using nested user-defined functions with hypertables
    • #4840 Fix performance regressions in the copy code
    • #4860 Block multi-statement DDL command in one query
    • #4898 Fix cagg migration failure when trying to resume
    • #4904 Remove BitmapScan support in DecompressChunk
    • #4906 Fix a performance regression in the query planner by speeding up frozen chunk state checks
    • #4910 Fix a typo in process_compressed_data_out
    • #4918 Cagg migration orphans cagg policy
    • #4941 Restrict usage of the old format (pre 2.7) of continuous aggregates in PostgreSQL 15.
    • #4955 Fix cagg migration for hypertables using timestamp without timezone
    • #4968 Check for interrupts in gapfill main loop
    • #4988 Fix cagg migration crash when refreshing the newly created cagg
    • #5054 Fix segfault after second ANALYZE
    • #5086 Reset baserel cache on invalid hypertable cache

    Thanks

    • @byazici for reporting a problem with segmentby on compressed caggs
    • @jflambert for reporting a crash with nested user-defined functions.
    • @jvanns for reporting hypertable FK reference to vanilla PostgreSQL partitioned table doesn't seem to work
    • @kou for fixing a typo in process_compressed_data_out
    • @kyrias for reporting a crash when ANALYZE is executed on extended query protocol mode with extension loaded.
    • @tobiasdirksen for requesting Continuous aggregate on top of another continuous aggregate
    • @xima for reporting a bug in Cagg migration
    • @xvaara for helping reproduce a bug with bitmap scans in transparent decompression
  • 2.8.1(Oct 6, 2022)

    This release is a patch release. We recommend that you upgrade at the next available opportunity.

    Bugfixes

    • #4454 Keep locks after reading job status
    • #4658 Fix error when querying a compressed hypertable with compress_segmentby on an enum column
    • #4671 Fix a possible error while flushing the COPY data
    • #4675 Fix bad TupleTableSlot drop
    • #4676 Fix a deadlock when decompressing chunks and performing SELECTs
    • #4685 Fix chunk exclusion for space partitions in SELECT FOR UPDATE queries
    • #4694 Change parameter names of cagg_migrate procedure
    • #4698 Do not use row-by-row fetcher for parameterized plans
    • #4711 Remove support for procedures as custom checks
    • #4712 Fix assertion failure in constify_now
    • #4713 Fix Continuous Aggregate migration policies
    • #4720 Fix chunk exclusion for prepared statements and dst changes
    • #4726 Fix gapfill function signature
    • #4737 Fix join on time column of compressed chunk
    • #4738 Fix error when waiting for remote COPY to finish
    • #4739 Fix continuous aggregate migrate check constraint
    • #4760 Fix segfault when INNER JOINing hypertables
    • #4767 Fix permission issues on index creation for CAggs

    Thanks

    • @boxhock and @cocowalla for reporting a segfault when JOINing hypertables
    • @carobme for reporting constraint error during continuous aggregate migration
    • @choisnetm, @dustinsorensen, @jayadevanm and @joeyberkovitz for reporting a problem with JOINs on compressed hypertables
    • @daniel-k for reporting a background worker crash
    • @justinpryzby for reporting an error when compressing very wide tables
    • @maxtwardowski for reporting problems with chunk exclusion and space partitions
    • @yuezhihan for reporting GROUP BY error when having compress_segmentby on an enum column
  • 2.8.0(Aug 31, 2022)

    This release adds major new features since the 2.7.2 release. We deem it moderate priority for upgrading.

    This release includes these noteworthy features:

    • time_bucket now supports bucketing by month, year and timezone
    • Improve performance of bulk SELECT and COPY for distributed hypertables
    • 1 step CAgg policy management
    • Migrate Continuous Aggregates to the new format
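As a quick illustration of the new time_bucket capabilities (a sketch; the metrics table and its columns are hypothetical):

```sql
-- Monthly buckets computed in a named timezone (TimescaleDB >= 2.8)
SELECT time_bucket('1 month', ts, 'Europe/Berlin') AS bucket,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket
ORDER BY bucket;
```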

    Features

    • #4188 Use COPY protocol in row-by-row fetcher
    • #4307 Mark partialize_agg as parallel safe
    • #4380 Enable chunk exclusion for space dimensions in UPDATE/DELETE
    • #4384 Add schedule_interval to policies
    • #4390 Faster lookup of chunks by point
    • #4393 Support intervals with day component when constifying now()
    • #4397 Support intervals with month component when constifying now()
    • #4405 Support ON CONFLICT ON CONSTRAINT for hypertables
    • #4412 Add telemetry about replication
    • #4415 Drop remote data when detaching data node
    • #4416 Handle TRUNCATE TABLE on chunks
    • #4425 Add parameter check_config to alter_job
    • #4430 Create index on Continuous Aggregates
    • #4439 Allow ORDER BY on continuous aggregates
    • #4443 Add stateful partition mappings
    • #4484 Use non-blocking data node connections for COPY
    • #4495 Support add_dimension() with existing data
    • #4502 Add chunks to baserel cache on chunk exclusion
    • #4545 Add hypertable distributed argument and defaults
    • #4552 Migrate Continuous Aggregates to the new format
    • #4556 Add runtime exclusion for hypertables
    • #4561 Change get_git_commit to return full commit hash
    • #4563 1 step CAgg policy management
    • #4641 Allow bucketing by month, year, century in time_bucket and time_bucket_gapfill
    • #4642 Add timezone support to time_bucket

    Bugfixes

    • #4359 Create composite index on segmentby columns
    • #4374 Remove constified now() constraints from plan
    • #4416 Handle TRUNCATE TABLE on chunks
    • #4478 Synchronize chunk cache sizes
    • #4486 Adding boolean column with default value doesn't work on compressed table
    • #4512 Fix unaligned pointer access
    • #4519 Throw better error message on incompatible row fetcher settings
    • #4549 Fix dump_meta_data for windows
    • #4553 Fix timescaledb_post_restore GUC handling
    • #4573 Load TSL library on compressed_data_out call
    • #4575 Fix use of get_partition_hash and get_partition_for_key inside an IMMUTABLE function
    • #4577 Fix segfaults in compression code with corrupt data
    • #4580 Handle default privileges on CAggs properly
    • #4582 Fix assertion in GRANT .. ON ALL TABLES IN SCHEMA
    • #4583 Fix partitioning functions
    • #4589 Fix rename for distributed hypertable
    • #4601 Reset compression sequence when group resets
    • #4611 Fix a potential OOM when loading large data sets into a hypertable
    • #4624 Fix heap buffer overflow
    • #4627 Fix telemetry initialization
    • #4631 Ensure TSL library is loaded on database upgrades
    • #4646 Fix time_bucket_ng origin handling
    • #4647 Fix the error "SubPlan found with no parent plan" that occurred when using joins in the RETURNING clause

    Thanks

    • @AlmiS for reporting error on get_partition_hash executed inside an IMMUTABLE function
    • @Creatation for reporting an issue with renaming hypertables
    • @janko for reporting an issue when adding bool column with default value to compressed hypertable
    • @jayadevanm for reporting error of TRUNCATE TABLE on compressed chunk
    • @michaelkitson for reporting permission errors using default privileges on Continuous Aggregates
    • @mwahlhuetter for reporting error in joins in RETURNING clause
    • @ninjaltd and @mrksngl for reporting a potential OOM when loading large data sets into a hypertable
    • @PBudmark for reporting an issue with dump_meta_data.sql on Windows
    • @ssmoss for reporting an issue with time_bucket_ng origin handling
  • 2.7.2(Jul 25, 2022)

    This release is a patch release. We recommend that you upgrade at the next available opportunity. Among other things this release fixes several memory leaks, handling of TOASTed values in GapFill and parameter handling in prepared statements.

    Bugfixes

    • #4517 Fix prepared statement param handling in ChunkAppend
    • #4522 Fix ANALYZE on dist hypertable for a set of nodes
    • #4526 Fix gapfill group comparison for TOASTed values
    • #4527 Handle stats properly for range types
    • #4532 Fix memory leak in function telemetry
    • #4534 Use explicit memory context with hash_create
    • #4538 Fix chunk creation on hypertables with non-default statistics

    Thanks

    • @3a6u9ka, @bgemmill, @hongquan, @stl-leonid-kalmaev and @victor-sudakov for reporting a memory leak
    • @hleung2021 and @laocaixw for reporting an issue with parameter handling in prepared statements
  • 2.7.1(Jul 25, 2022)

    This release is a patch release. We recommend that you upgrade at the next available opportunity.

    Bugfixes

    • #4494 Handle timescaledb versions aptly in multinode
    • #4493 Segfault when executing IMMUTABLE functions
    • #4482 Fix race conditions during chunk (de)compression
    • #4367 Improved buffer management in the copy operator
    • #4375 Don't ask for orderby column if default already set
    • #4400 Use our implementation of find_em_expr_for_rel for PG15+
    • #4408 Fix crash during insert into distributed hypertable
    • #4411 Add shmem_request_hook
    • #4437 Fix segfault in subscription_exec
    • #4442 Fix perms in copy/move chunk
    • #4450 Retain hypertable ownership on attach_data_node
    • #4451 Repair numeric partial state on the fly
    • #4463 Fix empty bytea handling with distributed tables
    • #4469 Better superuser handling for move_chunk

    Features

    • #4244 Function telemetry
    • #4287 Add internal api for foreign table chunk
    • #4470 Block drop chunk if chunk is in frozen state
    • #4464 Add internal api to associate a hypertable with custom jobs

    Thanks

    • @xin-hedera for finding a bug in empty bytea values for distributed tables
    • @jflambert for reporting a bug with IMMUTABLE functions
    • @nikugogoi for reporting a bug with CTEs and upserts on distributed hypertables
  • 2.7.0(May 24, 2022)

    This release adds major new features since the 2.6.1 release. We deem it moderate priority for upgrading.

    This release includes these noteworthy features:

    • Optimize continuous aggregate query performance and storage
    • The following query clauses and functions can now be used in a continuous aggregate: FILTER, DISTINCT, ORDER BY as well as Ordered-Set Aggregate and Hypothetical-Set Aggregate
    • Optimize now() query planning time
    • Improve COPY insert performance
    • Improve performance of UPDATE/DELETE on PG14 by excluding chunks

    This release also includes several bug fixes.
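As an illustration of the newly supported clauses, a continuous aggregate using FILTER (a sketch; the requests table and its columns are hypothetical):

```sql
CREATE MATERIALIZED VIEW daily_errors
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', ts) AS day,
       count(*) FILTER (WHERE status >= 500) AS error_count,
       count(*) AS total_requests
FROM requests
GROUP BY day;
```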

    If you are upgrading from a previous version and were using compression with a non-default collation on a segmentby-column you should recompress those hypertables.

    Features

    • #4045 Custom origin's support in CAGGs
    • #4120 Add logging for retention policy
    • #4158 Allow ANALYZE command on a data node directly
    • #4169 Add support for chunk exclusion on DELETE to PG14
    • #4209 Add support for chunk exclusion on UPDATE to PG14
    • #4269 Continuous Aggregates finals form
    • #4301 Add support for bulk inserts in COPY operator
    • #4311 Support non-superuser move chunk operations
    • #4330 Add GUC "bgw_launcher_poll_time"
    • #4340 Enable now() usage in plan-time chunk exclusion

    Bugfixes

    • #3899 Fix segfault in Continuous Aggregates
    • #4225 Fix TRUNCATE error as non-owner on hypertable
    • #4236 Fix potential wrong order of results for compressed hypertable with a non-default collation
    • #4249 Fix option "timescaledb.create_group_indexes"
    • #4251 Fix INSERT into compressed chunks with dropped columns
    • #4255 Fix option "timescaledb.create_group_indexes"
    • #4259 Fix logic bug in extension update script
    • #4269 Fix bad Continuous Aggregate view definition reported in #4233
    • #4289 Support moving compressed chunks between data nodes
    • #4300 Fix refresh window cap for cagg refresh policy
    • #4315 Fix memory leak in scheduler
    • #4323 Remove printouts from signal handlers
    • #4342 Fix move chunk cleanup logic
    • #4349 Fix crashes in functions using AlterTableInternal
    • #4358 Fix crash and other issues in telemetry reporter

    Thanks

    • @abrownsword for reporting a bug in the telemetry reporter and testing the fix
    • @jsoref for fixing various misspellings in code, comments and documentation
    • @yalon for reporting an error with ALTER TABLE RENAME on distributed hypertables
    • @zhuizhuhaomeng for reporting and fixing a memory leak in our scheduler
  • 2.6.1(Apr 12, 2022)

    This release is a patch release. We recommend that you upgrade at the next available opportunity.

    Bugfixes

    • #4121 Fix RENAME TO/SET SCHEMA on distributed hypertable
    • #4122 Fix segfault on INSERT into distributed hypertable
    • #4142 Ignore invalid relid when deleting hypertable
    • #4159 Fix ADD COLUMN IF NOT EXISTS error on compressed hypertable
    • #4161 Fix memory handling during scans
    • #4176 Fix remote EXPLAIN with parameterized queries
    • #4181 Fix spelling errors and omissions
    • #4186 Fix owner change for distributed hypertable
    • #4192 Abort sessions after extension reload
    • #4193 Fix relcache callback handling causing crashes
    • #4199 Remove signal-unsafe calls from signal handlers
    • #4219 Do not modify aggregation state in finalize

    Thanks

    • @abrownsword for reporting a crash in the telemetry reporter (fixed with #4161)
    • @daydayup863 for reporting an issue with remote explain

  • 2.6.0(Feb 23, 2022)

    This release is a medium priority for upgrade. We recommend that you upgrade at the next available opportunity.

    This release adds major new features since the 2.5.2 release, including:

    • Continuous aggregates with compression
    • time_bucket_ng support for N months and timezones on continuous aggregates

    You can read more about this release on our blog post. This release also contains bug fixes since the 2.5.0 release.

    The experimental features in the 2.6 release are:

    • The time_bucket_ng function, a newer version of time_bucket. This function supports years, months, days, hours, minutes, seconds, and timezones.
    • time_bucket_ng support for N months and Timezones on continuous aggregates.
    • APIs for chunk manipulation across data nodes in a distributed hypertable setup. This includes the ability to add a data node and move chunks to the new data node for cluster rebalancing.

    Features

    • #3768 Allow ALTER TABLE ADD COLUMN with DEFAULT on compressed hypertable
    • #3769 Allow ALTER TABLE DROP COLUMN on compressed hypertable
    • #3873 Enable compression on continuous aggregates
    • #3943 Optimize first/last
    • #3945 Add support for ALTER SCHEMA on multi-node
    • #3949 Add support for DROP SCHEMA on multi-node
    • #3977 Timezones support in CAGGs

    Bugfixes

    • #3808 Properly handle max_retries option
    • #3863 Fix remote transaction heal logic
    • #3869 Fix ALTER SET/DROP NULL constraint on distributed hypertable
    • #3944 Fix segfault in add_compression_policy
    • #3961 Fix crash in EXPLAIN VERBOSE on distributed hypertable
    • #4015 Eliminate float rounding instabilities in interpolate
    • #4019 Update ts_extension_oid in transitioning state
    • #4073 Fix buffer overflow in partition scheme

    Improvements

    • Query planning performance is improved for hypertables with a large number of chunks.

    Thanks

    • @fvannee for reporting a first/last memory leak
    • @mmouterde for reporting an issue with floats and interpolate
  • 2.5.2(Feb 10, 2022)

    This release contains bug fixes since the 2.5.1 release. This release is high priority for upgrade. We strongly recommend that you upgrade as soon as possible.

    Bugfixes

    • #3900 Improve custom scan node registration
    • #3911 Fix role type deparsing for GRANT command
    • #3918 Fix DataNodeScan plans with one-time filter
    • #3921 Fix segfault on insert into internal compressed table
    • #3938 Fix subtract_integer_from_now on 32-bit platforms and improve error handling
    • #3939 Fix projection handling in time_bucket_gapfill
    • #3948 Avoid double PGclear() in data fetchers
    • #3979 Fix deparsing of index predicates
    • #4015 Eliminate float rounding instabilities in interpolate
    • #4020 Fix ALTER TABLE EventTrigger initialization
    • #4024 Fix premature cache release call
    • #4037 Fix status for dropped chunks that have catalog entries
    • #4069 Fix riinfo NULL handling in ANY construct
    • #4071 Fix extension installation privilege escalation (CVE-2022-24128)

    Thanks

    • @carlocperez for reporting crash with NULL handling in ANY construct
    • @erikhh for reporting an issue with time_bucket_gapfill
    • @fvannee for reporting a first/last memory leak
    • @kancsuki for reporting drop column and partial index creation not working
    • @mmouterde for reporting an issue with floats and interpolate
    • Pedro Gallegos for reporting a possible privilege escalation during extension installation
  • 2.5.1(Dec 3, 2021)

    This release contains bug fixes since the 2.5.0 release. We deem it medium priority to upgrade.

    Bugfixes

    • #3706 Test enabling dist compression within a procedure
    • #3734 Rework distributed DDL processing logic
    • #3737 Fix flaky pg_dump
    • #3739 Fix compression policy on tables using INTEGER
    • #3766 Fix segfault in ts_hist_sfunc
    • #3779 Support GRANT/REVOKE on distributed database
    • #3789 Fix time_bucket comparison transformation
    • #3797 Fix DISTINCT ON queries for distributed hypertables
    • #3799 Fix error printout on correct security label
    • #3801 Fail size utility functions when data nodes do not respond
    • #3809 Fix NULL pointer evaluation in fill_result_error()
    • #3811 Fix INSERT..SELECT involving dist hypertables
    • #3819 Fix reading garbage value from TSConnectionError
    • #3824 Remove pointers from CAGG lists for 64-bit archs
    • #3846 Eliminate deadlock in recompress chunk policy
    • #3881 Fix SkipScan crash due to pruned unique path
    • #3884 Fix create_distributed_restore_point memory issue

    Thanks

    • @cbisnett for reporting and fixing a typo in an error message
    • @CaptainCuddleCube for reporting bug on compression policy procedure on tables using INTEGER on time dimension
    • @phemmer for reporting bugs on multi-node
  • 2.5.0(Oct 28, 2021)

    This release adds major new features since the 2.4.2 release. We deem it moderate priority for upgrading.

    This release includes these noteworthy features:

    • Continuous Aggregates for Distributed Hypertables
    • Support for PostgreSQL 14
    • Experimental: Support for timezones in time_bucket_ng(), including the origin argument

    This release also includes several bug fixes.

    Features

    • #3034 Add support for PostgreSQL 14
    • #3435 Add continuous aggregates for distributed hypertables
    • #3505 Add support for timezones in time_bucket_ng()

    Bugfixes

    • #3580 Fix memory context bug executing TRUNCATE
    • #3592 Allow alter column type on distributed hypertable
    • #3598 Improve evaluation of stable functions such as now() on access node
    • #3618 Fix execution of refresh_caggs from user actions
    • #3625 Add shared dependencies when creating chunk
    • #3626 Fix memory context bug executing TRUNCATE
    • #3627 Schema qualify UDTs in multi-node
    • #3638 Allow owner change of a data node
    • #3654 Fix index attnum mapping in reorder_chunk
    • #3661 Fix SkipScan path generation with constant DISTINCT column
    • #3667 Fix compress_policy for multi txn handling
    • #3673 Fix distributed hypertable DROP within a procedure
    • #3701 Allow anyone to use size utilities on distributed hypertables
    • #3708 Fix crash in get_aggsplit
    • #3709 Fix ordered append pathkey check
    • #3712 Fix GRANT/REVOKE ALL IN SCHEMA handling
    • #3717 Support transparent decompression on individual chunks
    • #3724 Fix inserts into compressed chunks on hypertables with caggs
    • #3727 Fix DirectFunctionCall crash in distributed_exec
    • #3728 Fix SkipScan with varchar column
    • #3733 Fix ANALYZE crash with custom statistics for custom types
    • #3747 Always reset expr context in DecompressChunk

    Thanks

    • @binakot and @sebvett for reporting an issue with DISTINCT queries
    • @hardikm10, @DavidPavlicek and @pafiti for reporting bugs on TRUNCATE
    • @mjf for reporting an issue with ordered append and JOINs
    • @phemmer for reporting the issues on multinode with aggregate queries and evaluation of now()
    • @abolognino for reporting an issue with INSERTs into compressed hypertables that have cagg
    • @tanglebones for reporting the ANALYZE crash with custom types on multinode
    • @amadeubarbosa and @felipenogueirajack for reporting crash using JSONB column in compressed chunks
    Source code(tar.gz)
    Source code(zip)
  • 2.4.2(Sep 21, 2021)

    This release contains bug fixes since the 2.4.1 release. We deem it high priority to upgrade.

    Bugfixes

    • #3437 Rename on all continuous aggregate objects
    • #3469 Use signal-safe functions in signal handler
    • #3520 Modify compression job processing logic
    • #3527 Fix time_bucket_ng behaviour with origin argument
    • #3532 Fix bootstrap with regresschecks disabled
    • #3574 Fix failure on job execution by background worker
    • #3590 Call cleanup functions on backend exit

    Thanks

    • @jankatins for reporting a crash with background workers
    • @LutzWeischerFujitsu for reporting an issue with bootstrap

    Source code(tar.gz)
    Source code(zip)
  • 2.4.1(Aug 24, 2021)

    This release contains bug fixes since the 2.4.0 release. We deem it high priority to upgrade.

    The release fixes continuous aggregate refresh for postgres 12.8 and 13.4, a crash with ALTER TABLE commands and a crash with continuous aggregates with HAVING clause.

    Bugfixes

    • #3430 Fix havingqual processing for continuous aggregates
    • #3468 Disable tests by default if tools are not found
    • #3462 Fix crash while tracking alter table commands
    • #3489 Fix continuous agg bgw job failure for PG 12.8 and 13.4
    • #3494 Improve error message when adding data nodes

    Thanks

    • @brianbenns for reporting a segfault with continuous aggregates
    • @usego for reporting an issue with continuous aggregate refresh on PG 13.4
    Source code(tar.gz)
    Source code(zip)
  • 2.4.0(Aug 3, 2021)

    This release adds new experimental features since the 2.3.1 release.

    The experimental features in this release are:

    • APIs for chunk manipulation across data nodes in a distributed hypertable setup. This includes the ability to add a data node and move chunks to the new data node for cluster rebalancing.
    • The time_bucket_ng function, a newer version of time_bucket. This function supports years, months, days, hours, minutes, and seconds.
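
    The new function can be tried directly from SQL. A minimal sketch (assumes the extension is installed; in this release the function lives in the experimental schema):

    ```sql
    -- Bucket a date into calendar months, something plain time_bucket() cannot do:
    SELECT timescaledb_experimental.time_bucket_ng('1 month', date '2021-08-03');
    -- returns the start of the containing month (2021-08-01)
    ```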

    We’re committed to developing these experiments, giving the community a chance to provide early feedback and influence the direction of TimescaleDB’s development. We’ll travel faster with your input!

    Please create your feedback as a GitHub issue (using the experimental-schema label), describe what you found, and tell us the steps or share the code snippet to recreate it.

    This release also includes several bug fixes.

    PostgreSQL 11 deprecation announcement

    Timescale is working hard on our next exciting features. To make that possible, we require functionality that is available in PostgreSQL 12 and above. PostgreSQL 11 is not supported with TimescaleDB 2.4.

    Experimental Features

    • #3293 Add timescaledb_experimental schema
    • #3302 Add block_new_chunks and allow_new_chunks API to experimental schema. Add chunk based refresh_continuous_aggregate.
    • #3211 Introduce experimental time_bucket_ng function
    • #3366 Allow use of experimental time_bucket_ng function in continuous aggregates
    • #3408 Support for seconds, minutes and hours in time_bucket_ng
    • #3446 Implement cleanup for chunk copy/move.

    Bugfixes

    • #3401 Fix segfault for RelOptInfo without fdw_private
    • #3411 Verify compressed chunk validity for compressed path
    • #3416 Fix targetlist names for continuous aggregate views
    • #3434 Remove extension check from relcache invalidation callback
    • #3440 Fix remote_tx_heal_data_node to work with only current database

    Thanks

    • @fvannee for reporting an issue with hypertable expansion in functions
    • @amalek215 for reporting an issue with cache invalidation during pg_class vacuum full
    • @hardikm10 for reporting an issue with inserting into compressed chunks
    • @dberardo-com and @iancmcc for reporting an issue with extension updates after renaming columns of continuous aggregates.
    Source code(tar.gz)
    Source code(zip)
  • 2.3.1(Jul 7, 2021)

    This maintenance release contains bugfixes since the 2.3.0 release. We deem it moderate priority for upgrading. The release introduces the possibility of generating downgrade scripts, improves trigger handling for distributed hypertables, adds more randomness to chunk assignment to avoid thundering-herd issues, and fixes some issues in update handling as well as some other bugs.

    Bugfixes

    • #3279 Add some more randomness to chunk assignment
    • #3288 Fix failed update with parallel workers
    • #3300 Improve trigger handling on distributed hypertables
    • #3304 Remove paths that reference parent relids for compressed chunks
    • #3305 Fix pull_varnos miscomputation of relids set
    • #3310 Generate downgrade script
    • #3314 Fix heap buffer overflow in hypertable expansion
    • #3317 Fix heap buffer overflow in remote connection cache.
    • #3327 Make aggregates in caggs fully qualified
    • #3336 Fix pg_init_privs objsubid handling
    • #3345 Fix SkipScan distinct column identification
    • #3355 Fix heap buffer overflow when renaming compressed hypertable columns.
    • #3367 Improve DecompressChunk qual pushdown
    • #3377 Fix bad use of repalloc

    Thanks

    • @db-adrian for reporting an issue when accessing cagg view through postgres_fdw
    • @fncaldas and @pgwhalen for reporting an issue accessing caggs when public is not in search_path
    • @fvannee, @mglonnro and @ebreijo for reporting an issue with the upgrade script
    • @fvannee for reporting a performance regression with SkipScan

    The music for this release was Izzy Stradlin's On Down the Road.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.3.1.tar.lzma(1.19 MB)
  • 2.3.0(May 26, 2021)

    This release adds major new features since the 2.2.1 release. We deem it moderate priority for upgrading.

    This release adds support for inserting data into compressed chunks and improves performance when inserting data into distributed hypertables. Distributed hypertables now also support triggers and compression policies.

    The bug fixes in this release address issues related to the handling of privileges on compressed hypertables, locking, and triggers with transition tables.

    Features

    • #3116 Add distributed hypertable compression policies
    • #3162 Use COPY when executing distributed INSERTs
    • #3199 Add GENERATED column support on distributed hypertables
    • #3210 Add trigger support on distributed hypertables
    • #3230 Support for inserts into compressed chunks

    Bugfixes

    • #3213 Propagate grants to compressed hypertables
    • #3229 Use correct lock mode when updating chunk
    • #3243 Fix assertion failure in decompress_chunk_plan_create
    • #3250 Fix constraint triggers on hypertables
    • #3251 Fix segmentation fault due to incorrect call to chunk_scan_internal
    • #3252 Fix blocking triggers with transition tables

    Thanks

    • @yyjdelete for reporting a crash with decompress_chunk and identifying the bug in the code
    • @fabriziomello for documenting the prerequisites when compiling against PostgreSQL 13

    The music for this release was Queen's The Works.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.3.0.tar.lzma(1.18 MB)
  • 2.2.1(May 6, 2021)

    This maintenance release contains bugfixes since the 2.2.0 release. We deem it high priority for upgrading.

    This release extends Skip Scan to multinode by enabling the pushdown of DISTINCT to data nodes. It also fixes a number of bugs in the implementation of Skip Scan, in distributed hypertables, in creation of indexes, in compression, and in policies.

    Features

    • #3113 Pushdown "SELECT DISTINCT" in multi-node to allow use of Skip Scan

    Bugfixes

    • #3101 Use commit date in get_git_commit()
    • #3102 Fix REINDEX TABLE for distributed hypertables
    • #3104 Fix use after free in add_reorder_policy
    • #3106 Fix use after free in chunk_api_get_chunk_stats
    • #3109 Copy recreated object permissions on update
    • #3111 Fix CMAKE_BUILD_TYPE check
    • #3112 Use %u to format Oid instead of %d
    • #3118 Fix use after free in cache
    • #3123 Fix crash while using REINDEX TABLE CONCURRENTLY
    • #3135 Fix SkipScan path generation in DISTINCT queries with expressions
    • #3146 Fix SkipScan for IndexPaths without pathkeys
    • #3147 Skip ChunkAppend if AppendPath has no children
    • #3148 Make SELECT DISTINCT handle non-var targetlists
    • #3151 Fix fdw_relinfo_get assertion failure on DELETE
    • #3155 Inherit CFLAGS from PostgreSQL
    • #3169 Fix incorrect type cast in compression policy
    • #3183 Fix segfault in calculate_chunk_interval
    • #3185 Fix wrong datatype for integer based retention policy

    Thanks

    • @Dead2, @dv8472 and @einsibjarni for reporting an issue with multinode queries and views
    • @aelg for reporting an issue with policies on integer-based hypertables
    • @hperez75 for reporting an issue with Skip Scan
    • @nathanloisel for reporting an issue with compression on hypertables with integer-based timestamps
    • @xin-hedera for fixing an issue with compression on hypertables with integer-based timestamps

    The music for this release was Justin Timberlake's The 20/20 Experience – 2 of 2.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.2.1.tar.lzma(1.16 MB)
  • 2.2.0(Apr 14, 2021)

    This release adds major new features since the 2.1.1 release. We deem it moderate priority for upgrading.

    This release adds the Skip Scan optimization, which significantly improves the performance of queries with DISTINCT ON. This optimization is not yet available for queries on distributed hypertables.

    This release also adds a function to create a distributed restore point, which allows performing a consistent restore of a multi-node cluster from a backup.

    The bug fixes in this release address issues with size and stats functions, high memory usage in distributed inserts, slow distributed ORDER BY queries, indexes involving INCLUDE, and single chunk query planning.

    PostgreSQL 11 deprecation announcement

    Timescale is working hard on our next exciting features. To make that possible, we require functionality that is unfortunately absent on PostgreSQL 11. For this reason, we will continue supporting PostgreSQL 11 until mid-June 2021. Closer to that time, we will announce the specific version of TimescaleDB in which PostgreSQL 11 support will not be included going forward.

    Major Features

    • #2843 Add distributed restore point functionality
    • #3000 SkipScan to speed up SELECT DISTINCT

    Bugfixes

    • #2989 Refactor and harden size and stats functions
    • #3058 Reduce memory usage for distributed inserts
    • #3067 Fix extremely slow multi-node order by queries
    • #3082 Fix chunk index column name mapping
    • #3083 Keep Append pathkeys in ChunkAppend

    Thanks

    • @BowenGG for reporting an issue with indexes with INCLUDE
    • @fvannee for reporting an issue with ChunkAppend pathkeys
    • @pedrokost and @RobAtticus for reporting an issue with size functions on empty hypertables
    • @phemmer and @ryanbooz for reporting issues with slow multi-node order by queries
    • @stephane-moreau for reporting an issue with high memory usage during single-transaction inserts on a distributed hypertable.

    The music for this release was Michael Jackson's Thriller.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.2.0.tar.lzma(1.15 MB)
  • 2.1.1(Mar 31, 2021)

    This maintenance release contains bugfixes since the 2.1.0 release. We deem it high priority for upgrading.

    The bug fixes in this release address issues with CREATE INDEX and UPSERT for hypertables, custom jobs, and gapfill queries.

    This release marks TimescaleDB as a trusted extension in PG13, so that superuser privileges are not required anymore to install the extension.
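
    With the trusted designation, creating the extension no longer requires a superuser (the library must still be preloaded via shared_preload_libraries). A sketch on PostgreSQL 13:

    ```sql
    -- As a regular database owner, not a superuser:
    CREATE EXTENSION IF NOT EXISTS timescaledb;
    ```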

    Minor features

    • #2998 Mark timescaledb as trusted extension

    Bugfixes

    • #2948 Fix off by 4 error in histogram deserialize
    • #2974 Fix index creation for hypertables with dropped columns
    • #2990 Fix segfault in job_config_check for cagg
    • #2987 Fix crash due to txns in emit_log_hook_callback
    • #3042 Commit end transaction for CREATE INDEX
    • #3053 Fix gapfill/hashagg planner interaction
    • #3059 Fix UPSERT on hypertables with columns with defaults

    Thanks

    • @eloyekunle and @kitwestneat for reporting an issue with UPSERT
    • @jocrau for reporting an issue with index creation
    • @kev009 for fixing a compilation issue
    • @majozv and @pehlert for reporting an issue with time_bucket_gapfill

    The music for this release was David Bowie's Let's Dance.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.1.1.tar.lzma(1.12 MB)
  • 2.1.0(Feb 23, 2021)

    This release adds major new features since the 2.0.2 release. We deem it moderate priority for upgrading.

    This release adds the long-awaited support for PostgreSQL 13 to TimescaleDB. The minimum required PostgreSQL 13 version is 13.2 due to a security vulnerability affecting TimescaleDB functionality present in earlier versions of PostgreSQL 13.

    This release also relaxes some restrictions for compressed hypertables; namely, TimescaleDB now supports adding columns to compressed hypertables and renaming columns of compressed hypertables.

    Major Features

    • #2779 Add support for PostgreSQL 13

    Minor features

    • #2736 Support adding columns to hypertables with compression enabled
    • #2909 Support renaming columns of hypertables with compression enabled

    The music for this release was Rush's 2112.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.1.0.tar.lzma(1.12 MB)
  • 2.0.2(Feb 22, 2021)

    This maintenance release contains bugfixes since the 2.0.1 release. We deem it high priority for upgrading.

    The bug fixes in this release address issues with joins, the status of background jobs, and disabling compression. It also includes enhancements to continuous aggregates, including improved validation of policies and optimizations for faster refreshes when there are a lot of invalidations.

    Minor features

    • #2926 Optimize cagg refresh for small invalidations

    Bugfixes

    • #2850 Set status for backend in background jobs
    • #2883 Fix join qual propagation for nested joins
    • #2884 Add GUC to control join qual propagation
    • #2885 Fix compressed chunk check when disabling compression
    • #2908 Fix changing column type of clustered hypertables
    • #2942 Validate continuous aggregate policy

    Thanks

    • @zeeshanshabbir93 for reporting an issue with joins
    • @Antiarchitect for reporting the issue with slow refreshes of continuous aggregates.
    • @diego-hermida for reporting the issue about being unable to disable compression
    • @mtin for reporting the issue about wrong job status

    The music for this release was Justin Timberlake's The 20/20 Experience.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.0.2.tar.lzma(1.07 MB)
  • 2.0.1(Jan 28, 2021)

    This maintenance release contains bugfixes since the 2.0.0 release. We deem it high priority for upgrading.

    In particular the fixes contained in this maintenance release address issues in continuous aggregates, compression, JOINs with hypertables and when upgrading from previous versions.

    Bugfixes

    • #2772 Always validate existing database and extension
    • #2780 Fix config enum entries for remote data fetcher
    • #2806 Add check for dropped chunk on update
    • #2828 Improve cagg watermark caching
    • #2838 Fix catalog repair in update script
    • #2842 Do not mark job as started when setting next_start field
    • #2845 Fix continuous aggregate privileges during upgrade
    • #2851 Fix nested loop joins that involve compressed chunks
    • #2860 Fix projection in ChunkAppend nodes
    • #2861 Remove compression stat update from update script
    • #2865 Apply volatile function quals at decompresschunk node
    • #2866 Avoid partitionwise planning of partialize_agg
    • #2868 Fix corruption in gapfill plan
    • #2874 Fix partitionwise agg crash due to uninitialized memory

    Thanks

    • @alex88 for reporting an issue with joined hypertables
    • @brian-from-quantrocket for reporting an issue with extension update and dropped chunks
    • @dhodyn for reporting an issue when joining compressed chunks
    • @markatosi for reporting a segfault with partitionwise aggregates enabled
    • @PhilippJust for reporting an issue with add_job and initial_start
    • @sgorsh for reporting an issue when using pgAdmin on windows
    • @WarriorOfWire for reporting the bug with gapfill queries not being able to find pathkey item to sort

    The music for this release was Buckethead's Electric Tears.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.0.1.tar.lzma(1.06 MB)
  • 2.0.0(Dec 21, 2020)

    With this release, we are officially moving TimescaleDB 2.0 to GA, concluding several release candidates.

    TimescaleDB 2.0 adds the much-anticipated support for distributed hypertables (multi-node TimescaleDB), as well as new features and enhancements to core functionality to give users better clarity and more control and flexibility over their data.

    Multi-node architecture: In particular, with TimescaleDB 2.0, users can now create distributed hypertables across multiple instances of TimescaleDB, configured so that one instance serves as an access node and multiple others as data nodes. All queries for a distributed hypertable are issued to the access node, but inserted data and queries are pushed down across data nodes for greater scale and performance.

    Multi-node TimescaleDB can be self-managed or, for easier operation, launched within Timescale's fully-managed cloud services.
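
    A minimal sketch of setting up a distributed hypertable from the access node (host names and the table are hypothetical):

    ```sql
    -- Register data nodes, then distribute a hypertable across them:
    SELECT add_data_node('dn1', host => 'dn1.example.com');
    SELECT add_data_node('dn2', host => 'dn2.example.com');
    SELECT create_distributed_hypertable('conditions', 'time', 'device_id');
    ```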

    This release also adds:

    • Support for user-defined actions, allowing users to define, customize, and schedule automated tasks, which can be run by the built-in jobs scheduling framework now exposed to users.
    • Significant changes to continuous aggregates, which now separate the view creation from the policy. Users can now refresh individual regions of the continuous aggregate materialized view, or schedule automated refreshing via policy.
    • Redesigned informational views, including new (and more general) views for information about hypertable's dimensions and chunks, policies and user-defined actions, as well as support for multi-node TimescaleDB.
    • Moving all formerly enterprise features into our Community Edition, and updating Timescale License, which now provides additional (more permissive) rights to users and developers.

    Some of the changes above (e.g., continuous aggregates, updated informational views) do introduce breaking changes to APIs and are not backwards compatible. While the update scripts in TimescaleDB 2.0 will upgrade databases running TimescaleDB 1.x automatically, some of these API and feature changes may require changes to clients and/or upstream scripts that rely on the previous APIs. Before upgrading, we recommend reviewing upgrade documentation at docs.timescale.com for more details.

    Major Features

    TimescaleDB 2.0 moves the following major features to GA:

    • #1923 Add support for distributed hypertables
    • #2006 Add support for user-defined actions
    • #2125 #2221 Improve Continuous Aggregate API
    • #2084 #2089 #2098 #2417 Redesign informational views
    • #2435 Move enterprise features to community
    • #2437 Update Timescale License

    Previous Release Candidates

    • #2702 Release Candidate 4 (December 2, 2020)
    • #2637 Release Candidate 3 (November 12, 2020)
    • #2554 Release Candidate 2 (October 20, 2020)
    • #2478 Release Candidate 1 (October 1, 2020)

    Minor Features

    Since the last release candidate 4, there are several minor improvements:

    • #2746 Optimize locking for create chunk API
    • #2705 Block tableoid access on distributed hypertable
    • #2730 Do not allow unique index on compressed hypertables
    • #2764 Bootstrap data nodes with versioned extension

    Bugfixes

    Since the last release candidate 4, there are several bugfixes:

    • #2719 Support disabling compression on distributed hypertables
    • #2742 Fix compression status in chunks view for distributed chunks
    • #2751 Fix crash and cancel when adding data node
    • #2763 Fix check constraint on hypertable metadata table

    Thanks

    Thanks to all contributors for the TimescaleDB 2.0 release:

    • @airton-neto for reporting a bug in executing some queries with UNION
    • @nshah14285 for reporting an issue with propagating privileges
    • @kalman5 for reporting an issue with renaming constraints
    • @LbaNeXte for reporting a bug in decompression for queries with subqueries
    • @semtexzv for reporting an issue with continuous aggregates on int-based hypertables
    • @mr-ns for reporting an issue with privileges for creating chunks
    • @cloud-rocket for reporting an issue with setting an owner on continuous aggregate
    • @jocrau for reporting a bug during creating an index with transaction per chunk
    • @fvannee for reporting an issue with custom time types
    • @ArtificialPB for reporting a bug in executing queries with conditional ordering on compressed hypertable
    • @dutchgecko for reporting an issue with continuous aggregate datatype handling
    • @lambdaq for suggesting to improve error message in continuous aggregate creation
    • @francesco11112 for reporting memory issue on COPY
    • @Netskeh for reporting bug on time_bucket problem in continuous aggregates
    • @mr-ns for reporting the issue with CTEs on distributed hypertables
    • @akamensky for reporting an issue with recursive cache invalidation
    • @ryanbooz for reporting slow queries with real-time aggregation on continuous aggregates
    • @cevian for reporting an issue with disabling compression on distributed hypertables

    The music for this release was The Police's Synchronicity.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-2.0.0.tar.lzma(1.06 MB)
  • 2.0.0-rc4(Dec 21, 2020)

    This release candidate contains bugfixes since the previous release candidate, as well as additional minor features. It improves validation of configuration changes for background jobs, adds support for gapfill on distributed tables, contains improvements to the memory handling for large COPY, and contains improvements to compression for distributed hypertables.

    Minor Features

    • #2689 Check configuration in alter_job and add_job
    • #2696 Support gapfill on distributed hypertable
    • #2468 Show more information in get_git_commit
    • #2678 Include user actions into job stats view
    • #2664 Fix support for complex aggregate expression
    • #2672 Add hypertable to continuous aggregates view
    • #2662 Save compression metadata settings on access node
    • #2707 Introduce additional db for data node bootstrapping

    Bugfixes

    • #2688 Fix crash for concurrent drop and compress chunk
    • #2666 Fix timeout handling in async library
    • #2683 Fix crash in add_job when given NULL interval
    • #2698 Improve memory handling for remote COPY
    • #2555 Set metadata for chunks compressed before 2.0
    Source code(tar.gz)
    Source code(zip)
  • 2.0.0-rc3(Dec 21, 2020)

    This release candidate contains bugfixes since the previous release candidate, as well as additional minor features including support for "user-mapping" authentication between access/data nodes and an experimental API for refreshing continuous aggregates on individual chunks.

    Minor Features

    • #2627 Add optional user mappings support
    • #2635 Add API to refresh continuous aggregate on chunk

    Bugfixes

    • #2560 Fix SCHEMA DROP CASCADE with continuous aggregates
    • #2593 Set explicitly all lock parameters in alter_job
    • #2604 Fix chunk creation on hypertables with foreign key constraints
    • #2610 Support analyze of internal compression table
    • #2612 Optimize internal cagg_watermark function
    • #2613 Refresh correct partial during refresh on drop
    • #2617 Fix validation of available extensions on data node
    • #2619 Fix segfault in decompress_chunk for chunks with dropped columns
    • #2620 Fix DROP CASCADE for continuous aggregate
    • #2625 Fix subquery errors when using AsyncAppend
    • #2626 Fix incorrect total_table_pages setting for compressed scan
    • #2628 Stop recursion in cache invalidation
    Source code(tar.gz)
    Source code(zip)
  • 2.0.0-rc2(Dec 21, 2020)

    This release candidate contains bugfixes since the previous release candidate.

    Minor Features

    • #2520 Support non-transactional distributed_exec

    Bugfixes

    • #2307 Overflow handling for refresh policy with integer time
    • #2503 Remove error for correct bootstrap of data node
    • #2507 Fix validation logic when adding a new data node
    • #2510 Fix outer join qual propagation
    • #2514 Lock dimension slices when creating new chunk
    • #2515 Add if_attached argument to detach_data_node()
    • #2517 Fix member access within misaligned address in chunk_update_colstats
    • #2525 Fix index creation on hypertables with dropped columns
    • #2543 Pass correct status to lock_job
    • #2544 Assume custom time type range is same as bigint
    • #2563 Fix DecompressChunk path generation
    • #2564 Improve continuous aggregate datatype handling
    • #2568 Change use of ssl_dir GUC
    • #2571 Make errors and messages conform to style guide
    • #2577 Exclude compressed chunks from ANALYZE/VACUUM
    Source code(tar.gz)
    Source code(zip)
  • 2.0.0-rc1(Dec 21, 2020)

    This release adds major new features and bugfixes since the 1.7.4 release. We deem it moderate priority for upgrading.

    This release adds the long-awaited support for distributed hypertables to TimescaleDB. With 2.0, users can create distributed hypertables across multiple instances of TimescaleDB, configured so that one instance serves as an access node and multiple others as data nodes. All queries for a distributed hypertable are issued to the access node, but inserted data and queries are pushed down across data nodes for greater scale and performance.

    This release also adds support for user-defined actions allowing users to define actions that are run by the TimescaleDB automation framework.

    In addition to these major new features, the 2.0 branch introduces breaking changes to APIs and existing features, such as continuous aggregates. These changes are not backwards compatible and might require changes to clients and/or scripts that rely on the previous APIs. Please review our updated documentation and do proper testing to ensure compatibility with your existing applications.

    The noticeable breaking changes in APIs are:

    • Redefined functions for policies
    • A continuous aggregate is now created with CREATE MATERIALIZED VIEW instead of CREATE VIEW and automated refreshing requires adding a policy via add_continuous_aggregate_policy
    • Redesign of informational views, including new (and more general) views for information about policies and user-defined actions
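
    Under the new API, a continuous aggregate and its refresh policy are created separately. A sketch (table and column names hypothetical):

    ```sql
    CREATE MATERIALIZED VIEW conditions_daily
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 day', time) AS bucket,
           device_id,
           avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY bucket, device_id;

    -- Automated refreshing is now added as a separate policy:
    SELECT add_continuous_aggregate_policy('conditions_daily',
        start_offset      => INTERVAL '3 days',
        end_offset        => INTERVAL '1 hour',
        schedule_interval => INTERVAL '1 hour');
    ```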

    This release candidate is upgradable, so if you are on a previous release (e.g., 1.7.4) you can upgrade to the release candidate and later expect to be able to upgrade to the final 2.0 release. However, please carefully consider your compatibility requirements before upgrading.

    Major Features

    • #1923 Add support for distributed hypertables
    • #2006 Add support for user-defined actions
    • #2435 Move enterprise features to community
    • #2437 Update Timescale License

    Minor Features

    • #2011 Constify TIMESTAMPTZ OP INTERVAL in constraints
    • #2105 Support moving compressed chunks

    Bugfixes

    • #1843 Improve handling of "dropped" chunks
    • #1886 Change ChunkAppend leader to use worker subplan
    • #2116 Propagate privileges from hypertables to chunks
    • #2263 Fix timestamp overflow in time_bucket optimization
    • #2270 Fix handling of non-reference counted TupleDescs in gapfill
    • #2325 Fix rename constraint/rename index
    • #2370 Fix detection of hypertables in subqueries
    • #2376 Fix caggs width expression handling on int based hypertables
    • #2416 Check insert privileges to create chunk
    • #2428 Allow owner change of continuous aggregate
    • #2436 Propagate grants in continuous aggregates
    Source code(tar.gz)
    Source code(zip)
  • 1.7.4(Sep 8, 2020)

    This maintenance release contains bugfixes since the 1.7.3 release. We deem it high priority for upgrading if TimescaleDB is deployed with replicas (synchronous or asynchronous).

    In particular, the fixes contained in this maintenance release address an issue with running queries on compressed hypertables on standby nodes.

    Bugfixes

    • #2336 Remove tuple lock on select path

    The music for this release was Rufus and Chaka Khan's 1974 classic Rags to Rufus.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.7.4.tar.lzma(834.48 KB)
  • 1.7.3(Aug 27, 2020)

    This maintenance release contains bugfixes since the 1.7.2 release. We deem it high priority for upgrading.

    In particular the fixes contained in this maintenance release address issues in compression, drop_chunks and the background worker scheduler.

    Bugfixes

    • #2059 Improve inferring start and stop arguments from gapfill query
    • #2067 Support moving compressed chunks
    • #2068 Apply SET TABLESPACE for compressed chunks
    • #2090 Fix index creation with IF NOT EXISTS for existing indexes
    • #2092 Fix delete on tables involving hypertables with compression
    • #2164 Fix telemetry installed_time format
    • #2184 Fix background worker scheduler memory consumption
    • #2222 Fix negative bitmapset member not allowed in decompression
    • #2255 Propagate privileges from hypertables to chunks
    • #2256 Fix segfault in chunk_append with space partitioning
    • #2259 Fix recursion in cache processing
    • #2261 Lock dimension slice tuple when scanning

    Thanks

    • @akamensky for reporting an issue with drop_chunks and ChunkAppend with space partitioning
    • @dewetburger430 for reporting an issue with setting tablespace for compressed chunks
    • @fvannee for reporting an issue with cache invalidation
    • @nexces for reporting an issue with ChunkAppend on space-partitioned hypertables
    • @PichetGoulu for reporting an issue with index creation and IF NOT EXISTS
    • @prathamesh-sonpatki for contributing a typo fix
    • @sezaru for reporting an issue with background worker scheduler memory consumption

    The music for this release was Bob Dylan's Blonde on Blonde.

    Source code(tar.gz)
    Source code(zip)
    timescaledb-1.7.3.tar.lzma(834.18 KB)

The database built for IoT streaming data storage and real-time stream processing.

HStreamDB 575 Dec 26, 2022
A mini database for learning database

A mini database for learning database

Chuckie Tan 4 Nov 14, 2022
Trilogy is a client library for MySQL-compatible database servers, designed for performance, flexibility, and ease of embedding.

Trilogy is a client library for MySQL-compatible database servers, designed for performance, flexibility, and ease of embedding.

GitHub 482 Dec 31, 2022
Nebula Graph is a distributed, fast open-source graph database featuring horizontal scalability and high availability

Nebula Graph is an open-source graph database capable of hosting super large scale graphs with dozens of billions of vertices (nodes) and trillions of edges, with milliseconds of latency.

vesoft inc. 834 Dec 24, 2022
PolarDB for PostgreSQL (PolarDB for short) is an open source database system based on PostgreSQL.

PolarDB for PostgreSQL (PolarDB for short) is an open source database system based on PostgreSQL. It extends PostgreSQL to become a share-nothing distributed database, which supports global data consistency and ACID across database nodes, distributed SQL processing, and data redundancy and high availability through Paxos based replication. PolarDB is designed to add values and new features to PostgreSQL in dimensions of high performance, scalability, high availability, and elasticity. At the same time, PolarDB remains SQL compatibility to single-node PostgreSQL with best effort.

Alibaba 2.5k Dec 31, 2022
DB Browser for SQLite (DB4S) is a high quality, visual, open source tool to create, design, and edit database files compatible with SQLite.

DB Browser for SQLite What it is DB Browser for SQLite (DB4S) is a high quality, visual, open source tool to create, design, and edit database files c

null 17.5k Jan 2, 2023
The open-source database for the realtime web.

RethinkDB What is RethinkDB? Open-source database for building realtime web applications NoSQL database that stores schemaless JSON documents Distribu

RethinkDB 25.9k Jan 9, 2023
SpDB is a data integration tool designed to organize scientific data from different sources under the same namespace according to a global schema and to provide access to them in a unified form (views)

SpDB is a data integration tool designed to organize scientific data from different sources under the same namespace according to a global schema and to provide access to them in a unified form (views). Its main purpose is to provide a unified data access interface for complex scientific computations in order to enable the interaction and integration between different programs and databases.

YU Zhi 0 Jun 22, 2022
High-performance time-series aggregation for PostgreSQL

PipelineDB has joined Confluent, read the blog post here. PipelineDB will not have new releases beyond 1.0.0, although critical bugs will still be fix

PipelineDB 2.5k Dec 26, 2022