Alternative LAZ implementation for C++ and JavaScript

Overview

What is this?

Alternative LAZ implementation. It supports compilation and use in JavaScript, use in database contexts such as pgpointcloud and Oracle Point Cloud, and it executes faster than the LASzip codebase.

Why?

Cross-compiling LASzip to JavaScript with Emscripten produced unusably slow code, and the generated JavaScript was never going to be a feasible way to bring LASzip to all browsers. This project provides an alternative implementation that plays nicely with Emscripten and takes a more rigorous software-engineering approach to a LASzip implementation.

How do I build this?

There are two ways to build this: using Docker (which may not use the most recent Emscripten SDK), or manually, by installing the Emscripten SDK on your machine and using it to build laz-perf.

Using Docker

If you're a Docker commando, you can run the provided build script to produce both the WASM and JS builds like so:

docker run -it -v $(pwd):/src trzeci/emscripten:sdk-incoming-64bit bash emscripten-docker-build.sh

You should then end up with build-wasm and build-js directories containing the respective builds.

Manual method

Download the most recent version of the Emscripten toolchain from the Emscripten website and follow their setup process.

Once done, clone the laz-perf repository, navigate to its root directory, and make a directory to stage build files in:

git clone https://github.com/hobu/laz-perf.git 
cd laz-perf
mkdir build ; cd build

Then run cmake like so:

cmake .. \
    -DEMSCRIPTEN=1 \
    -DCMAKE_TOOLCHAIN_FILE=<path-to-emsdk>/emscripten/<emsdk-version>/cmake/Modules/Platform/Emscripten.cmake

To perform a WebAssembly build, pass -DWASM=1 to the command above.

You should now be able to build JS/WASM output like so:

VERBOSE=1 make

Benchmark results so far

All tests were run on a 2013 MacBook Pro, 2.6 GHz Intel Core i7, 16 GB 1600 MHz DDR3. The arithmetic encoder was run on a four-field struct with two signed and two unsigned fields. Please see benchmarks/brute.cpp for how these tests were run. The Emscripten version used was v1.14.0 (fastcomp LLVM); the JS host was Node v0.10.18.
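As a rough illustration of the benchmark setup, the record compressed in each run is a four-field struct of this shape. This is a sketch; the field names are hypothetical, and benchmarks/brute.cpp is the authoritative source for the actual layout.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of the kind of record compressed in the benchmark:
// a four-field struct with two signed and two unsigned fields. The field
// names here are illustrative; see benchmarks/brute.cpp for the real ones.
struct BenchRecord
{
    int32_t  s1;
    int32_t  s2;
    uint32_t u1;
    uint32_t u2;
};

// Each timed run compresses `Count` such records, with the init,
// compress, and flush phases measured separately.
constexpr std::size_t benchRecordSize() { return sizeof(BenchRecord); }
```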

Native:

      Count       Comp Init       Comp Time      Comp Flush     Decomp Init     Decomp Time
       1000        0.000001        0.000279        0.000000        0.000000        0.000297
      10000        0.000000        0.001173        0.000000        0.000000        0.001512
     100000        0.000000        0.009104        0.000000        0.000000        0.011168
    1000000        0.000000        0.082419        0.000000        0.000000        0.108797

Node.js v0.10.25:

      Count       Comp Init       Comp Time      Comp Flush     Decomp Init     Decomp Time
       1000        0.000586        0.014682        0.000273        0.000383        0.008012
      10000        0.000022        0.017960        0.000009        0.000004        0.020219
     100000        0.000030        0.128615        0.000008        0.000004        0.141459
    1000000        0.000010        1.245053        0.000009        0.000005        1.396419

Firefox v28.0:

      Count       Comp Init       Comp Time      Comp Flush     Decomp Init     Decomp Time
       1000        0.000005        0.001311        0.000006        0.000003        0.000820
      10000        0.000003        0.007966        0.000004        0.000001        0.007299
     100000        0.000001        0.062016        0.000003        0.000001        0.064037
    1000000        0.000002        0.662454        0.000009        0.000003        0.673866

Google Chrome v34.0.1847.116:

      Count       Comp Init       Comp Time      Comp Flush     Decomp Init     Decomp Time
       1000        0.000751        0.012357        0.000424        0.000516        0.008413
      10000        0.000016        0.006971        0.000016        0.000004        0.009481
     100000        0.000008        0.059768        0.000009        0.000004        0.070253
    1000000        0.000009        0.576017        0.000019        0.000005        0.658435
Comments
  • Cannot pip install lazperf 1.2 from pypi

    The installation of the lazperf-1.2 package available on PyPI fails. It looks to me like it fails because the lazperf 1.2 tarball does not include the laz-perf directory (and the cpp/hpp source files that should be in there). See below.

    $ virtualenv --python=python3 venv
    $ source venv/bin/activate
    (venv) $ pip install --no-cache-dir numpy             # installs numpy 1.14.3 from pypi
    (venv) $ pip install --no-cache-dir 'lazperf==1.2'
    Collecting lazperf==1.2
      Downloading https://files.pythonhosted.org/packages/da/4a/9ea92d0d5133047036561299190acf47f6fadf4cfe53795d9b7e9de759af/lazperf-1.2.tar.gz (144kB)
        100% |████████████████████████████████| 153kB 105kB/s 
    Requirement already satisfied: numpy>=1.11 in ./venv/lib/python3.6/site-packages (from lazperf==1.2) (1.14.3)
    Installing collected packages: lazperf
      Running setup.py install for lazperf ... error
        Complete output from command /home/elemoine/src/laz-perf/python/venv/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-927_o0iy/lazperf/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-yvh8vshl/install-record.txt --single-version-externally-managed --compile --install-headers /home/elemoine/src/laz-perf/python/venv/include/site/python3.6/lazperf:
        running install
        running build
        running build_py
        creating build
        creating build/lib.linux-x86_64-3.6
        creating build/lib.linux-x86_64-3.6/lazperf
        copying lazperf/__init__.py -> build/lib.linux-x86_64-3.6/lazperf
        running egg_info
        writing lazperf.egg-info/PKG-INFO
        writing dependency_links to lazperf.egg-info/dependency_links.txt
        writing requirements to lazperf.egg-info/requires.txt
        writing top-level names to lazperf.egg-info/top_level.txt
        reading manifest file 'lazperf.egg-info/SOURCES.txt'
        reading manifest template 'MANIFEST.in'
        warning: no files found matching '*' under directory 'laz-perf'
        writing manifest file 'lazperf.egg-info/SOURCES.txt'
        copying lazperf/PyLazperf.cpp -> build/lib.linux-x86_64-3.6/lazperf
        copying lazperf/PyLazperf.hpp -> build/lib.linux-x86_64-3.6/lazperf
        copying lazperf/PyLazperfTypes.hpp -> build/lib.linux-x86_64-3.6/lazperf
        copying lazperf/PyVlrCompressor.cpp -> build/lib.linux-x86_64-3.6/lazperf
        copying lazperf/pylazperfapi.cpp -> build/lib.linux-x86_64-3.6/lazperf
        copying lazperf/pylazperfapi.pyx -> build/lib.linux-x86_64-3.6/lazperf
        running build_ext
        building 'lazperf.pylazperfapi' extension
        creating build/temp.linux-x86_64-3.6
        creating build/temp.linux-x86_64-3.6/lazperf
        x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fdebug-prefix-map=/build/python3.6-bL6UQz/python3.6-3.6.5=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I. -I/home/elemoine/src/laz-perf/python/venv/lib/python3.6/site-packages/numpy/core/include -I/usr/include/python3.6m -I/home/elemoine/src/laz-perf/python/venv/include/python3.6m -c lazperf/pylazperfapi.cpp -o build/temp.linux-x86_64-3.6/lazperf/pylazperfapi.o -std=c++11 -g -O0
        cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
        In file included from lazperf/pylazperfapi.cpp:482:0:
        lazperf/PyLazperf.hpp:3:10: fatal error: laz-perf/common/common.hpp: No such file or directory
         #include <laz-perf/common/common.hpp>
                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
        compilation terminated.
        error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
        
        ----------------------------------------
    Command "/home/elemoine/src/laz-perf/python/venv/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-927_o0iy/lazperf/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-yvh8vshl/install-record.txt --single-version-externally-managed --compile --install-headers /home/elemoine/src/laz-perf/python/venv/include/site/python3.6/lazperf" failed with error code 1 in /tmp/pip-install-927_o0iy/lazperf/
    
    opened by elemoine 17
  • Future plans for laz-perf

    I've been watching some of the recent pull requests and merges from @tmontaigu (really awesome, BTW!) and also noticed that you've done a lot of recent work to add support for reading from buffers in laspy. Do you think you'll try to add laz-perf to laspy to get native LAZ IO?

    @hobu, I also noticed you had a pull request from a few years ago to do just this, but it doesn't look like the laspy devs went anywhere with it...

    opened by mccarthyryanc 16
  • Python setup.py import numpy while having as dependency

    In laz-perf/python/setup.py the numpy module is imported while also being declared as an installation dependency. This means that if lazperf is in a dependency chain, such as a requirements.txt, the installation will fail. A possible solution would be to remove the numpy import from setup.py, but that may not be easy in this case. Looking at it briefly, would it be possible to move the setup() call to the top of the file so that the numpy dependency is installed before the import runs?

    In my situation I can reproduce this by installing the module in a Docker instance before numpy is installed. You should also be able to reproduce it with a fresh virtualenv or Python installation.

    This is the error that is produced in my situation:

      Downloading https://files.pythonhosted.org/packages/07/e9/2020d50b7c9465831f6246812c382b03cff14c5b4708b53cbebd1d2e61c1/lazperf-1.3.tar.gz (186kB)
        ERROR: Command errored out with exit status 1:
         command: /usr/local/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-whuivcst/lazperf/setup.py'"'"'; __file__='"'"'/tmp/pip-install-whuivcst/lazperf/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info
             cwd: /tmp/pip-install-whuivcst/lazperf/
        Complete output (5 lines):
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-install-whuivcst/lazperf/setup.py", line 11, in <module>
            import numpy
        ModuleNotFoundError: No module named 'numpy'
        ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    
    opened by matthew-jones-uk 10
  • Postgres 10 crashing on decompressing patch with laz-perf

    I've recently rebuilt my pgpointcloud setup but am running into a segfault when doing PC_Intersection(patch, geom). According to the backtrace below, this seems traceable to the laz-perf decoder, but I'm not entirely sure. Can someone confirm that it is indeed coming from the laz-perf decoder?

    I am running the PostgreSQL 10 Ubuntu package and fresh builds of laszip-src-3.2.9, laz-perf-1.3.0, and pointcloud-1.2.0.

    The data in the database was loaded earlier by pgpointcloud, likely with the same release versions.

    #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
    #1  0x00007fc15eda2535 in __GI_abort () at abort.c:79
    #2  0x00007fc15eda240f in __assert_fail_base (fmt=0x7fc15ef30588 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", 
        assertion=0x7fb9202660dc "sym < (1<<16)", file=0x7fb920266050 "/usr/local/include/laz-perf/decoder.hpp", 
        line=170, function=<optimized out>) at assert.c:92
    #3  0x00007fc15edb2012 in __GI___assert_fail (assertion=0x7fb9202660dc "sym < (1<<16)", 
        file=0x7fb920266050 "/usr/local/include/laz-perf/decoder.hpp", line=170, 
        function=0x7fb920266600 <laszip::decoders::arithmetic<LazPerfBuf>::readShort()::__PRETTY_FUNCTION__> "U16 laszip::decoders::arithmetic<TInputStream>::readShort() [with TInputStream = LazPerfBuf; U16 = short unsigned int]")
        at assert.c:101
    #4  0x00007fb920262a2e in laszip::decoders::arithmetic<LazPerfBuf>::readShort() ()
       from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #5  0x00007fb920262458 in laszip::decoders::arithmetic<LazPerfBuf>::readBits(unsigned int) ()
       from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #6  0x00007fb920261f50 in int laszip::decompressors::integer::readCorrector<laszip::decoders::arithmetic<LazPerfBuf>, laszip::models::arithmetic>(laszip::decoders::arithmetic<LazPerfBuf>&, laszip::models::arithmetic&) ()
       from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #7  0x00007fb920261bb8 in int laszip::decompressors::integer::decompress<laszip::decoders::arithmetic<LazPerfBuf> >(laszip::decoders::arithmetic<LazPerfBuf>&, int, unsigned int) ()
       from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #8  0x00007fb9202611c8 in int laszip::formats::field<int, laszip::formats::standard_diff_method<int> >::decompressWith<laszip::decoders::arithmetic<LazPerfBuf> >(laszip::decoders::arithmetic<LazPerfBuf>&) ()
       from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #9  0x00007fb920260b9e in laszip::formats::dynamic_decompressor_field<laszip::decoders::arithmetic<LazPerfBuf>, laszip::formats::field<int, laszip::formats::standard_diff_method<int> > >::decompressRaw(char*) ()
       from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #10 0x00007fb920259c1e in laszip::formats::dynamic_field_decompressor<laszip::decoders::arithmetic<LazPerfBuf> >::decompress(char*) () from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #11 0x00007fb920257d64 in LazPerfDecompressor::decompress(unsigned char*, unsigned long) ()
       from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #12 0x00007fb9202574f1 in lazperf_uncompress_from_compressed () from /usr/lib/postgresql/10/lib/pointcloud-1.2.so
    #13 0x00007fb920256f83 in pc_patch_uncompressed_from_lazperf (palaz=0x55e6f8560cc0) at pc_patch_lazperf.c:96
    #14 0x00007fb92025705b in pc_pointlist_from_lazperf (palaz=<optimized out>) at pc_patch_lazperf.c:77
    #15 0x00007fb9202536b5 in pc_pointlist_from_patch (patch=patch@entry=0x55e6f8560cc0) at pc_pointlist.c:130
    #16 0x00007fb92024a395 in pcpatch_unnest (fcinfo=0x55e6f8537d08) at pc_access.c:589
    #17 0x000055e6ec765e8b in ExecMakeFunctionResultSet (fcache=0x55e6f8537c98, 
        econtext=econtext@entry=0x55e6f85371d8, isNull=<optimized out>, isDone=isDone@entry=0x55e6f8537c80)
        at ./build/../src/backend/executor/execSRF.c:579
    #18 0x000055e6ec77f3ef in ExecProjectSRF (node=node@entry=0x55e6f85370c8, continuing=continuing@entry=0 '\000')
        at ./build/../src/backend/executor/nodeProjectSet.c:166
    #19 0x000055e6ec77f4b4 in ExecProjectSet (pstate=0x55e6f85370c8)
        at ./build/../src/backend/executor/nodeProjectSet.c:96
    #20 0x000055e6ec781e2d in ExecProcNode (node=0x55e6f85370c8) at ./build/../src/include/executor/executor.h:250
    #21 CteScanNext (node=0x55e6f8538c18) at ./build/../src/backend/executor/nodeCtescan.c:103
    #22 0x000055e6ec764b89 in ExecScanFetch (recheckMtd=0x55e6ec781d10 <CteScanRecheck>, 
        accessMtd=0x55e6ec781d40 <CteScanNext>, node=0x55e6f8538c18) at ./build/../src/backend/executor/execScan.c:97
    #23 ExecScan (node=0x55e6f8538c18, accessMtd=0x55e6ec781d40 <CteScanNext>, 
        recheckMtd=0x55e6ec781d10 <CteScanRecheck>) at ./build/../src/backend/executor/execScan.c:164
    #24 0x000055e6ec781e2d in ExecProcNode (node=0x55e6f8538c18) at ./build/../src/include/executor/executor.h:250
    #25 CteScanNext (node=0x55e6f853a1e8) at ./build/../src/backend/executor/nodeCtescan.c:103
    #26 0x000055e6ec764b89 in ExecScanFetch (recheckMtd=0x55e6ec781d10 <CteScanRecheck>, 
        accessMtd=0x55e6ec781d40 <CteScanNext>, node=0x55e6f853a1e8) at ./build/../src/backend/executor/execScan.c:97
    #27 ExecScan (node=0x55e6f853a1e8, accessMtd=0x55e6ec781d40 <CteScanNext>, 
        recheckMtd=0x55e6ec781d10 <CteScanRecheck>) at ./build/../src/backend/executor/execScan.c:164
    #28 0x000055e6ec781e2d in ExecProcNode (node=0x55e6f853a1e8) at ./build/../src/include/executor/executor.h:250
    #29 CteScanNext (node=0x55e6f853c028) at ./build/../src/backend/executor/nodeCtescan.c:103
    #30 0x000055e6ec764d39 in ExecScanFetch (recheckMtd=0x55e6ec781d10 <CteScanRecheck>, 
        accessMtd=0x55e6ec781d40 <CteScanNext>, node=0x55e6f853c028) at ./build/../src/backend/executor/execScan.c:97
    #31 ExecScan (node=0x55e6f853c028, accessMtd=0x55e6ec781d40 <CteScanNext>, 
        recheckMtd=0x55e6ec781d10 <CteScanRecheck>) at ./build/../src/backend/executor/execScan.c:147
    #32 0x000055e6ec76acdc in ExecProcNode (node=0x55e6f853c028) at ./build/../src/include/executor/executor.h:250
    #33 fetch_input_tuple (aggstate=aggstate@entry=0x55e6f853b9e0) at ./build/../src/backend/executor/nodeAgg.c:695
    #34 0x000055e6ec76cf58 in agg_retrieve_direct (aggstate=0x55e6f853b9e0)
        at ./build/../src/backend/executor/nodeAgg.c:2362
    #35 ExecAgg (pstate=0x55e6f853b9e0) at ./build/../src/backend/executor/nodeAgg.c:2173
    #36 0x000055e6ec781e2d in ExecProcNode (node=0x55e6f853b9e0) at ./build/../src/include/executor/executor.h:250
    #37 CteScanNext (node=0x55e6f854a938) at ./build/../src/backend/executor/nodeCtescan.c:103
    #38 0x000055e6ec764d39 in ExecScanFetch (recheckMtd=0x55e6ec781d10 <CteScanRecheck>, 
        accessMtd=0x55e6ec781d40 <CteScanNext>, node=0x55e6f854a938) at ./build/../src/backend/executor/execScan.c:97
    #39 ExecScan (node=0x55e6f854a938, accessMtd=0x55e6ec781d40 <CteScanNext>, 
        recheckMtd=0x55e6ec781d10 <CteScanRecheck>) at ./build/../src/backend/executor/execScan.c:147
    #40 0x000055e6ec75e9b3 in ExecProcNode (node=0x55e6f854a938) at ./build/../src/include/executor/executor.h:250
    #41 ExecutePlan (execute_once=<optimized out>, dest=0x55e6f8524aa8, direction=<optimized out>, numberTuples=1, 
        sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, 
        planstate=0x55e6f854a938, estate=0x55e6f8536e28) at ./build/../src/backend/executor/execMain.c:1723
    #42 standard_ExecutorRun (queryDesc=0x55e6f8524af8, direction=<optimized out>, count=1, 
        execute_once=<optimized out>) at ./build/../src/backend/executor/execMain.c:364
    #43 0x00007fc15fc09075 in pgss_ExecutorRun (queryDesc=0x55e6f8524af8, direction=ForwardScanDirection, count=1, 
        execute_once=<optimized out>) at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:889
    #44 0x000055e6ec769b37 in postquel_getnext (es=0x55e6f8524778, es=0x55e6f8524778, fcache=0x55e6f8514da8, 
        fcache=0x55e6f8514da8) at ./build/../src/backend/executor/functions.c:1169
    #45 fmgr_sql (fcinfo=0x55e6f7736d58) at ./build/../src/backend/executor/functions.c:1159
    #46 0x000055e6ec75b131 in ExecInterpExpr (state=0x55e6f77368c0, econtext=0x55e6eddf1608, isnull=<optimized out>)
        at ./build/../src/backend/executor/execExprInterp.c:650
    #47 0x000055e6ec77f09e in ExecEvalExprSwitchContext (isNull=0x7ffd688ec637 "", econtext=0x55e6eddf1608, 
        state=0x55e6f77368c0) at ./build/../src/include/executor/executor.h:308
    #48 ExecProject (projInfo=0x55e6f77368b8) at ./build/../src/include/executor/executor.h:342
    #49 ExecNestLoop (pstate=<optimized out>) at ./build/../src/backend/executor/nodeNestloop.c:241
    #50 0x000055e6ec75e9b3 in ExecProcNode (node=0x55e6eddf14f8) at ./build/../src/include/executor/executor.h:250
    #51 ExecutePlan (execute_once=<optimized out>, dest=0x55e6f6f91288, direction=<optimized out>, numberTuples=0, 
        sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, 
        planstate=0x55e6eddf14f8, estate=0x55e6eddf12b8) at ./build/../src/backend/executor/execMain.c:1723
    #52 standard_ExecutorRun (queryDesc=0x55e6eea34e98, direction=<optimized out>, count=0, 
        execute_once=<optimized out>) at ./build/../src/backend/executor/execMain.c:364
    #53 0x00007fc15fc09075 in pgss_ExecutorRun (queryDesc=0x55e6eea34e98, direction=ForwardScanDirection, count=0, 
        execute_once=<optimized out>) at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:889
    #54 0x000055e6ec8a11fc in PortalRunSelect (portal=portal@entry=0x55e6ede53698, forward=forward@entry=1 '\001', 
        count=0, count@entry=9223372036854775807, dest=dest@entry=0x55e6f6f91288)
        at ./build/../src/backend/tcop/pquery.c:932
    #55 0x000055e6ec8a2788 in PortalRun (portal=portal@entry=0x55e6ede53698, count=count@entry=9223372036854775807, 
        isTopLevel=isTopLevel@entry=1 '\001', run_once=run_once@entry=1 '\001', dest=dest@entry=0x55e6f6f91288, 
        altdest=altdest@entry=0x55e6f6f91288, completionTag=0x7ffd688eca00 "")
        at ./build/../src/backend/tcop/pquery.c:773
    #56 0x000055e6ec89e4fa in exec_simple_query (
        query_string=0x55e6eddf61f8 "SELECT PC_Intersection(pa,geom) FROM tmptom.blocks a INNER JOIN ahn3_pointcloud.vw_buildings b ON PC_Intersects(a.geom, b.pa) WHERE blockid = '0363100012073393';")
        at ./build/../src/backend/tcop/postgres.c:1122
    #57 0x000055e6ec89fcd8 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x55e6edd9a870, 
        dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4117
    #58 0x000055e6ec82cdd3 in BackendRun (port=0x55e6edd920b0) at ./build/../src/backend/postmaster/postmaster.c:4402
    #59 BackendStartup (port=0x55e6edd920b0) at ./build/../src/backend/postmaster/postmaster.c:4074
    #60 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1756
    #61 0x000055e6ec82dc52 in PostmasterMain (argc=5, argv=<optimized out>)
        at ./build/../src/backend/postmaster/postmaster.c:1364
    #62 0x000055e6ec5cbd29 in main (argc=5, argv=0x55e6edd37c00) at ./build/../src/backend/main/main.c:228
    
    opened by tomvantilburg 10
  • addField() deprecation warning

    The commit https://github.com/hobu/laz-perf/commit/e0bd817dd15cf57aa04d8b6b3ae809fbc1af84bc marks eb_vlr::addField() as deprecated, but it is still used in the code. When I use the code in QGIS, I get the following warning (which is a problem because our CI treats warnings as errors):

    [5/24] Building CXX object src/core/CMakeFiles/qgis_core.dir/__/__/external/lazperf/vlr.cpp.o
    ../external/lazperf/vlr.cpp: In constructor ‘lazperf::eb_vlr::eb_vlr(int)’:
    ../external/lazperf/vlr.cpp:329:18: warning: ‘void lazperf::eb_vlr::addField()’ is deprecated [-Wdeprecated-declarations]
      329 |         addField();
          |                  ^
    In file included from ../external/lazperf/vlr.cpp:37:
    ../external/lazperf/vlr.hpp:166:25: note: declared here
      166 |     [[deprecated]] void addField();
          |                         ^~~~~~~~
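One common way to resolve this kind of warning, sketched below, is to forward the deprecated public function to a private, non-deprecated helper that internal code calls instead. The class and member names here are hypothetical stand-ins, not lazperf's actual API.

```cpp
// Sketch of a pattern for keeping a deprecated public entry point without
// triggering -Wdeprecated-declarations inside the library itself: the
// deprecated function forwards to a private helper, and internal code
// calls the helper. All names here are illustrative.
class eb_vlr_like
{
public:
    [[deprecated("use a non-deprecated overload instead")]]
    void addField() { addFieldImpl(); }

    int fieldCount() const { return count_; }

private:
    void addFieldImpl() { ++count_; }  // internal callers use this; no warning
    int count_ = 0;
};
```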
    
    opened by wonder-sk 9
  • CMake: allow to add laz-perf with add_subdirectory()

    When the laz-perf tree is added through add_subdirectory(), CMake generation fails with "Run cmake from the top level of the source tree" due to https://github.com/hobu/laz-perf/blob/2e3c316248fa534cdeba1b47b2e9fe1a0ecf5dca/cpp/CMakeLists.txt#L1-L3

    Indeed, CMAKE_PROJECT_NAME always refers to the top-level project, not the closest one (which is PROJECT_NAME). I advise completely removing those three lines.

    opened by SpaceIm 6
  • portable_endian license

    Currently, the bundled portable_endian file does not carry a license. The version below, however, clearly states that it is in the public domain, which was confirmed by the author in https://github.com/panzi/mathfun/issues/1 . Perhaps it would be better to link to this file (and merge the Emscripten changes back?).

    https://gist.githubusercontent.com/panzi/6856583/raw/1eca2ab34f2301b9641aa73d1016b951fff3fc39/portable_endian.h

    opened by johanvdw 6
  • laz-perf (included in qgis 3.22) fails to build on NetBSD

    On NetBSD 9, I got complaints about redefinitions of many macros like le32toh and, finally, an undefined variable.

    Reading portable_endian.hpp, I see that NetBSD is treated like DragonFly and there is renaming, while on FreeBSD and OpenBSD sys/endian.h is simply included. Only NetBSD 8 and later matter, but NetBSD has had sys/endian.h since 1999, and it doesn't seem to have definitions for the target symbols in the DragonFly case.

    I'm guessing NetBSD was incorrectly assumed to be like DragonFly, when really all of them are similar except for DragonFly. The following diff (which the GitHub web UI refuses to attach) let the qgis build make vastly more progress (still running):

    --- portable_endian.hpp.orig    2022-01-14 07:06:34.000000000 -0500
    +++ portable_endian.hpp 2022-02-02 20:11:56.651221385 -0500
    @@ -43,11 +43,11 @@
     #   define __PDP_ENDIAN    PDP_ENDIAN
     **/
     
    -#elif defined(__OpenBSD__)|| defined(__FreeBSD__) 
    +#elif defined(__OpenBSD__)|| defined(__FreeBSD__) || defined(__NetBSD__)
     
     #   include <sys/endian.h>
     
    -#elif defined(__NetBSD__) || defined(__DragonFly__)
    +#elif defined(__DragonFly__)
     
     #   define be16toh betoh16
     #   define le16toh letoh16
    
    

    I'm happy to submit a PR if you are inclined to take it.

    opened by gdt 5
  • LAZ-perf or LAZ-zip for write

    Hi,

    I am new to these tools, but from reading the README I understand:

    • that LAZ-perf improves performance when reading LAZ, especially when reading from a browser.
    • it is not clear whether LAZ-perf's browser features are only available if the LAZ was originally written with LAZ-perf, or whether they also work with any LAZ.
    • it is not clear whether LAZ-perf is more performant than LASzip (when building a LAZ from a LAS, for example).

    Would it be possible to clarify this and maybe put these details into the README?

    Regards

    opened by julienlau 5
  • portable_endian.hpp includes windows.h - which tends to cause several issues in including files

    Including windows.h tends to break code, e.g.:

    • https://stackoverflow.com/questions/11544073/how-do-i-deal-with-the-max-macro-in-windows-h-colliding-with-max-in-std

    and often in ways that are not easily addressable:

    • https://developercommunity.visualstudio.com/t/error-c2872-byte-ambiguous-symbol/93889

    Trying to include laz-perf in PotreeConverter is a pretty difficult experience because of that. Would you accept a PR that moves the #include <winsock2.h> from portable_endian.hpp into a separate portable_endian.cpp file? As long as the include is in a cpp file, it won't cause issues.

    Right now I'm having success after replacing

     #       define htobe16 htons
     #       define htole16(x) (x)
     #       define be16toh ntohs
     #       define le16toh(x) (x)
    
     #       define htobe32 htonl
     #       define htole32(x) (x)
     #       define be32toh ntohl
     #       define le32toh(x) (x)
    
     #       define htobe64 htonll
     #       define htole64(x) (x)
     #       define be64toh ntohll
     #       define le64toh(x) (x)
    

    with

    inline uint16_t htobe16(uint16_t value);
    inline uint16_t htole16(uint16_t value) { return value; }
    inline uint16_t be16toh(uint16_t value);
    inline uint16_t le16toh(uint16_t value) { return value; }
    
    inline uint32_t htobe32(uint32_t value);
    inline uint32_t htole32(uint32_t value) { return value; }
    inline uint32_t be32toh(uint32_t value);
    inline uint32_t le32toh(uint32_t value) { return value; }
    
    inline uint64_t htobe64(uint64_t value);
    inline uint64_t htole64(uint64_t value) { return value; }
    inline uint64_t be64toh(uint64_t value);
    inline uint64_t le64toh(uint64_t value) { return value; }
    
    

    And then implementing the functions that rely on the windows api in portable_endian.cpp as follows:

    inline uint16_t htobe16(uint16_t value){
    	return htons(value);
    }
    
    inline uint16_t be16toh(uint16_t value){
    	return ntohs(value);
    }
    
    inline uint32_t htobe32(uint32_t value){
    	return htonl(value);
    }
    
    inline uint32_t be32toh(uint32_t value){
    	return ntohl(value);
    }
    
    inline uint64_t htobe64(uint64_t value){
    	return htonll(value);
    }
    
    inline uint64_t be64toh(uint64_t value){
    	return ntohll(value);
    }
    
    opened by m-schuetz 5
  • LAZPERF_EXPORT macro is wrong on Windows

    LAZPERF_EXPORT macro is defined in lazperf.h: https://github.com/hobu/laz-perf/blob/2e3c316248fa534cdeba1b47b2e9fe1a0ecf5dca/cpp/lazperf/lazperf.hpp#L42-L47

    It's broken on Windows.

    • If static, this macro should be empty (all OSes)
    • If shared, this macro should be:
      • Windows: __declspec(dllexport) at build time, and __declspec(dllimport) at consume time
      • other OSes: __attribute__((visibility ("default")))
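The bullets above can be sketched as the conventional export-macro pattern below. The guard names (LAZPERF_STATIC, lazperf_EXPORTS) are hypothetical stand-ins; a real build would define such symbols through CMake, and they are not necessarily the names the project uses.

```cpp
// Sketch of the conventional export-macro pattern described above.
// LAZPERF_STATIC and lazperf_EXPORTS are hypothetical guard names.
#if defined(LAZPERF_STATIC)
#  define LAZPERF_EXPORT                    // static library: empty on all OSes
#elif defined(_WIN32)
#  if defined(lazperf_EXPORTS)              // defined when building the DLL
#    define LAZPERF_EXPORT __declspec(dllexport)
#  else                                     // defined when consuming the DLL
#    define LAZPERF_EXPORT __declspec(dllimport)
#  endif
#else
#  define LAZPERF_EXPORT __attribute__((visibility("default")))
#endif

// Example use of the macro on an exported symbol.
LAZPERF_EXPORT int lazperfExportProbe() { return 42; }
```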
    opened by SpaceIm 5
  • Uncompressed / LAS output is broken

    ... due to a bug in writers.cpp

    This line https://github.com/hobuinc/laz-perf/blob/9048d0dc394465c07a95d0495178d976b733af6f/cpp/lazperf/writers.cpp#L178

    needs to move down one line, past the closing }, so that it applies to both compressed and uncompressed output.

    Right now when writing LAS files (by setting chunk size == 0 in the config) the point count in the header is always 0.
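The shape of the fix can be sketched as follows. This is an illustrative reduction, not the actual writers.cpp code: the point-count assignment sits after the compressed-only block so it runs for both output modes. All names here are hypothetical.

```cpp
#include <cstdint>

// Illustrative sketch of the control-flow fix described above: the
// header's point count must be written for both compressed and plain
// LAS output, so the assignment belongs after the closing brace of the
// compressed-only block. All names are hypothetical.
struct HeaderSketch { uint64_t pointCount = 0; };

inline void finishWrite(HeaderSketch& h, uint64_t written, bool compressed)
{
    if (compressed)
    {
        // ... compressed-only work (e.g. flushing the chunk table) ...
    }
    h.pointCount = written;  // now applies to both output modes
}

// Small helper so the behavior is easy to exercise.
inline uint64_t finishedCount(uint64_t written, bool compressed)
{
    HeaderSketch h;
    finishWrite(h, written, compressed);
    return h.pointCount;
}
```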

    opened by rafaelspring 0
  • Multi-threaded encoding/compression?

    I don't have any hands-on experience with laz-perf so this is more of a question than an issue.

    I'm interested in multi-threaded LAZ-writing and I do get the impression this API might facilitate multiple threads each encoding separate chunks, and a main thread "gathering" and outputting those in the appropriate order.

    Just curious whether there are any known obstacles to doing that, in the context of garden-variety Windows and Linux binaries.

    opened by nigels-com 1
  • Document how to decompress only some attributes in LAZ

    I have learned that it is possible to make laz-perf read only some attributes of a LAZ file, saving CPU time when reading data. It would be useful to have some documentation or a code sample showing how to actually do that...

    opened by wonder-sk 0
  • lazperf::reader objects refuse to open las format 1.0 and 1.1

    On line 159 of readers.cpp, in the function that reads the header, we have the following check:

    if (head12.version.minor < 2 || head12.version.minor > 4)
        return false;
    

    In other words, if the file isn't LAS/LAZ version 1.2 through 1.4, it gives up on even attempting to read the header.

    I assume the reason for this is that version 1.2 added the global encoding flags in bytes that were previously marked as reserved. Theoretically, a LAS 1.0 or 1.1 file that doesn't have those bytes zeroed out is an improperly formatted file. In practice, it seems to be industry standard to pretend that files recording their version as 1.0 or 1.1 are actually version 1.2. For example, I have a file that claims to be version 1.0 but has point format 1--a theoretically invalid combination. Yet in my testing, lidR, PDAL, and FUSION all open it without complaint.

    Removing the check forbidding versions 1.0-1.1 and rebuilding lazperf, I am able to read version 1.0 files without issue, including the invalid file mentioned above. I don't see anything that was changed in version 1.2 that makes it dangerous to read a version 1.0 or 1.1 header as if it were 1.2--1.2 just added more options in bytes that ought to be zeroed out in well-formatted 1.0 files. In practice, this is already happening when reading version 1.2 files--the global encoding bytes are being copied uncritically from the hard drive into memory, even though only one bit has any meaning in the 1.2 specification.

    If this solution is considered a bit too loose, I'd be willing to try my hand at a pull request to implement proper header10 and header11 support, which would skip the reserved bytes and not even copy them into memory (or always record them as 0). Assuming I'm not missing a good reason why those formats aren't supported by reader objects, that is.
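    The relaxed gate described above can be sketched in isolation (the `version_supported` helper is just an illustration of the proposed condition, not the actual code in readers.cpp):

    ```cpp
    #include <cassert>

    // Sketch of the version gate from readers.cpp, relaxed to accept
    // LAS minor versions 0-4 instead of rejecting anything below 2.
    bool version_supported(int minor)
    {
        return minor >= 0 && minor <= 4;
    }

    int main()
    {
        assert(version_supported(0));   // LAS 1.0 now accepted
        assert(version_supported(1));   // LAS 1.1 now accepted
        assert(version_supported(2));   // previously the lower bound
        assert(version_supported(4));
        assert(!version_supported(5));  // unknown versions still rejected
        return 0;
    }
    ```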

    opened by jontkane 1
  • decompressor doesn't know when it's done

    The stream API doesn't allow the decompressor to know when it has consumed a stream. It depends on the user knowing how many points are being decompressed.
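    In other words, the caller must drive the loop from an externally known point count (e.g. from the LAS header), roughly like this; `decompress_point` is a hypothetical stand-in for the real per-point call:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Hypothetical stand-in for the real per-point decompression call.
    int decompress_point(uint64_t i) { return static_cast<int>(i) * 2; }

    // The caller supplies the point count (e.g. from the LAS header);
    // the decompressor itself never signals end-of-stream.
    std::vector<int> decompress_all(uint64_t point_count)
    {
        std::vector<int> points;
        for (uint64_t i = 0; i < point_count; ++i)
            points.push_back(decompress_point(i));
        return points;
    }

    int main()
    {
        auto pts = decompress_all(3);
        assert(pts.size() == 3);   // exactly as many points as requested
        return 0;
    }
    ```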

    opened by abellgithub 3
Releases(3.2.0)
  • 3.2.0(Jul 21, 2022)

  • 3.1.0(Jul 20, 2022)

    • LASzip and laz-perf are now relicensed as APLv2. See https://github.com/LASzip/LASzip/pull/80 for details
    • Add non-stream VLR packing bbb31580fcb367788e23bb3469e2f2d0a4362d7e
    • Fix up mingw definitions https://github.com/hobuinc/laz-perf/pull/110 (thanks @xantares !)
    • Endian handling on BSD platforms #119
    • Do not install gtest stuff #123
    Source code(tar.gz)
    Source code(zip)
  • 3.0.0(Nov 17, 2021)

  • 2.1.0(Aug 31, 2021)

    New features

    • Support for variable-sized chunks was added (#92)
    • Added support to encode and decode data chunks without reference to a LAZ file.

    API Changes

    API changes in the support code are substantial.

    • lazperf::reader interface has been moved from "io.hpp" to "readers.hpp"
    • lazperf::writer interface has been moved from "io.hpp" to "writers.hpp"
    • Moved header support from "io.hpp" to "header.hpp"
    • Added chunk_compressor and chunk_decompressor to allow manipulation of LAZ data chunks without reference to a file.
    • Added writer::basic_file::newChunk() to create a new data chunk when writing points.
    • Added writer::basic_file::firstChunkOffset() to return an offset to the location where the first chunk should be written.
    • Header support for LAS minor versions 2-4 has been altered. The API works with a LAS 1.4 header, some fields of which may be zero if the file's minor version is 2 or 3.
    • VLR support has been modified to allow simple reading from or writing to an open stream.
    • All data is now properly set to little-endian when written and converted from little-endian when read if necessary.
    • Added ChunkDecompressor emscripten binding to provide access to the C++ chunk_decompressor class.

    Resolved Issues

    • An assertion that could be raised when encoding INT_MIN has been fixed. (#94)
    • An error in decoding some fields where no change existed in a chunk has been fixed. (#95)
    • An assertion raised when decoding files with more than one chunk has been fixed. (#96)
    • A bad check on compressor type in the LASzip VLR has been fixed. (#100)
    Source code(tar.gz)
    Source code(zip)
  • 2.0.5(Jun 15, 2021)

  • 2.0.4(Jun 14, 2021)

  • 2.0.3(Jun 9, 2021)

  • 2.0.2(Jun 9, 2021)

  • 2.0.1(Jun 7, 2021)

  • 2.0.0(May 14, 2021)

    • LAZ 1.4 read and write support
    • WASM updates and enhancements
    • Minor performance improvements
    • CMake configuration enhancements
    • Python support dropped - Use lazrs and laspy
    Source code(tar.gz)
    Source code(zip)
  • 1.5.0(Oct 2, 2020)

  • 1.4.4(Apr 16, 2020)

  • 1.4.0(Apr 15, 2020)

Owner
Howard Butler
Developer of @PDAL, @libLAS, @libspatialindex, and others.