A command line toolkit to generate maps, point clouds, 3D models and DEMs from drone, balloon or kite images. 📷

Overview

ODM Logo

An open source command line toolkit for processing aerial drone imagery. ODM turns simple 2D images into:

  • Classified Point Clouds
  • 3D Textured Models
  • Georeferenced Orthorectified Imagery
  • Georeferenced Digital Elevation Models


The application is available for Windows, Mac and Linux and works from the command line, making it ideal for power users, scripting, and integration with other software.

If you would rather not type commands in a shell and are looking for a friendly user interface, check out WebODM.

Quickstart

The easiest way to run ODM is via docker. To install docker, see docs.docker.com. Once you have docker installed and working, you can run ODM by placing some images (JPEGs or TIFFs) in a folder named “images” (for example C:\Users\youruser\datasets\project\images or /home/youruser/datasets/project/images) and running the following from a Command Prompt / Terminal:

# Windows
docker run -ti --rm -v c:/Users/youruser/datasets:/datasets opendronemap/odm --project-path /datasets project

# Mac/Linux
docker run -ti --rm -v /home/youruser/datasets:/datasets opendronemap/odm --project-path /datasets project

You can pass additional parameters by appending them to the command:

docker run -ti --rm -v /datasets:/datasets opendronemap/odm --project-path /datasets project [--additional --parameters --here]

For example, to generate a DSM (--dsm) and increase the orthophoto resolution (--orthophoto-resolution 2):

docker run -ti --rm -v /datasets:/datasets opendronemap/odm --project-path /datasets project --dsm --orthophoto-resolution 2
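
The full list of available options can be printed with --help; a quick sketch, using the same Docker image:

docker run -ti --rm opendronemap/odm --help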

Viewing Results

When the process finishes, the results will be organized as follows:

|-- images/
    |-- img-1234.jpg
    |-- ...
|-- opensfm/
    |-- see mapillary/opensfm repository for more info
|-- odm_meshing/
    |-- odm_mesh.ply                    # A 3D mesh
|-- odm_texturing/
    |-- odm_textured_model.obj          # Textured mesh
    |-- odm_textured_model_geo.obj      # Georeferenced textured mesh
|-- odm_georeferencing/
    |-- odm_georeferenced_model.laz     # LAZ format point cloud
|-- odm_orthophoto/
    |-- odm_orthophoto.tif              # Orthophoto GeoTiff

You can use the following free and open source software to open the files generated in ODM:

  • .tif (GeoTIFF): QGIS
  • .laz (Compressed LAS): CloudCompare
  • .obj (Wavefront OBJ), .ply (Stanford Triangle Format): MeshLab

Note! Opening the .tif files generated by ODM in programs such as Photoshop or GIMP might not work (they are GeoTIFFs, not plain TIFFs). Use QGIS instead.
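
If you have GDAL installed, you can also inspect the orthophoto's georeferencing metadata from the command line; a minimal sketch (gdalinfo ships with GDAL, not with ODM):

gdalinfo odm_orthophoto/odm_orthophoto.tif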

API

ODM can be made accessible from a network via NodeODM.
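
As a sketch, a processing node can be exposed on port 3000 with the NodeODM docker image (see the NodeODM repository for the authoritative instructions):

docker run -ti -p 3000:3000 opendronemap/nodeodm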

Documentation

See http://docs.opendronemap.org for tutorials and more guides.

Forum

We have a vibrant community forum. You can search it for issues you might be having with ODM and you can post questions there. We encourage users of ODM to participate in the forum and to engage with fellow drone mapping users.

Snap Package

ODM is now available as a Snap Package from the Snap Store. To install it, you may use the Snap Store (itself available as a Snap Package) or the command line:

sudo snap install opendronemap

To run, you will need a terminal window into which you can type:

opendronemap

# or

snap run opendronemap

# or

/snap/bin/opendronemap

Snap packages will be kept up-to-date automatically, so you don't need to update ODM manually.
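
As a sketch, assuming your images are in /home/youruser/datasets/project/images (the same layout as the docker Quickstart), the snap-installed command accepts the usual ODM arguments:

opendronemap --project-path /home/youruser/datasets project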

GPU Acceleration

ODM has support for doing SIFT feature extraction on a GPU, which is about 2x faster than the CPU on a typical consumer laptop. To use this feature, you need to use the opendronemap/odm:gpu docker image instead of opendronemap/odm and you need to pass the --gpus all flag:

docker run -ti --rm -v c:/Users/youruser/datasets:/datasets --gpus all opendronemap/odm:gpu --project-path /datasets project

When you run ODM, if the GPU is recognized, in the first few lines of output you should see:

[INFO]    Writing exif overrides
[INFO]    Maximum photo dimensions: 4000px
[INFO]    Found GPU device: Intel(R) OpenCL HD Graphics
[INFO]    Using GPU for extracting SIFT features

The SIFT GPU implementation is OpenCL-based, so it should work with most graphics cards (not just NVIDIA).

If you have an NVIDIA card, you can test that docker is recognizing the GPU by running:

docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi

If you see an output that looks like this:

Fri Jul 24 18:51:55 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |

You're in good shape!

See https://github.com/NVIDIA/nvidia-docker and https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker for information on docker/NVIDIA setup.
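
On Ubuntu, the steps from NVIDIA's install guide amount roughly to the following sketch (the package repository setup is covered in the linked documentation):

# after adding NVIDIA's package repository (see the linked guide)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker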

WSL or WSL2 Install

Note: This requires that you have installed WSL already by following the instructions on Microsoft's Website.

You can run ODM via WSL or WSL2 by downloading the rootfs.tar.gz file from the releases page on GitHub. Once you have the file saved to your Downloads folder in Windows, open a PowerShell or CMD window by right-clicking the Flag Menu (bottom left by default) and selecting "Windows PowerShell", or alternatively by using the Windows Terminal from the Windows Store.

Inside a PowerShell window, or Windows Terminal running PowerShell, type the following:

# PowerShell
wsl.exe --import ODM $env:APPDATA\ODM C:\path\to\your\Downloads\rootfs.tar.gz

Alternatively if you're using CMD.exe or the CMD support in Windows Terminal type:

# CMD
wsl.exe --import ODM %APPDATA%\ODM C:\path\to\your\Downloads\rootfs.tar.gz

In either case, make sure you replace C:\path\to\your\Downloads\rootfs.tar.gz with the actual path to your rootfs.tar.gz file.

This will save a new hard disk image to your Windows AppData folder at C:\Users\username\AppData\Roaming\ODM (where username is your username in Windows), and will set up a new WSL "distro" called ODM.

You may start the ODM distro by using the relevant option in the Windows Terminal (from the Windows Store) or by executing wsl.exe -d ODM in a PowerShell or CMD window.

ODM is installed to the distro's /code directory. You may execute it with:

/code/run.sh
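
You can also launch a run from Windows without opening an interactive shell first. This is a sketch and assumes your images are reachable at a /datasets path inside the distro; adjust the paths to your setup:

# PowerShell or CMD
wsl.exe -d ODM /code/run.sh --project-path /datasets project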

Updating ODM in WSL

The easiest way to update the installation of ODM is to download the new rootfs.tar.gz file and import it as another distro. You may then unregister the original instance the same way you delete an ODM WSL instance (see the next heading).
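
A minimal sketch of that update flow, assuming you import the new rootfs under a hypothetical name such as ODM2:

# PowerShell
wsl.exe --import ODM2 $env:APPDATA\ODM2 C:\path\to\your\Downloads\rootfs.tar.gz
wsl.exe --unregister ODM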

Deleting an ODM WSL instance

wsl.exe --unregister ODM

Finally, you'll want to delete the files by using your Windows File Manager (Explorer) to navigate to %APPDATA%, find the ODM directory, and delete it by dragging it to the recycle bin. To permanently delete it, empty the recycle bin.

If you installed to a different directory by changing the --import command you ran, you must use that directory name to delete the correct files. This is likely the case if you have multiple ODM installations or are updating an existing installation.
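
If you prefer the command line over Explorer, the leftover folder can also be removed from PowerShell; note that this sketch deletes the files immediately instead of sending them to the recycle bin:

# PowerShell
Remove-Item -Recurse -Force "$env:APPDATA\ODM"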

Native Install (Ubuntu 20.04)

You can run ODM natively on Ubuntu 20.04 LTS (although we don't recommend it); a condensed command sketch follows the numbered steps:

  1. Download the source from here
  2. Run bash configure.sh install
  3. Download a sample dataset from here (about 550MB) and extract it in /datasets/aukerman
  4. Run ./run.sh --project-path /datasets odm_data_aukerman
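
A condensed sketch of those steps, assuming you clone the source from the repository listed in the Citation section and have extracted the sample dataset as described above:

git clone https://github.com/OpenDroneMap/ODM
cd ODM
bash configure.sh install
./run.sh --project-path /datasets odm_data_aukerman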

Updating a native installation

When updating to a newer version of ODM, it is recommended that you run

bash configure.sh reinstall

to ensure all the dependent packages and modules get updated.

Build From Source

If you want to rebuild your own docker image (if you have changed the source code, for example), from the ODM folder you can type:

docker build -t my_odm_image --no-cache .

When building your own Docker image, if image size is of importance to you, you should use the --squash flag, like so:

docker build --squash -t my_odm_image .

This will clean up intermediate steps in the Docker build process, resulting in a significantly smaller image (about half the size).

Experimental flags need to be enabled in Docker to use the --squash flag. To enable this, insert the following into the file /etc/docker/daemon.json:

{
   "experimental": true
}

After this, you must restart docker.
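
On most systemd-based distributions, that amounts to (a sketch):

sudo systemctl restart docker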

Developers

Help improve our software! We welcome contributions from everyone, whether to add new features, improve speed, fix existing bugs or add support for more cameras. Check our code of conduct, the contributing guidelines and how decisions are made.

For Linux users, the easiest way to modify the software is to make sure docker is installed, clone the repository and then run from a shell:

$ DATA=/path/to/datasets ./start-dev-env.sh

Where /path/to/datasets is a directory where you can place test datasets (it can also point to an empty directory if you don't have test datasets).

Run configure to set up the required third party libraries:

(odmdev) [user:/code] master+* ± bash configure.sh reinstall

You can now make changes to the ODM source. When you are ready to test the changes you can simply invoke:

(odmdev) [user:/code] master+* ± ./run.sh --project-path /datasets mydataset

If you have questions, join the developer's chat at https://community.opendronemap.org/c/developers-chat/21

  1. Try to keep commits clean and simple
  2. Submit a pull request with detailed changes and test results
  3. Have fun!

Credits

ODM makes use of several libraries and other awesome open source projects to perform its tasks. Among them we'd like to highlight:

Citation

OpenDroneMap Authors. ODM - A command line toolkit to generate maps, point clouds, 3D models and DEMs from drone, balloon or kite images. OpenDroneMap/ODM GitHub Page, 2020; https://github.com/OpenDroneMap/ODM

Comments
  • Segfault while merging depthmaps

    Segfault while merging depthmaps

    Hi, I'm running the latest ODM Docker image (pulled last night) on a VPS running Debian with 24GB of RAM and 8 CPU cores. I'm currently trying to stitch together a survey that consists of around 270 photos, totaling 1-1.5GB. During processing, I get this:

    2017-03-10 13:48:42,183 Cleaning depthmap for image DJI_0062.JPG
    2017-03-10 13:48:43,404 Cleaning depthmap for image DJI_0021.JPG
    2017-03-10 13:48:44,682 Cleaning depthmap for image DJI_0060.JPG
    2017-03-10 13:48:46,072 Cleaning depthmap for image DJI_0162.JPG
    2017-03-10 13:48:47,415 Cleaning depthmap for image DJI_0236.JPG
    2017-03-10 13:48:52,428 Cleaning depthmap for image DJI_0050.JPG
    2017-03-10 13:48:53,772 Cleaning depthmap for image DJI_0052.JPG
    2017-03-10 13:48:55,157 Cleaning depthmap for image DJI_0228.JPG
    2017-03-10 13:48:56,464 Cleaning depthmap for image DJI_0087.JPG
    2017-03-10 13:48:57,821 Cleaning depthmap for image DJI_0124.JPG
    2017-03-10 13:48:59,103 Cleaning depthmap for image DJI_0118.JPG
    2017-03-10 13:49:00,459 Cleaning depthmap for image DJI_0079.JPG
    2017-03-10 13:49:01,845 Cleaning depthmap for image DJI_0241.JPG
    2017-03-10 13:49:06,852 Cleaning depthmap for image DJI_0051.JPG
    2017-03-10 13:49:08,309 Merging depthmaps
    Segmentation fault (core dumped)
    Traceback (most recent call last):
      File "/code/run.py", line 55, in <module>
        plasm.execute(niter=1)
      File "/code/scripts/opensfm.py", line 90, in process
        (context.pyopencv_path, context.opensfm_path, tree.opensfm))
      File "/code/opendm/system.py", line 28, in run
        raise Exception("Child returned {}".format(retcode))
    Exception: Child returned 139

    Any ideas? I'm fairly new to the project and I'm not sure where to dig in.

    troubleshooting 
    opened by bstempi 63
  • Align of resulting mosaic (in case of multispectral imaging)

    Align of resulting mosaic (in case of multispectral imaging)

    Each band of a multispectral sensor (e.g. MicaSense RedEdge, Sequoia) is taken with a separate lens, so each band is captured at a slightly different angle (and each band image is in a separate file). When the source images are merged into a multiband image, the bands are significantly shifted relative to each other. Therefore, each band must be processed separately when mosaicking. Unfortunately, when the resulting mosaics are merged, a similar shift is present (see the example image in the original issue). The possibility of aligning the resulting mosaics would be appreciated.

    Thanks.

    enhancement 
    opened by ivopavlik 59
  • Update OpenSfM

    Update OpenSfM

    This PR brings in the latest changes to OpenSfM, which among other things include reprojection derivatives that are now computed analytically instead of autodiff-ed (20%-45% speed-up on opensfm reconstruct).

    WIP

    enhancement 
    opened by pierotofy 45
  • Planar Reconstruction

    Planar Reconstruction

    This PR adds support for really fast planar reconstructions (e.g. agricultural fields)

    Requirements:

    • Constant flight altitude
    • Nadir-only camera shots
    • Single pass lawnmower pattern
    • Planar or mostly planar terrain
    • Single camera (no multi-camera support at this moment). Multispectral images are fine.

    Then to obtain an orthophoto really quickly, one can pass:

    --sfm-algorithm planar --matcher-neighbors 4 --fast-orthophoto

    If one needs a full 3D model from the mostly planar scene, one can omit the --fast-orthophoto flag and a full 3D reconstruction will still take place.

    Experimental! :boom:

    opened by pierotofy 44
  • Fewer than 3 GCPs have correspondences in the generated model.

    Fewer than 3 GCPs have correspondences in the generated model.

    I have been repeatedly encountering an issue with the GCPs when running ODM with the Toledo input and also with my own data. I have checked the GPS locations in the exif data from the images. The GCPs are in the right locations and the coordinates are in UTM with the correct header. But I always get the same error:

    Error in Georef: Fewer than 3 GCPs have correspondences in the generated model.
    [ERROR] Georeferencing failed.
    Traceback (most recent call last):
      File "/home/vman/vman/OpenDroneMap/scripts/odm_georeferencing.py", line 124, in process
        '-outputCoordFile {coords}'.format(**kwargs))
      File "/home/vman/vman/OpenDroneMap/opendm/system.py", line 28, in run
        raise Exception("Child returned {}".format(retcode))
    Exception: Child returned 1

    The odm_georeferencing_log.txt file says at the bottom:

    Successfully loaded /home/vman/vman/OpenDroneMap/tests/test_data/odm_texturing/odm_textured_model.obj.
    Error in Georef: Fewer than 3 GCPs have correspondences in the generated model.

    I have more than 3 GCPs in the gcp_list.txt file. Everything works fine by ignoring the gcp_list.txt file (--use-exif option).

    Is there still something I am overlooking?

    Thanks Volker

    bug 
    opened by vroeb 43
  • Fast Orthophoto via sparse reconstruction

    Fast Orthophoto via sparse reconstruction

    This PR adds support for a new --fast-orthophoto flag, which skips the opensfm dense reconstruction step and uses the sparse reconstruction to generate an orthophoto, while skipping the generation of the full 3D model (the poisson mesh). This is going to benefit users that just want an orthophoto and works especially well on flat areas such as farm fields.

    List of changes:

    • Added --fast-orthophoto, --mesh-neighbors and --mesh-resolution parameters.
    • Removed old 2.5D mesh module and related flags.
    • Wrote new 2.5D mesh program, based on VTK7. We cannot use the binaries that ship with Ubuntu because the parallel features, which are critical for good performance, are not available in the binary packages. VTK7 adds time to the compilation, but doesn't increase the docker image size. The 2.5D mesh works by interpolating a DSM using Shepard's method and performing a greedy terrain decimation, which allows us to retain the shape of buildings even with few points and to fill in gaps in the terrain. It's multithreaded, so meshing will be relatively fast, especially on machines with multiple cores.

    [Images omitted: example outputs, including a fast orthophoto]

    • Cleaned up code that wasn't being used anymore
    • Made leaner docker images; most of the files in /code/SuperBuild/src are not needed because they are installed in /code/SuperBuild/install (with a few exceptions)
    • Users can continue computing a DSM/DTM from the sparse point cloud when --fast-orthophoto is used.
    • Increased the min-num-features parameter default to 8000. This is both to help users get better orthophotos using the --fast-orthophoto flag and to help users get better overall results even when using the dense reconstruction. Often, 4000 isn't sufficient for areas of high vegetation, and new users seem confused. If this is not a good idea, please let me know.
    • Users can keep using the dense reconstruction for generating the orthophoto and they can choose to use the new 2.5D mesh module. The --use-25dmesh flag remains. When --fast-orthophoto is used, --use-25dmesh is implicitly set.

    I recommend testing also with the changes introduced here: https://github.com/OpenDroneMap/mvs-texturing/pull/1

    The PR is stable enough to be tested, but do not merge yet. I'd love to hear feedback and do some more testing, especially to make sure that nothing in the existing pipeline broke. 😄

    enhancement 
    opened by pierotofy 42
  • Does ODM support Tif format

    Does ODM support Tif format

    I often use a multi-spectral camera to capture images in the experiment field. So, I wonder if I can use Tif files with multiple bands, e.g. 10 bands in one Tif image, as input source images after some modifications. Is it a lot of work to make these modifications? Thank you.

    enhancement help wanted 
    opened by xialang2012 42
  • Add SLAM module

    Add SLAM module

    This PR adds a SLAM module to OpenDroneMap so that it is possible to use videos as input instead of still images. It uses the open-source library ORB_SLAM2 to compute the camera trajectory and then continues with PMVS and the rest of the pipeline.

    Building the SLAM module is set as an option in the CMake files and the default is not to build it.

    Here's a short guide on how to use it

    opened by paulinus 41
  • Generating a footprint and average % overlap

    Generating a footprint and average % overlap

    At some point during reconstruction, is it possible to produce an estimate of overlap or generate something like this:

    [example footprint image]

    I think it would help with benchmarking and being able to discriminate between bad runs and bad photosets.

    enhancement help wanted 
    opened by dakotabenjamin 41
  • OpenCV-3.3.1 w/ nvidia/cuda:8.0-devel-ubuntu16.04 image for GPU processing tests

    OpenCV-3.3.1 w/ nvidia/cuda:8.0-devel-ubuntu16.04 image for GPU processing tests

    OpenCV 2.4.xx with CUDA does not recognize newer classes of NVIDIA GPUs when building (only up to Kepler). I need Pascal, and therefore need to build with OpenCV 3.3.0.

    Updated SuperBuild/CMakeLists.txt & External-xxxx.cmake to the newest versions:

    Ecto --> 0.6.12
    OpenCV 2.4.11 --> 3.3.0
    PCL 1.8.0 --> 1.8.1

    On host:

    $ docker build --tag gei/opendronemap:cuda-8.0-opencv-3.3.0

    Builds successfully through [rest of Dockerfile is commented out]:

    $ cd SuperBuild/build && cmake .. && make -j20

    no errors

    $ docker run -it --name odm-cuda-8.0-opencv-3.3.0 gei/opendronemap:cuda-8.0-opencv-3.3.0 bash

    In container -> moving up to next Dockerfile step:

    $ cd ../.. && mkdir build && cd build && cmake ..

    a few warnings, but:

    -- Configuring done
    -- Generating done
    -- Build files have been written to: /code/build

    $ make -j20

    throws errors at odm_texturing:

    [ 86%] Linking CXX executable ../../bin/odm_meshing
    [ 86%] Built target odm_meshing
    [ 89%] Linking CXX executable ../../bin/odm_texturing
    CMakeFiles/odm_texturing.dir/src/OdmTexturing.cpp.o: In function `OdmTexturing::loadCameras()':
    OdmTexturing.cpp:(.text+0x2a3b): undefined reference to `cv::imread(cv::String const&, int)'
    CMakeFiles/odm_texturing.dir/src/OdmTexturing.cpp.o: In function `OdmTexturing::createTextures()':
    OdmTexturing.cpp:(.text+0x84ef): undefined reference to `cv::imread(cv::String const&, int)'
    OdmTexturing.cpp:(.text+0x8a43): undefined reference to `cv::imwrite(cv::String const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
    OdmTexturing.cpp:(.text+0x8d35): undefined reference to `cv::imwrite(cv::String const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
    collect2: error: ld returned 1 exit status
    modules/odm_texturing/CMakeFiles/odm_texturing.dir/build.make:479: recipe for target 'bin/odm_texturing' failed
    make[2]: *** [bin/odm_texturing] Error 1
    CMakeFiles/Makefile2:338: recipe for target 'modules/odm_texturing/CMakeFiles/odm_texturing.dir/all' failed
    make[1]: *** [modules/odm_texturing/CMakeFiles/odm_texturing.dir/all] Error 2
    make[1]: *** Waiting for unfinished jobs....
    [ 93%] Linking CXX executable ../../bin/odm_orthophoto
    CMakeFiles/odm_orthophoto.dir/src/OdmOrthoPhoto.cpp.o: In function `OdmOrthoPhoto::createOrthoPhoto()':
    OdmOrthoPhoto.cpp:(.text+0x916c): undefined reference to `cv::imread(cv::String const&, int)'
    OdmOrthoPhoto.cpp:(.text+0x96bb): undefined reference to `cv::imwrite(cv::String const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
    collect2: error: ld returned 1 exit status
    modules/odm_orthophoto/CMakeFiles/odm_orthophoto.dir/build.make:453: recipe for target 'bin/odm_orthophoto' failed
    make[2]: *** [bin/odm_orthophoto] Error 1
    CMakeFiles/Makefile2:283: recipe for target 'modules/odm_orthophoto/CMakeFiles/odm_orthophoto.dir/all' failed
    make[1]: *** [modules/odm_orthophoto/CMakeFiles/odm_orthophoto.dir/all] Error 2
    [ 96%] Linking CXX executable ../../bin/odm_25dmeshing
    [ 96%] Built target odm_25dmeshing
    [100%] Linking CXX executable ../../bin/odm_georef
    CMakeFiles/odm_georef.dir/src/Georef.cpp.o: In function `Georef::performGeoreferencingWithGCP()':
    Georef.cpp:(.text+0xaec2): undefined reference to `cv::imread(cv::String const&, int)'
    collect2: error: ld returned 1 exit status
    modules/odm_georef/CMakeFiles/odm_georef.dir/build.make:506: recipe for target 'bin/odm_georef' failed
    make[2]: *** [bin/odm_georef] Error 1
    CMakeFiles/Makefile2:173: recipe for target 'modules/odm_georef/CMakeFiles/odm_georef.dir/all' failed
    make[1]: *** [modules/odm_georef/CMakeFiles/odm_georef.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    Ideas? I'm thinking that the ODM script calls may need to be tuned for the newer versions of the software?

    opened by PeterSprague 38
  • Python-port: odm_georeferencing, odm_orthophoto

    Python-port: odm_georeferencing, odm_orthophoto

    Please assist...

    ...run.py with --use-opensfm True and --odm_georeferencingGcp False produces no orthophoto. The arguments do produce .ply meshes and texturing with mosaicked photos on top, though. The odm_georeferencing folder under results also then contains no files. Also ran ./install.sh recently through python-port.

    Terminal output shows that I must specify ...Gcp False even though I did include the argument.

    Run from Ubuntu 14

    opened by rion-saeon 38
  • gcp_list.txt does not work when JPEG file names have spaces

    gcp_list.txt does not work when JPEG file names have spaces

    How did you install ODM? (Docker, installer, natively, ...)?

    Docker Desktop downloaded and drag-installed, and WebODM cloned from github on 2022-12-22 onto macOS Intel running Ventura 13.0.1.

    What is the problem?

    WebODM does not work when you use gcp_list.txt and when there is a space in the names of the JPEG files.

    What should be the expected behavior? If this is a feature request, please describe in detail the changes you think should be made to the code, citing files and lines where changes should be made, if possible.

    I generate the gcp_list.txt using GCP Editor Pro.app. I was able to stitch my files (e.g., into an orthophoto) as long as I did not include GCPs. But when I included GCPs I got the error "Cannot process dataset" about 1 minute into the task.

    I got the same error processing locally and on WebODM Lightning.

    I was able to fix this error by renaming all my JPEGs to eliminate space characters, changing, e.g., 2022-09-26 15-24-47 DJI_0293.JPG to 2022-09-26_15-24-47_DJI_0293.JPG, and making the same change in gcp_list.txt.

    I did not look at the code, but it seems likely that the code for parsing gcp_list.txt uses both spaces and tabs as column delimiters. Since GCP Editor Pro.app only outputs the file with tabs as a delimiter, I suggest that it is not a good idea to use spaces when parsing this file.

    How can we reproduce this? What steps did you do to trigger the problem? If this is an issue with processing a dataset, YOU MUST include a copy of your dataset uploaded on Google Drive or Dropbox (otherwise we cannot reproduce this).

    The task settings are:

    [screenshot of task settings]

    The files are at https://www.dropbox.com/sh/h1xfba79hby7e3k/AAD0oDF8CEnYh4AvLs_tUS-la?dl=0

    bug 
    opened by alanterra 1
  • Possible regression in OpenMVS, geometrically consistent views.

    Possible regression in OpenMVS, geometrically consistent views.

    Here is the failure I am seeing. It's not the most enjoyable test dataset with 2k camera positions, but happy to share as needed.

    Ubuntu 20.04, docker install, FWIW.

    Geometric-consistent estimated depth-maps 1448 (72.11%, 27m13s, ETA 10m)...
    Geometric-consistent estimated depth-maps 2008 (100%, 27m13s529ms)
    
    ===== Dumping Info for Geeks (developers need this to fix bugs) =====
    Child returned 1
    Traceback (most recent call last):
    File "/code/stages/odm_app.py", line 81, in execute
    self.first_stage.run()
    File "/code/opendm/types.py", line 386, in run
    self.next_stage.run(outputs)
    File "/code/opendm/types.py", line 386, in run
    self.next_stage.run(outputs)
    File "/code/opendm/types.py", line 386, in run
    self.next_stage.run(outputs)
    [Previous line repeated 1 more time]
    File "/code/opendm/types.py", line 365, in run
    self.process(self.args, outputs)
    File "/code/stages/openmvs.py", line 117, in process
    raise e
    File "/code/stages/openmvs.py", line 103, in process
    run_densify()
    File "/code/stages/openmvs.py", line 99, in run_densify
    system.run('"%s" "%s" %s' % (context.omvs_densify_path,
    File "/code/opendm/system.py", line 109, in run
    raise SubprocessException("Child returned {}".format(retcode), retcode)
    opendm.system.SubprocessException: Child returned 1
    
    ===== Done, human-readable information to follow... =====
    
    [ERROR]   Uh oh! Processing stopped because of strange values in the reconstruction. This is often a sign that the input data has some issues or the software cannot deal with it. Have you followed best practices for data acquisition? See https://docs.opendronemap.org/flying/
    
    bug 
    opened by smathermather 12
  • WebODM 1.9.17 (Engine 3.0.2): "Cannot process dataset"

    WebODM 1.9.17 (Engine 3.0.2): "Cannot process dataset"

    How did you install ODM? (Docker, installer, natively, ...)?

    Manual install (Docker version 18.06.1-ce-win73), Windows 10 Pro

    What is the problem?

    I've installed and used WebODM for 2 years with no problems. Yesterday, my laptop suddenly could not connect to localhost:8000 after "./webodm.sh start", so I could not log in.

    So, I ran ./webodm.sh start && ./webodm.sh resetadminpassword newpass and was able to log in. Then I got an error I had never seen before, "Cannot process dataset", with the Task Output below:

    Traceback (most recent call last):
      File "/code/run.py", line 18, in <module>
        from stages.odm_app import ODMApp
      File "/code/stages/odm_app.py", line 14, in <module>
        from stages.odm_georeferencing import ODMGeoreferencingStage
      File "/code/stages/odm_georeferencing.py", line 22, in <module>
        from opendm.align import compute_alignment_matrix, transform_point_cloud, transform_obj
      File "/code/opendm/align.py", line 4, in <module>
        import codem
      File "/usr/local/lib/python3.9/dist-packages/codem/__init__.py", line 5, in <module>
        from codem.main import apply_registration
      File "/usr/local/lib/python3.9/dist-packages/codem/main.py", line 28, in <module>
        from codem.preprocessing.preprocess import GeoData
      File "/usr/local/lib/python3.9/dist-packages/codem/preprocessing/preprocess.py", line 36, in <module>
        import pdal
      File "/code/SuperBuild/install/lib/python3.8/dist-packages/pdal/__init__.py", line 8, in <module>
        inject_pdal_drivers()
      File "/code/SuperBuild/install/lib/python3.8/dist-packages/pdal/drivers.py", line 66, in inject_pdal_drivers
        drivers = libpdalpython.getDrivers()
    RuntimeError: filesystem error: cannot make canonical path: Operation not permitted [.]

    What should be the expected behavior? If this is a feature request, please describe in detail the changes you think should be made to the code, citing files and lines where changes should be made, if possible.

    I want to revert to previous stable state.

    How can we reproduce this? What steps did you do to trigger the problem? If this is an issue with processing a dataset, YOU MUST include a copy of your dataset uploaded on Google Drive or Dropbox (otherwise we cannot reproduce this).

    Another desktop computer of mine with WebODM 1.8.1 installed can process the dataset. Is the trigger WebODM 1.9.17 (Engine 3.0.2)?

    bug 
    opened by horsehaircrab 4
  • Windows installation in non-default path causes error

    Windows installation in non-default path causes error

    Installed at a non-default path (in my user dir) and tried to run run.bat, but it failed because it couldn't find geos_c.dll at ./venv/Library/bin/geos_c.dll. Copying it from the shapely site-packages to that location solved it, though.

    help wanted possible bug 
    opened by pierotofy 0
  • Added video2dataset module

    Added video2dataset module

    This PR adds the module for creating a dataset from a video. The algorithm used is able to discard out-of-focus frames and frames that are too similar to one another.

    Input parameters:

    • input = path to input video file
    • output = path to output directory
    • start = start frame index
    • end = end frame index
    • output_resolution = Override output resolution (ex. 640x480)
    • blur_percentage = discard the lowest X percent of frames based on blur score (allowed values from 0.0 to 1.0)
    • blur_threshold = blur measures that fall below this value will be considered 'blurry' (mutually exclusive with -bp)
    • distance_threshold = distance measures that fall below this value will be considered 'similar'
    • frame_format = frame format (jpg, png, tiff, etc.)
    • stats_file = Save statistics to csv file
    • internal_width = We will resize the image to this width before processing
    • internal_height = We will resize the image to this height before processing
    opened by HeDo88TH 6
  • Possible Bug: QHull - did int overflow due to high-S?

    Possible Bug: QHull - did int overflow due to high-S?

    Dataset: https://hub.dronedb.app/r/saijinnaib/gordon-11365

    Thread: https://community.opendronemap.org/t/cannot-process-dataset-did-int-overflow-due-to-high-d/11365?u=saijin_naib

    Possibly related to Issue #1564

    possible bug 
    opened by Saijin-Naib 0