An open autonomous driving platform

Overview



We choose to go to the moon in this decade and do the other things,

not because they are easy, but because they are hard.

-- John F. Kennedy, 1962

Welcome to Apollo's GitHub page!

Apollo is a high-performance, flexible architecture that accelerates the development, testing, and deployment of autonomous vehicles.

For business and partnership inquiries, please visit our website.

Table of Contents

  1. Introduction
  2. Prerequisites
  3. Individual Versions
  4. Architecture
  5. Installation
  6. Quick Starts
  7. Documents

Introduction

Apollo is loaded with new modules and features, but it needs to be calibrated and configured properly before you take it for a spin. Please review the prerequisites and installation steps in detail to ensure that you are well equipped to build and launch Apollo. You can also check out Apollo's architecture overview for a deeper understanding of Apollo's core technology and platform.

Prerequisites

[New 2021-01] The Apollo platform (stable version) is now upgraded with software packages and library dependencies of newer versions including:

  1. CUDA upgraded to version 11.1 to support Nvidia Ampere (30x0 series) GPUs, with NVIDIA driver >= 455.32
  2. LibTorch (both CPU and GPU version) bumped to version 1.7.0 accordingly.

We do not expect any disruption to your current work, but to ease your migration you will need to:

  1. Update NVIDIA driver on your host to version >= 455.32. (Web link)
  2. Pull the latest code, then run the following commands after restarting and logging into the Apollo Development container:
# Remove Bazel output of previous builds
rm -rf /apollo/.cache/{bazel,build,repos}
# Re-configure bazelrc.
./apollo.sh config --noninteractive

  • A vehicle equipped with a by-wire system, including but not limited to brake-by-wire, steering-by-wire, throttle-by-wire, and shift-by-wire (Apollo is currently tested on the Lincoln MKZ)

  • A machine with an 8-core processor and 16 GB of memory at minimum

  • An NVIDIA Turing GPU is strongly recommended

  • Ubuntu 18.04

  • NVIDIA driver version 455.32.00 and above (Web link)

  • Docker-CE version 19.03 and above (Official doc)

  • NVIDIA Container Toolkit (Official doc)
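The driver and Docker version requirements above can be sanity-checked on the host before you install anything. A minimal sketch (the `version_ge` helper and its messages are our own illustration, not an official Apollo script):

```shell
#!/bin/sh
# Sketch: check host prerequisites for Apollo (not an official script).

# version_ge A B: succeeds if version A >= version B (relies on sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# NVIDIA driver >= 455.32
if command -v nvidia-smi >/dev/null 2>&1; then
  driver="$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)"
  if version_ge "$driver" "455.32"; then
    echo "NVIDIA driver $driver: OK"
  else
    echo "NVIDIA driver $driver: please upgrade to >= 455.32"
  fi
else
  echo "nvidia-smi not found: install the NVIDIA driver first"
fi

# Docker-CE >= 19.03
if command -v docker >/dev/null 2>&1; then
  docker_ver="$(docker version --format '{{.Client.Version}}' 2>/dev/null)"
  if version_ge "$docker_ver" "19.03"; then
    echo "Docker $docker_ver: OK"
  else
    echo "Docker $docker_ver: please upgrade to >= 19.03"
  fi
else
  echo "docker not found: install Docker-CE first"
fi
```

The `sort -V` trick compares dotted version strings numerically, which keeps the helper dependency-free on any modern Linux host.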

Please note: for your safety and the safety of those around you, it is recommended that you install Apollo versions in order, starting with 1.0 and progressing to whichever version you would like to test. This lets you confirm that individual hardware components and modules are functioning correctly, and clear each version's test cases, before moving on to a higher and more capable version.

Individual Versions:

The following diagram highlights the scope and features of each Apollo release:

Apollo 1.0:

Apollo 1.0, also referred to as Automatic GPS Waypoint Following, works in an enclosed venue such as a test track or parking lot. This installation is necessary to ensure that Apollo works perfectly with your vehicle. The diagram below lists the various modules in Apollo 1.0.

Apollo 1.5:

Apollo 1.5 is meant for fixed-lane cruising. With the addition of LiDAR, vehicles with this version have better perception of their surroundings and can better map their current position and plan their trajectories for safer maneuvering in their lane. Please note, the modules highlighted in Yellow are additions or upgrades for version 1.5.

Apollo 2.0:

Apollo 2.0 supports vehicles autonomously driving on simple urban roads. Vehicles are able to cruise on roads safely, avoid collisions with obstacles, stop at traffic lights, and change lanes if needed to reach their destination. Please note, the modules highlighted in Red are additions or upgrades for version 2.0.

Apollo 2.5:

Apollo 2.5 allows the vehicle to autonomously run on geo-fenced highways with a camera for obstacle detection. Vehicles are able to maintain lane control, cruise and avoid collisions with vehicles ahead of them.

Please note: if you need to test Apollo 2.5, please seek the help of the Apollo Engineering team for safety purposes. Your safety is our #1 priority, and we want to ensure Apollo 2.5 is integrated correctly with your vehicle before you hit the road.

Apollo 3.0:

Apollo 3.0's primary focus is to provide a platform for developers to build upon in a closed venue low-speed environment. Vehicles are able to maintain lane control, cruise and avoid collisions with vehicles ahead of them.

Apollo 3.5:

Apollo 3.5 is capable of navigating through complex driving scenarios such as residential and downtown areas. The car now has 360-degree visibility, along with upgraded perception algorithms to handle the changing conditions of urban roads, making the car more secure and aware. Scenario-based planning can navigate through complex scenarios, including unprotected turns and narrow streets often found in residential areas and roads with stop signs.

Apollo 5.0:

Apollo 5.0 is an effort to support volume production for Geo-Fenced Autonomous Driving. The car now has 360-degree visibility, along with an upgraded perception deep learning model to handle the changing conditions of complex road scenarios, making the car more secure and aware. Scenario-based planning has been enhanced to support additional scenarios such as pull over and crossing bare intersections.

Apollo 5.5:

Apollo 5.5 enhances the complex urban road autonomous driving capabilities of previous Apollo releases by introducing curb-to-curb driving support. With this new addition, Apollo is now a leap closer to fully autonomous urban road driving. The car has complete 360-degree visibility, along with an upgraded perception deep learning model and a brand-new prediction model to handle the changing conditions of complex road and junction scenarios, making the car more secure and aware.

Apollo 6.0:

Apollo 6.0 incorporates new deep learning models to enhance the capabilities for certain Apollo modules. This version works seamlessly with new additions of data pipeline services to better serve Apollo developers. Apollo 6.0 is also the first version to integrate certain features as a demonstration of our continuous exploration and experimentation efforts towards driverless technology.

Architecture

  • Hardware/ Vehicle Overview

  • Hardware Connection Overview

  • Software Overview

Installation

Congratulations! You have successfully built out Apollo without hardware. If you do have a vehicle and hardware setup for a particular version, please pick the Quickstart guide most relevant to your setup:
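For orientation, the source-build flow that the Quickstart guides walk through looks roughly like the sketch below. The script names are those used in the Apollo repository; the `run_step` guard is our own addition so the sketch degrades gracefully when run outside an Apollo checkout:

```shell
#!/bin/sh
# Rough sketch of the Apollo source-build flow (see the Quickstart guide
# for your release; this is not an official installer).

# run_step SCRIPT [ARGS...]: run a repo script if present, otherwise skip.
run_step() {
  if [ -f "$1" ]; then
    bash "$@"
  else
    echo "skipped: $1 (run this from the root of an Apollo checkout)"
  fi
}

run_step docker/scripts/dev_start.sh   # host: start the dev container
run_step docker/scripts/dev_into.sh    # host: enter the dev container
run_step apollo.sh build               # container: build Apollo
run_step scripts/bootstrap.sh          # container: launch Dreamview
# Dreamview is then served at http://localhost:8888
```

Consult the Quickstart guide for your version before running any of these on a vehicle; GPU builds and hardware bring-up involve additional flags and calibration steps not shown here.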

Quick Starts:

Documents

  • Technical Tutorials: Everything you need to know about Apollo. Written as individual versions with links to every document related to that version.

  • How-To Guides: Brief technical solutions to common problems that developers face during the installation and use of the Apollo platform

  • Specs: A Deep dive into Apollo's Hardware and Software specifications (only recommended for expert level developers that have successfully installed and launched Apollo)

  • FAQs

Questions

You are welcome to submit questions and bug reports as GitHub Issues.

Copyright and License

Apollo is provided under the Apache-2.0 license.

Disclaimer

The Apollo open-source platform provides only the source code for models, algorithms, and processes; a cybersecurity defense strategy will be integrated during deployment for commercialization and productization.

Please refer to the Disclaimer of Apollo in Apollo's official website.

Connect with us

Releases

  • v8.0.0 (Dec 25, 2022)

    Apollo 8.0 is an effort to provide an extensible software framework and a complete development cycle for autonomous driving developers. Apollo 8.0 introduces easily reused Packages to organize software modules, and integrates the whole perception development process by combining a model training service, a model deployment tool, and an end-to-end visual validation tool. Another 3 new deep learning models are incorporated in Apollo 8.0 for the perception module. The simulation service is upgraded by integrating a local simulator into Dreamview to provide a powerful debugging tool for PnC developers.

    Major Features and Improvements

    • Reusable Software Package
      • Reorganize the modules based on Package to provide the functionality in an easy-to-consume manner
      • Fast installation experience based on Package, refer to Installation - Package Method
      • Support customizing and sharing Package
    • Brand New Deep Learning Models
      • CenterPoint, center-based two-stage 3D obstacle detection model
      • CaDDN, camera obstacle detection model
      • BEV PETR, camera obstacle detection model
    • Complete Perception Development Process
      • Support Paddle3D to provide Model Training service
      • Provide model deployment tool by normalizing the model meta.
      • Provide visual validation tool in Dreamview
    • Upgraded PnC Simulation Service
      • Provide PnC debug tool by integrating local simulator in Dreamview
      • Support scenario editing online and download in Dreamview

    [Note] All models and methodologies included in Apollo 8.0 are for research purposes only. Productized and commercial uses of these models are NOT encouraged, and any such use is at your own risk. Please exercise caution and try Apollo 8.0 only with sufficient safety protection mechanisms in place. Your feedback is highly appreciated and helps us continuously improve the models.

  • v7.0.0(Dec 28, 2021)

    Apollo 7.0 incorporates 3 brand-new deep learning models to enhance the capabilities of the Apollo Perception and Prediction modules. Apollo Studio is introduced in this version and, combined with Data Pipeline, provides a one-stop online development platform to better serve Apollo developers. Apollo 7.0 also publishes a PnC reinforcement learning model training and simulation evaluation service built on the previous simulation service.

    Major Features and Improvements

    • Brand New Deep Learning Models
      • Mask-Pillars obstacle detection model based on PointPillars
      • Inter-TNT prediction model based on interactive prediction & planning evaluator
      • Camera obstacle detection model based on SMOKE
    • Apollo Studio Services
      • Practice environment service
      • Vehicle management service
    • PnC Reinforcement Learning Services
      • Smart training and evaluation service
      • Extension interface
    • Upgraded Perception Module Code Structure

    [Note] All models and methodologies included in Apollo 7.0 are for research purposes only. Productized and commercial uses of these models are NOT encouraged, and any such use is at your own risk. Please exercise caution and try Apollo 7.0 only with sufficient safety protection mechanisms in place. Your feedback is highly appreciated and helps us continuously improve the models.

  • v6.0.0(Sep 21, 2020)

    Apollo 6.0 incorporates new deep learning models to enhance the capabilities for certain Apollo modules. This version works seamlessly with new additions of data pipeline services to better serve Apollo developers. Apollo 6.0 is also the first version to integrate certain features as a demonstration of our continuous exploration and experimentation efforts towards driverless technology.

    Major Features and Improvements

    • Upgraded Deep Learning Models
      • PointPillars based obstacle detection model
      • Semantic map based pedestrian prediction model
      • Learning based trajectory planning model
    • Brand New Data Pipeline Services
      • Low speed obstacle prediction model training service with semantic map support
      • PointPillars based obstacle detection model training service
      • Control profiling service
      • Vehicle dynamic model training service
      • Open space planner profiling service
      • Complete control parameter auto-tune service
    • Driverless Research
      • Remote control interface with DreamView integration
      • Audio based emergency vehicle detection system
    • Upgraded dev environment including build and dependency updates

    [Note] All models and methodologies included in Apollo 6.0 are for research purposes only. Productized and commercial uses of these models are NOT encouraged, and any such use is at your own risk. Please exercise caution and try Apollo 6.0 only with sufficient safety protection mechanisms in place. Your feedback is highly appreciated and helps us continuously improve the models.

  • v5.5.0(Jan 6, 2020)

    Apollo 5.5 enhances the complex urban road autonomous driving capabilities of previous Apollo releases by introducing curb-to-curb driving support. With this new addition, Apollo is now a leap closer to fully autonomous urban road driving. The car has complete 360-degree visibility, along with an upgraded perception deep learning model and a brand-new prediction model to handle the changing conditions of complex road and junction scenarios, making the car more secure and aware. New Planning scenarios have been introduced to support curb-side functionality.

    Major Features And Improvements

    • Brand new Data Pipeline Service
      • Sensor Calibration service
    • Brand new module - Storytelling
    • Scenario-based planning with new planning scenarios to support curb-to-curb driving
      • Park-and-go
      • Emergency
    • Prediction Model - Caution Obstacle
      • Semantic LSTM evaluator
      • Extrapolation predictor
    • Control module
      • Model Reference Adaptive Control (MRAC)
      • Control profiling service
    • Simulation scenarios
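
    Model Reference Adaptive Control adjusts a controller gain online so the plant tracks a reference model. As a rough illustration of the idea only — a toy first-order loop using the classic MIT rule, with made-up parameters and no relation to Apollo's actual MRAC implementation — it can be sketched as:

    ```python
    # Toy MRAC sketch (illustrative only, NOT Apollo's controller).
    # Plant:           dy/dt  = -a*y + b*u        (b unknown to the controller)
    # Reference model: dym/dt = -am*ym + am*r     (desired closed-loop behavior)
    # Control law:     u = theta * r, with theta adapted by the MIT rule
    # so the tracking error e = y - ym is driven to zero.
    def simulate_mrac(a=1.0, b=2.0, am=2.0, gamma=0.5, r=1.0, dt=0.01, steps=5000):
        y = ym = theta = 0.0
        for _ in range(steps):
            u = theta * r                    # adaptive feedforward command
            e = y - ym                       # tracking error vs. reference model
            theta -= gamma * e * ym * dt     # MIT rule: dtheta/dt = -gamma*e*ym
            y += (-a * y + b * u) * dt       # plant, forward Euler
            ym += (-am * ym + am * r) * dt   # reference model, forward Euler
        return y, ym, theta
    ```

    In steady state the adaptation drives theta toward a/b (0.5 here), at which point the plant output matches the reference model's.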

    Autonomous Drive Capabilities

    Vehicles with this version can drive autonomously in complex urban road conditions, including both residential and downtown areas. BE CAUTIOUS WHEN DRIVING AUTONOMOUSLY, ESPECIALLY AT NIGHT OR IN POOR VISIBILITY CONDITIONS. URBAN DRIVING INVOLVES NAVIGATING HIGH-RISK ZONES SUCH AS SCHOOLS. PLEASE TEST APOLLO 5.5 WITH SUPPORT FROM THE APOLLO ENGINEERING TEAM, AND AVOID DRIVING THE VEHICLE ON HIGHWAYS OR AT SPEEDS ABOVE OUR SUPPORTED THRESHOLD.

    Source code(tar.gz)
    Source code(zip)
  • v5.0.0(Jun 29, 2019)

    Apollo 5.0 is an effort to support volume production for Geo-Fenced Autonomous Driving. The car now has 360-degree visibility, along with an upgraded perception deep learning model to handle the changing conditions of complex road scenarios, making the car more secure and aware. Scenario-based planning has been enhanced to support additional scenarios such as pull over and crossing bare intersections.

    Major Features And Improvements

    • Brand new Data Pipeline Service
      • Vehicle Calibration
    • New Perception algorithms
    • Sensor Calibration Service
    • Scenario-based planning with a new planning algorithm, Open Space Planner, and new scenarios supported
      • Intersection - STOP Sign, Traffic Light, Bare Intersection
      • Park - Valet, Pull Over
    • Map Data Verification tool
    • Prediction Evaluators
    • Simulation web platform - Dreamland
      • Scenario Editor
      • Control-in-loop Simulation
    • Cyber RT runtime framework
      • Official ARM CPU support with full docker dev environment support: https://github.com/ApolloAuto/apollo/blob/master/docs/cyber/CyberRT_Docker.md
      • Python language support for full set of Cyber RT API
      • Cyber RT API website goes online: https://cyber-rt.readthedocs.io/en/latest/

    Autonomous Drive Capabilities

    Vehicles with this version can drive autonomously in complex urban road conditions, including both residential and downtown areas. BE CAUTIOUS WHEN DRIVING AUTONOMOUSLY, ESPECIALLY AT NIGHT OR IN POOR VISIBILITY CONDITIONS. URBAN DRIVING INVOLVES NAVIGATING HIGH-RISK ZONES SUCH AS SCHOOLS. PLEASE TEST APOLLO 5.0 WITH SUPPORT FROM THE APOLLO ENGINEERING TEAM, AND AVOID DRIVING THE VEHICLE ON HIGHWAYS OR AT SPEEDS ABOVE OUR SUPPORTED THRESHOLD.

    Source code(tar.gz)
    Source code(zip)
  • v3.5.0(Jan 7, 2019)

    Apollo 3.5 is capable of navigating through complex driving scenarios such as residential and downtown areas. With 360-degree visibility and upgraded perception algorithms to handle the changing conditions of urban roads, the car is more secure and aware.

    Major Features And Improvements

    • Upgraded Sensor Suite
      • VLS-128 Line LiDAR
      • FPD-Link Cameras
      • Continental long-range radars
      • Apollo Extension Unit (AXU)
      • Additional IPC
    • Brand New Runtime Framework - Apollo Cyber RT, which is specifically targeted towards autonomous driving
    • New Perception algorithms
    • Scenario-based planning with a new planning algorithm, Open Space Planner
    • New Localization algorithm
    • V2X Capabilities
    • Open Vehicle Certification platform - 2 new vehicles added: GAC GE3 and GWM WEY VV6

    Autonomous Drive Capabilities

    Vehicles with this version can drive autonomously in complex urban road conditions, including both residential and downtown areas. BE CAUTIOUS WHEN DRIVING AUTONOMOUSLY, ESPECIALLY AT NIGHT OR IN POOR VISIBILITY CONDITIONS. URBAN DRIVING INVOLVES NAVIGATING HIGH-RISK ZONES SUCH AS SCHOOLS. PLEASE TEST APOLLO 3.5 WITH SUPPORT FROM THE APOLLO ENGINEERING TEAM.

    Source code(tar.gz)
    Source code(zip)
    demo_3.5.record(91.78 MB)
  • v3.0.0(Jul 3, 2018)

    Apollo 3.0 enables an L4, product-level solution that allows vehicles to drive in a closed-venue setting at low speed. Automakers can now leverage this one-stop solution for autonomous driving without having to customize it on their own.

    Major Features And Improvements

    • New Safety module called Guardian
    • Enhanced Surveillance module - Monitor
    • Hardware service layer that now acts as a platform rather than a product, giving developers the flexibility to integrate their own hardware
    • Apollo Sensor Unit (ASU)
    • New Gatekeeper - Ultrasonic Sensor
    • Perception module changes:
      • CIPV detection/ Tailgating – moving within a single lane
      • Whole lane line support - bold line support for long-range accuracy. Two different camera installation types are supported: low and high.
      • Online pose estimation – determines angle change and estimates it when there are bumps or slopes to ensure that the sensors move with the car and the angle/pose changes accordingly
      • Visual localization – we now use camera for localization. This functionality is currently being tested.
      • Ultrasonic Sensor – currently being tested as the final gatekeeper, to be used in conjunction with Guardian for automated emergency braking and vertical/perpendicular parking.
    Source code(tar.gz)
    Source code(zip)
  • v2.5.0(Apr 18, 2018)

    This release allows the vehicle to run autonomously on geo-fenced highways. Vehicles are able to perform lane-keeping cruise and avoid collisions with leading vehicles.

    Major Features And Improvements

    • Upgrade MSF localization
    • Upgrade DreamView with more visualization features
    • Add HD map data collection tool
    • Add vision based perception with obstacle and lane mark detections
    • Add relative map to support ACC and lane keeping for planning and control
    • Make dockerfile available
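
    The ACC support above boils down to regulating a time gap to the lead vehicle. As a minimal sketch of a constant-time-gap ACC law — parameters, gains, and structure are hypothetical, not Apollo's controller — the idea looks like this:

    ```python
    # Hypothetical constant-time-gap ACC sketch (NOT Apollo's implementation).
    # The ego vehicle commands acceleration to hold a gap of d0 + tau*v_ego
    # behind the lead vehicle, a common ACC spacing policy.
    def acc_command(gap, v_ego, v_lead, d0=5.0, tau=1.5, kp=0.2, kv=0.4):
        """Commanded acceleration (m/s^2) for one control step."""
        desired_gap = d0 + tau * v_ego     # spacing policy
        gap_error = gap - desired_gap      # positive => too far behind the lead
        rel_speed = v_lead - v_ego         # positive => lead pulling away
        return kp * gap_error + kv * rel_speed

    def simulate_acc(steps=6000, dt=0.01):
        gap, v_ego, v_lead = 40.0, 20.0, 15.0   # meters, m/s
        for _ in range(steps):
            # clamp to comfort limits before integrating the ego state
            a = max(-3.0, min(2.0, acc_command(gap, v_ego, v_lead)))
            v_ego = max(0.0, v_ego + a * dt)
            gap += (v_lead - v_ego) * dt
        return gap, v_ego
    ```

    The loop settles with the ego matching the lead's speed at the policy gap d0 + tau*v_lead (27.5 m with these made-up numbers).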

    Autonomous Drive Capabilities

    Vehicles with this version can drive autonomously on highways at higher speeds without HD map support. The highway needs clear, white-painted lane markings with minimal curvature. The performance of vision-based perception degrades significantly at night or under strong light flares. BE CAUTIOUS WHEN DRIVING AUTONOMOUSLY, ESPECIALLY AT NIGHT OR IN POOR VISIBILITY CONDITIONS.

    Source code(tar.gz)
    Source code(zip)
    demo_2.5.bag(1633.54 MB)
    multi_lidar_gnss_calibrator_and_doc.zip(1.42 MB)
  • v2.0.0(Dec 30, 2017)

    Apollo 2.0 enables your vehicle to drive on simple urban roads autonomously. It is able to cruise, avoid collisions with obstacles, stop at traffic lights and change lanes.

    Read our Release Note (https://github.com/ApolloAuto/apollo/blob/master/RELEASE.md) to learn more about major features and improvements. Open a new issue (https://github.com/ApolloAuto/apollo/issues) if you have questions or feedback. And of course, we always appreciate your contribution (https://github.com/ApolloAuto/apollo/blob/master/CONTRIBUTING.md).

    You can get started with Apollo 2.0 (https://github.com/ApolloAuto/apollo/archive/v2.0.0.tar.gz) today.

    Source code(tar.gz)
    Source code(zip)
    apollo_2.0_camera_sample.bag(296.37 MB)
    calibration.tar.gz(3.13 MB)
    demo_2.0.bag(63.11 MB)
  • v1.5.0(Sep 19, 2017)

  • v1.0.0(Jul 4, 2017)

    Apollo has been initiated to provide an open, comprehensive, and reliable software platform for its partners in the automotive and autonomous-driving industries. Partners can use the Apollo software platform and the reference hardware that Apollo has certified as a template to customize in the development of their own autonomous vehicles.

    Apollo 1.0.0, also referred to as the Automatic GPS Waypoint Following, works in an enclosed venue such as a test track or parking lot. It accurately replays a trajectory, and the speed along that trajectory, that a human driver has previously traveled in an enclosed, flat area on solid ground.

    At this stage of development, Apollo 1.0 cannot perceive obstacles in close proximity, drive on public roads, or drive in areas without GPS signals.
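
    The waypoint-following behavior described above amounts to recording timestamped poses and speeds during a manual drive, then looking up the target pose and speed for the elapsed time on replay. A minimal sketch of that record-and-replay bookkeeping — class and method names are hypothetical, not Apollo's code:

    ```python
    import bisect

    # Illustrative trajectory record-and-replay sketch in the spirit of
    # Apollo 1.0's GPS waypoint following (hypothetical, NOT Apollo's code).
    class TrajectoryRecorder:
        def __init__(self):
            self.times, self.poses, self.speeds = [], [], []

        def record(self, t, pose, speed):
            """Append one timestamped sample; t must be strictly increasing."""
            self.times.append(t)
            self.poses.append(pose)
            self.speeds.append(speed)

        def replay(self, t):
            """Return the (pose, speed) recorded at or just before time t,
            clamped to the first/last sample outside the recorded window."""
            i = bisect.bisect_right(self.times, t) - 1
            i = max(0, min(i, len(self.times) - 1))
            return self.poses[i], self.speeds[i]
    ```

    A real replay loop would additionally interpolate between samples and close the loop with a tracking controller; the lookup above only shows the time-indexed bookkeeping.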

    Source code(tar.gz)
    Source code(zip)
    demo_1.0.bag(30.77 MB)