An open autonomous driving platform

Overview



We choose to go to the moon in this decade and do the other things,

not because they are easy, but because they are hard.

-- John F. Kennedy, 1962

Welcome to Apollo's GitHub page!

Apollo is a high-performance, flexible architecture that accelerates the development, testing, and deployment of autonomous vehicles.

For business and partnership inquiries, please visit our website.

Table of Contents

  1. Introduction
  2. Prerequisites
  3. Individual Versions
  4. Architecture
  5. Installation
  6. Quick Starts
  7. Documents

Introduction

Apollo is loaded with new modules and features, but it must be calibrated and configured correctly before you take it for a spin. Please review the prerequisites and installation steps in detail to ensure that you are well equipped to build and launch Apollo. You can also check out Apollo's architecture overview for a deeper understanding of Apollo's core technology and platforms.

Prerequisites

[New 2021-01] The Apollo platform (stable version) has been upgraded with newer software packages and library dependencies, including:

  1. CUDA upgraded to version 11.1 to support Nvidia Ampere (30x0 series) GPUs, with NVIDIA driver >= 455.32
  2. LibTorch (both CPU and GPU versions) bumped to version 1.7.0 accordingly.

We do not expect this to disrupt your current work, but to ease the migration you will need to:

  1. Update NVIDIA driver on your host to version >= 455.32. (Web link)
  2. Pull the latest code and run the following commands after restarting and logging into the Apollo development container:
# Remove Bazel output of previous builds
rm -rf /apollo/.cache/{bazel,build,repos}
# Re-configure bazelrc.
./apollo.sh config --noninteractive

  • A vehicle equipped with a by-wire system, including but not limited to brake-by-wire, steering-by-wire, throttle-by-wire, and shift-by-wire (Apollo is currently tested on the Lincoln MKZ)

  • A machine with an 8-core processor and at least 16 GB of memory

  • An NVIDIA Turing GPU is strongly recommended

  • Ubuntu 18.04

  • NVIDIA driver version 455.32.00 and above (Web link)

  • Docker-CE version 19.03 and above (Official doc)

  • NVIDIA Container Toolkit (Official doc)
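As a quick sanity check against the version requirements above, the comparison logic can be sketched in Python (a hypothetical helper, not part of Apollo; querying the host with tools such as nvidia-smi or docker version is left to the reader):

```python
# Hypothetical helper to check installed versions against Apollo's minimums.

def version_tuple(v: str) -> tuple:
    """'455.32.00' -> (455, 32, 0) for numeric comparison."""
    return tuple(int(x) for x in v.split("."))

def meets_minimum(installed: str, required: str) -> bool:
    """True if the installed version is at least the required version."""
    return version_tuple(installed) >= version_tuple(required)

# NVIDIA driver must be >= 455.32 for CUDA 11.1 / Ampere support.
assert meets_minimum("460.27.04", "455.32")
assert not meets_minimum("450.80.02", "455.32")

# Docker-CE must be >= 19.03 for GPU support via the NVIDIA Container Toolkit.
assert meets_minimum("19.03", "19.03")
```

Plugging in the output of `nvidia-smi --query-gpu=driver_version --format=csv,noheader` and `docker version --format '{{.Server.Version}}'` gives a quick pass/fail before starting the container.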

Please note: we recommend installing Apollo versions in order, starting with 1.0 and then moving to whichever version you would like to test. This lets you confirm that individual hardware components and modules are functioning correctly, and clear each version's test cases, before progressing to a higher and more capable version, for your safety and the safety of those around you.

Individual Versions:

The following diagram highlights the scope and features of each Apollo release:

Apollo 1.0:

Apollo 1.0, also referred to as the Automatic GPS Waypoint Following, works in an enclosed venue such as a test track or parking lot. This installation is necessary to ensure that Apollo works perfectly with your vehicle. The diagram below lists the various modules in Apollo 1.0.

Apollo 1.5:

Apollo 1.5 is meant for fixed-lane cruising. With the addition of LiDAR, vehicles with this version have better perception of their surroundings and can better map their current position and plan their trajectory for safer maneuvering in their lane. Please note, the modules highlighted in yellow are additions or upgrades for version 1.5.

Apollo 2.0:

Apollo 2.0 supports vehicles autonomously driving on simple urban roads. Vehicles are able to cruise on roads safely, avoid collisions with obstacles, stop at traffic lights, and change lanes if needed to reach their destination. Please note, the modules highlighted in red are additions or upgrades for version 2.0.

Apollo 2.5:

Apollo 2.5 allows the vehicle to autonomously run on geo-fenced highways with a camera for obstacle detection. Vehicles are able to maintain lane control, cruise and avoid collisions with vehicles ahead of them.

Please note: if you need to test Apollo 2.5, please seek the help of the Apollo engineering team for safety purposes. Your safety is our #1 priority, and we want to ensure Apollo 2.5 is integrated correctly with your vehicle before you hit the road.

Apollo 3.0:

Apollo 3.0's primary focus is to provide a platform for developers to build upon in a closed venue low-speed environment. Vehicles are able to maintain lane control, cruise and avoid collisions with vehicles ahead of them.

Apollo 3.5:

Apollo 3.5 is capable of navigating through complex driving scenarios such as residential and downtown areas. The car now has 360-degree visibility, along with upgraded perception algorithms to handle the changing conditions of urban roads, making the car more secure and aware. Scenario-based planning can navigate through complex scenarios, including unprotected turns and narrow streets often found in residential areas and roads with stop signs.

Apollo 5.0:

Apollo 5.0 is an effort to support volume production for Geo-Fenced Autonomous Driving. The car now has 360-degree visibility, along with upgraded perception deep learning model to handle the changing conditions of complex road scenarios, making the car more secure and aware. Scenario-based planning has been enhanced to support additional scenarios like pull over and crossing bare intersections.

Apollo 5.5:

Apollo 5.5 enhances the complex urban road autonomous driving capabilities of previous Apollo releases, by introducing curb-to-curb driving support. With this new addition, Apollo is now a leap closer to fully autonomous urban road driving. The car has complete 360-degree visibility, along with upgraded perception deep learning model and a brand new prediction model to handle the changing conditions of complex road and junction scenarios, making the car more secure and aware.

Apollo 6.0:

Apollo 6.0 incorporates new deep learning models to enhance the capabilities for certain Apollo modules. This version works seamlessly with new additions of data pipeline services to better serve Apollo developers. Apollo 6.0 is also the first version to integrate certain features as a demonstration of our continuous exploration and experimentation efforts towards driverless technology.

Architecture

  • Hardware/ Vehicle Overview

  • Hardware Connection Overview

  • Software Overview

Installation

Congratulations! You have successfully built Apollo without hardware. If you have a vehicle and hardware setup for a particular version, please pick the quick start guide most relevant to your setup:

Quick Starts:

Documents

  • Technical Tutorials: Everything you need to know about Apollo. Written as individual versions with links to every document related to that version.

  • How-To Guides: Brief technical solutions to common problems that developers face during the installation and use of the Apollo platform

  • Specs: A deep dive into Apollo's hardware and software specifications (recommended only for expert-level developers who have successfully installed and launched Apollo)

  • FAQs

Questions

You are welcome to submit questions and bug reports as GitHub Issues.

Copyright and License

Apollo is provided under the Apache-2.0 license.

Disclaimer

The Apollo open source platform provides only the source code for models, algorithms, and processes; a cybersecurity defense strategy will be integrated during deployment for commercialization and productization.

Please refer to the Disclaimer of Apollo in Apollo's official website.

Connect with us

Issues
  • How to generate an OpenDRIVE format file like base_map.xml

    Hi, I want to generate a map file like base_map.xml. Are there any tutorials on how to generate and load an OpenDRIVE format file? I've downloaded the OpenDRIVE Format Specification, Rev. 1.4, but did not find any code samples for generating the .xml file. Thank you very much!

    Type: Question Module: HD Map 
    opened by hyx007 312
  • hdmap - I want to know more about the map in the simulation


    I see the map tool in modules, but I guess the HD map in your simulation is sunnyvale_loop.bin,

    but I can't find this file, and I don't really understand the map.

    All I know is that it compiles the map in OpenDRIVE format to binary,

    but I hope there is some more documentation on how to use the map tool.

    Could someone who knows about it tell me how the original .xml file becomes the final .bin file?

    Type: Help wanted Module: HD Map Module: Simulation & Dreamview 
    opened by carlin314 47
  • Cannot see any car image in the Dreamview or see any trajectory.


    Hi All,

    I successfully finished all the instructions for the demo, but when I switch to the Dreamview window, there is neither a car image nor a trajectory such as in the readme when I run rosbag play -l.

    I can see the bag file playing, but nothing shows up on localhost:8888. Has anyone encountered the same problem?

    Thanks!

    Type: Question 
    opened by tomzhang666 39
  • How to feed images to lane detection module properly?


    I'm trying to feed images from the KAIST data set to the lane detection module.

    My procedure is like this:

    1. Build a cyber record file. First, create a /tf message. Then, resize the KAIST images from 1280x560 to 1920x1080, and create /apollo/sensor/camera/front_6mm/image messages. Pack them into a cyber record file.
    2. Play the record file while running the lane detection module. The lane detection module is run as indicated on this page.
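As a side note on the resize step above, plain arithmetic on those dimensions (not Apollo code) shows the scaling is anisotropic, which by itself can distort projected lane geometry if the camera intrinsics are not updated to match:

```python
# The KAIST frames are resized from 1280x560 to 1920x1080, which does not
# preserve the aspect ratio: the image is stretched more vertically than
# horizontally, so projections assuming the original intrinsics will skew.

src_w, src_h = 1280, 560
dst_w, dst_h = 1920, 1080

scale_x = dst_w / src_w   # 1.5
scale_y = dst_h / src_h   # ~1.93

assert abs(scale_x - 1.5) < 1e-9
assert scale_x != scale_y  # anisotropic scaling
```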

    The lane detection module is running without reporting any error, and the detected lanes match the lanes in the picture very well. But in the BEV, the detected lanes are not parallel while they should be (see the picture below). My guess is that the camera location of the KAIST data set is not the same as Apollo's default camera location, but I couldn't find a way to correct it.

    So my question is, how should I properly feed images to the lane detection module so that it can yield parallel lanes? Thanks a lot.

    Screenshot of the lane detection module: in the right part of the screenshot, the detected lanes are not parallel.

    opened by tandf 35
  • Any plan for supporting Nvidia RTX 2080?


    Hi,

    I wonder if Apollo is going to support the new 20 series GPUs such as RTX 2080?

    As indicated in another issue, Apollo comes with a pre-compiled version of Caffe which doesn't support new GPU architectures such as Volta. Is this still the case in version 3.5? If not, would Apollo provide support in next releases for 20 series GPUs?

    Thanks, Junjie

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 14.04): Ubuntu 14.04
    • Apollo installed from (source or binary): source
    • Apollo version (1.0, 1.5, 2.0, 2.5, 3.0): 3.0

    Steps to reproduce the issue:

    • Please use bullet points and include as much details as possible:

    Supporting materials (screenshots, command lines, code/script snippets):

    Type: Question Module: Perception Module: Hardware 
    opened by junjieshen 33
  • Questions in running cyber_recorder


    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 18.04): 18.04
    • Apollo installed from (source or binary):
    • Apollo version (3.5, 5.0, 5.5, 6.0): 6.0
    • Output of apollo.sh config if on master branch:

    Steps to reproduce the issue:

    • run simulation together with apollo-6.0 and lgsvl in modular test
    • run cyber_recorder to record message from channel
    • fail to play or get information from record

    Supporting materials (screenshots, command lines, code/script snippets):

    error_recorder

    opened by Yimags 30
  • can not connect to http://localhost:8887/


    I do as follows:

    bash docker/scripts/install_docker.sh
    docker ps  
    bash docker/scripts/dev_start.sh
    bash docker/scripts/dev_into.sh
    bash scripts/hmi.sh
    

    and then

    ...
    [WARNING] Failed to find device with pattern "ttyUSB*" ...
    ...
    [WARNING] Failed to find device with pattern "ram*" ...
    ...
    ...
    Start roscore...
    HMI ros node service running at localhost:8887
    HMI running at http://localhost:8887
    
    

    but I cannot connect to http://localhost:8887/

    Type: Bug 
    opened by PikachuHy 28
  • Dreamview does not appear on localhost after build succeeds.


    After the build has passed successfully on Ubuntu 17.10:

    root@in_dev_docker:/apollo$ sudo bash scripts/bootstrap.sh
    Started supervisord with dev conf
    Start roscore...
    voice_detector: started
    Dreamview is running at http://localhost:8888
    

    But Firefox cannot connect to localhost:8888. apollo/data/log/ has the following:

    -rw-r--r-- 1 root root   0 Mar 19 17:23 dreamview.out
    lrwxrwxrwx 1 root root  57 Mar 19 17:23 monitor.INFO -> monitor.in_dev_docker.root.log.INFO.20180319-172355.15403
    -rw-r--r-- 1 root root 21K Mar 19 17:23 SystemMonitor.flags
    lrwxrwxrwx 1 root root  60 Mar 19 17:23 monitor.WARNING -> monitor.in_dev_docker.root.log.WARNING.20180319-172357.15403
    lrwxrwxrwx 1 root root  58 Mar 19 17:23 monitor.ERROR -> monitor.in_dev_docker.root.log.ERROR.20180319-172357.15403
    -rw-r--r-- 1 root root 16M Mar 28 11:48 monitor.out
    -rw-r--r-- 1 root root 16M Mar 28 11:48 monitor.in_dev_docker.root.log.WARNING.20180319-172357.15403
    -rw-r--r-- 1 root root 17M Mar 28 11:48 monitor.in_dev_docker.root.log.INFO.20180319-172355.15403
    -rw-r--r-- 1 root root 16M Mar 28 11:48 monitor.in_dev_docker.root.log.ERROR.20180319-172357.15403
    -rw-r--r-- 1 root root   0 Mar 28 16:48 roscore.out
    -rw-r--r-- 1 root root   0 Mar 28 16:48 voice_detector.out
    

    The contents of monitor.ERROR are:

    Log file created at: 2018/03/19 17:23:57
    Running on machine: in_dev_docker
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    E0319 17:23:57.933039 15439 can_checker_factory.cc:48] Failed to create CAN checker with parameter: brand: ESD_CAN
    type: PCI_CARD
    channel_id: CHANNEL_ID_ZERO
    E0319 17:24:05.134542 15439 info_collector.cc:55] Cannot load file data/log/canbus.flags
    E0319 17:24:05.134588 15439 info_collector.cc:55] Cannot load file data/log/control.flags
    E0319 17:24:05.134618 15439 info_collector.cc:55] Cannot load file data/log/localization.flags
    E0319 17:24:05.134646 15439 info_collector.cc:55] Cannot load file data/log/perception.flags
    E0319 17:24:05.134675 15439 info_collector.cc:55] Cannot load file data/log/planning.flags
    E0319 17:24:05.134704 15439 info_collector.cc:55] Cannot load file data/log/prediction.flags
    E0319 17:24:05.134734 15439 info_collector.cc:55] Cannot load file data/log/routing.flags
    E0319 17:24:32.730955 15439 can_checker_factory.cc:48] Failed to create CAN checker with parameter: brand: ESD_CAN
    

    Any help would be appreciated...

    Type: Question Module: Simulation & Dreamview 
    opened by abhisheknaik96 27
  • Build Apollo in Visual Studio Code


    Hi Guys,

    I am an autonomous vehicle enthusiast who recently started working on Apollo. My question/doubt is as follows.

    I have successfully built the Apollo code using the command line and ran the simulation as well. However, to get a better understanding of the control flow of the code, I would like to build the code in Visual Studio Code and run unit tests.

    I followed the procedure given in apollo/docs/howto/how_to_build_and_release.md. But when I run build in VS code I am getting the below error.

    [ERROR] Failed to start docker container "apollo_dev" based on image: apolloauto/apollo:dev-x86_64-20180130_1338 apollo_docker.sh: line 140: docker: command not found The terminal process terminated with exit code: 127

    Does that mean VS Code should also be run inside Docker? Currently I am running it on my host machine, outside Docker.

    Excuse me if the question looks silly.

    Thank you, KK

    Type: Question Module: Docker 
    opened by kk2491 26
  • How can I see lane detection from a camera in SVL


    Hi, I'm trying to see lane detection from a camera in the SVL simulator, and I am trying to do as specified in this document.

    First I create a record file from the SVL. Next I do points 1 through 7. At point 8, I chose "If you want to test lane detection alone" using the command mainboard -d ./modules/perception/production/dag/dag_streaming_perception_lane.dag. To see the lane results, I set enable_visualization: true in modules/perception/production/conf/perception/camera/lane_detection_component.config before executing point 1 from the document. When I play the recorded file, the perception camera component turns off.

    How can I fix that? Thank you

    Module: Simulation & Dreamview 
    opened by rzr900ipl 25
  • How to deal with unix:///tmp/supervisor.sock refused connection when start the HMI?


    After I installed Docker and built Apollo, I met the problem "Started supervisord with dev conf / Start roscore... / voice_detector: started / unix:///tmp/supervisor.sock refused connection" when starting the HMI. Has anyone met this problem, and how do you deal with it? Thanks!

    Type: Help wanted Module: Docker 
    opened by JoelinChee 25
  • Pointpillars detection not working properly before the update #13189 "Perception: update pointpillars network and models"


    I am trying the Dreamview lidar perception practice on several Apollo version builds, following this guide: 使用Dreamview调试Apollo激光雷达感知实践 (Using Dreamview to Debug Apollo Lidar Perception). In change #13189, pointpillars detection removes the ONNX models and uses 5 LibTorch models instead.

    My tests found that after change #13189, the Dreamview lidar perception runs normally and obstacles are shown in Dreamview. But before change #13189, no obstacles are shown in Dreamview. From cyber_monitor, the channel /apollo/perception/obstacles outputs continuously at some rate but the messages are empty (no obstacles). It looks like the lidar perception is running "normally" but no obstacles are detected.

    I added a debug print in point_pillars_detection.cc to print out the detected objects and found that:

    1. in the good case (after change #13189), 10 to 20+ objects are detected each time, with types such as CAR, BUS, etc.
    2. in the NG case (before #13189), 0 objects are detected most of the time; occasionally there are 1-2 objects with type UNKNOWN or CAR

    I also ran the pointpillars detection test suite, and the results show PASS for both cases. The output objects are somewhat different but look acceptable (after #13189 it detected 17 objects in the data, while before it detected 16).

    1. Is this an expected/known result for lidar perception before the change #13189?
    2. Is this lidar perception practice guide (and the data record used in it) suitable for the versions before this change #13189? If not can you suggest the correct guide and data record for test?
    3. Any other steps or configuration I missed here?
    Module: Perception 
    opened by bismack163 0
  • Camera fusion in Apollo


    Hello, I'm confused about the camera fusion in Apollo. Since there are two cameras with different focal lengths in Apollo, how to fuse the results from these two cameras? Should the results from all cameras be fused before being fused with Lidar's result? Is there any tutorial docs about this part or which part of codes should I refer to? Thanks!

    Module: Perception 
    opened by sunjia0909 0
  • About ADCtrajectory heading angle in r7.0


    Does the heading angle in the trajectory sent by the planning module use the angle of the vehicle axis, or the direction angle of the vehicle's velocity?

    System information

    • OS Platform and Distribution ( Ubuntu 20.04):
    • Apollo installed from (source):
    • Apollo version (7.0):
    Module: Planning 
    opened by zhuiyuehun 0
  • Apollo fails to drive forward when vehicle starts on top of the stop line


    Describe the bug

    When the ego vehicle approaches the stop sign, it goes through the pre_stop, stop, creep, and intersection_cruise stages, and resumes moving after stopping before the stop line. However, if I initialize the scenario by placing the ego vehicle on top of a stop line, the planning module does not recognize that it is in one of the stop sign stages, and never begins moving to complete any routing request.

    To Reproduce

    1. Start Dreamview
    2. Modify the parameter passed to send_localization to 0, -1, -2, or -3 in the provided script. (under "Additional Context" section)
    3. Run provided script to send localization to Apollo.
    4. Turn on SimControl, planning, routing.
    5. Send a routing request to make the vehicle go straight, by clicking on a point across the junction, and "Send Routing Request" button.
    6. Observe that routing produced a trajectory but there is no planning trajectory, thus the vehicle stops indefinitely.
    7. Turn off Sim Control.
    8. Repeat steps 2-6, but change the value in step 2 to -4 or -5. Observe that the vehicle completes the routing request.

    Expected behavior

    Although the scenario started with the vehicle on top of the stop line, the vehicle should recognize its current scenario as stop_sign/unprotected/stop, or some other scenario, and resume moving forward to complete the routing request.

    Screenshots

    Expected routing request on Borregas Avenue.

    Routing will be completed if the vehicle's head is behind the stop line.

    Routing will not be completed if the vehicle's head has passed the stop line.

    Dreamview when reproducing the bug.

    Additional context

    provided_script.zip

    The scenario manager calls reference_line_info.FirstEncounteredOverlaps() to check for a stop sign overlap, and reference_line_ starts from the head of the vehicle. So if the head of the vehicle has passed the stop line, the scenario manager will fail to recognize that a stop sign overlap exists.

    https://github.com/ApolloAuto/apollo/blob/9367741c57753e07c753bad82e1eac09876b344a/modules/planning/common/reference_line_info.h#L334-L339
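The failure mode can be sketched in one dimension (a hypothetical simplification for illustration, not Apollo's actual API):

```python
# Hypothetical 1-D sketch of the overlap check described above.
# The reference line starts at the vehicle's front bumper (s = 0),
# so a stop line behind the bumper has negative s and is never "encountered".

def first_encountered_stop_sign(stop_line_s: float) -> bool:
    """Return True if the stop line lies ahead on the reference line."""
    return stop_line_s >= 0.0

vehicle_length = 4.9  # metres; illustrative value for a mid-size sedan

# Front bumper 1 m before the stop line: overlap found, scenario triggers.
assert first_encountered_stop_sign(stop_line_s=1.0)

# Front bumper 0.5 m past the stop line: overlap missed, even though the
# rear of the vehicle (s = -vehicle_length) is still behind the line.
assert not first_encountered_stop_sign(stop_line_s=-0.5)
assert -0.5 > -vehicle_length  # the body still straddles the stop line
```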

    From the planning log, I see messages like

    E0805 19:05:17.417805 24312 st_bounds_decider.cc:316] No valid st-boundary exists.
    E0805 19:05:17.417834 24312 lane_follow_stage.cc:177] Failed to run tasks[ST_BOUNDS_DECIDER], Error message: No valid st-boundary exists.
    E0805 19:05:17.418452 24312 lane_follow_stage.cc:373] Use last frame good path to do speed fallback
    E0805 19:05:17.418457 24312 lane_follow_stage.cc:288] Speed fallback due to algorithm failure
    E0805 19:05:17.418460 24312 speed_profile_generator.cc:38] Fallback using piecewise jerk speed!
    W0805 19:05:17.418464 24312 speed_profile_generator.cc:41] init_v = 0, init_a = 0
    W0805 19:05:17.418471 24312 speed_profile_generator.cc:47] Already stopped! Nothing to do in GenerateFallbackSpeed()
    

    To fix this bug I believe there are 2 options:

    1. Extend the reference line to include the length of the vehicle. So reference line starts from the back of the vehicle. I am not sure if this is a good idea if other modules are implemented assuming reference line starts from the head of the vehicle.
    2. Modify how overlap is calculated, to consider the length of the vehicle. https://github.com/ApolloAuto/apollo/blob/9367741c57753e07c753bad82e1eac09876b344a/modules/map/pnc_map/path.cc#L494
    Module: Planning 
    opened by YuqiHuai 0
  • Apollo planner fails to resume moving forward when driving through closely-consecutive stop signs


    System information

    • OS Platform and Distribution: Ubuntu 18.04
    • Apollo installed from (source or binary): source
    • Apollo version (3.5, 5.0, 5.5, 6.0): 7.0 (master branch - commit #13620)

    Describe the bug

    The bug happens when the ego car drives through 2 controlled junctions with 2 closely consecutive stop signs: the ego car stops forever at the second stop sign because it cannot exit the planner state INTERSECTION_CRUISE attached to the first stop sign.

    Typically, when Apollo recognizes a stop sign, its planner states are expected to cycle from LANE_FOLLOW -> PRE_STOP -> STOP -> CREEP -> INTERSECTION_CRUISE -> LANE_FOLLOW (back to the regular state). See the Apollo documentation.

    In Apollo's implementation, when the ego car is leaving a stop sign and should change from INTERSECTION_CRUISE to LANE_FOLLOW, Apollo requires 1 of the 2 following conditions to be satisfied:

    (1) the reference line and corresponding traffic signal location is no longer overlapping.

    (2) if overlapping, the distance between the start of the reference line (the ego car's current location) and the stop line (of the first stop sign) needs to be greater than a constant value kIntersectionPassDist = 40.0 (m). The basic idea is that this condition is true when the ego car has gone more than 40 meters past the stop sign. https://github.com/ApolloAuto/apollo/blob/334d02d7c2e05ca882d7e635cc17f7a0f3b5fcb1/modules/planning/scenarios/common/stage_intersection_cruise_impl.cc#L88
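The two exit conditions can be sketched as follows (a hypothetical 1-D simplification for illustration, not Apollo's actual API):

```python
# Hypothetical simplification of the INTERSECTION_CRUISE exit check:
# leave the stage when the stop-sign overlap is gone, or when the ego car
# is far enough past the stop line.

K_INTERSECTION_PASS_DIST = 40.0  # metres, mirrors kIntersectionPassDist

def can_leave_intersection_cruise(still_overlapping: bool,
                                  dist_past_stop_line: float) -> bool:
    """Condition (1) or condition (2) from the report above."""
    return (not still_overlapping) or dist_past_stop_line > K_INTERSECTION_PASS_DIST

# 50 m past the first stop line: condition (2) holds, state can change.
assert can_leave_intersection_cruise(True, 50.0)

# Two stop signs only 30 m apart: still overlapping and under 40 m,
# so the stage never exits.
assert not can_leave_intersection_cruise(True, 30.0)
```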

    Back to our scenario: when the ego car goes past the first stop sign and its state is INTERSECTION_CRUISE, instead of returning to the regular state LANE_FOLLOW (by satisfying condition 1 or 2), the ego car immediately bumps into the second stop sign (because the two stop signs are located too close together). Because neither condition 1 nor condition 2 has been satisfied yet, the planner state is still at INTERSECTION_CRUISE of the first stop sign, so it cannot break the old cycle before starting a new one for the second stop sign. Therefore, the ego car is stuck forever at the second stop sign.

    To fix this bug, developers should update the value of kIntersectionPassDist dynamically based on the distance between intersections. I validated the root cause of this bug by hardcoding this value to 5.0 m, after which the bug no longer occurred.

    ...
    static constexpr double kIntersectionPassDist = 5.0;  // unit: m
    ...
    

    The fix means that after the ego car has just passed the first stop sign, even while still overlapping with the first stop sign's area, it will satisfy condition 2 once the distance exceeds 5.0 m (which is much easier to satisfy than the original 40.0 m). Thus, the planner state can switch to LANE_FOLLOW, ending the cycle of the first stop sign and starting a new cycle for the second stop sign.

    Attachments

    Bug Video Demo: https://drive.google.com/file/d/1xdn13C62VqQiW2ULbZ25j3d7ewDwY-xe/view
    Bug Cyber Record: https://drive.google.com/file/d/1kQIHZqT3mN1mu-iEhV0x6eq-_N4y8Fgl/view
    After-Fix Video Demo: https://drive.google.com/file/d/1TJf-fO6LgKg57q3kplP8anb8YxYQeNTF/view
    HD Map - Shalun (released by LGSVL): https://drive.google.com/file/d/16U-D8tRju0FB0OAduV3ei69qCVhYSCmO/view

    *All recorded simulations are configured with SimControl (and enabling Routing, Prediction and Planning modules)

    Module: Planning 
    opened by tuanngokien 0
Releases(v7.0.0)
  • v7.0.0(Dec 28, 2021)

    Apollo 7.0 incorporates 3 brand new deep learning models to enhance the capabilities of the Apollo Perception and Prediction modules. Apollo Studio is introduced in this version, combined with Data Pipeline, to provide a one-stop online development platform to better serve Apollo developers. Apollo 7.0 also releases a PnC reinforcement learning model training and simulation evaluation service based on the previous simulation service.

    Major Features and Improvements

    • Brand New Deep Learning Models
      • Mask-Pillars obstacle detection model based on PointPillars
      • Inter-TNT prediction model based on interactive prediction & planning evaluator
      • Camera obstacle detection model based on SMOKE
    • Apollo Studio Services
      • Practice environment service
      • Vehicle management service
    • PnC Reinforcement Learning Services
      • Smart training and evaluation service
      • Extension interface
    • Upgraded Perception Module Code Structure

    [Note] All models and methodologies included in Apollo 7.0 are for research purposes only. Productized and commercial use of these models is NOT encouraged and is at your own risk. Please be cautious when trying Apollo 7.0, and use sufficient safety protection mechanisms. Your feedback is highly appreciated and helps us continuously improve the models.

  • v6.0.0(Sep 21, 2020)

    Apollo 6.0 incorporates new deep learning models to enhance the capabilities for certain Apollo modules. This version works seamlessly with new additions of data pipeline services to better serve Apollo developers. Apollo 6.0 is also the first version to integrate certain features as a demonstration of our continuous exploration and experimentation efforts towards driverless technology.

    Major Features and Improvements

    • Upgraded Deep Learning Models
      • PointPillars based obstacle detection model
      • Semantic map based pedestrian prediction model
      • Learning based trajectory planning model
    • Brand New Data Pipeline Services
      • Low speed obstacle prediction model training service with semantic map support
      • PointPillars based obstacle detection model training service
      • Control profiling service
      • Vehicle dynamic model training service
      • Open space planner profiling service
      • Complete control parameter auto-tune service
    • Driverless Research
      • Remote control interface with DreamView integration
      • Audio based emergency vehicle detection system
    • Upgraded dev environment including build and dependency updates

    [Note] All models and methodologies included in Apollo 6.0 are for research purposes only. Productized and commercial use of these models is NOT encouraged and is at your own risk. Please be cautious when trying Apollo 6.0, and use sufficient safety protection mechanisms. Your feedback is highly appreciated and helps us continuously improve the models.

  • v5.5.0(Jan 6, 2020)

    Apollo 5.5 enhances the complex urban road autonomous driving capabilities of previous Apollo releases by introducing curb-to-curb driving support. With this new addition, Apollo is now a leap closer to fully autonomous urban road driving. The car has complete 360-degree visibility, along with an upgraded perception deep learning model and a brand new prediction model to handle the changing conditions of complex road and junction scenarios, making the car more secure and aware. New planning scenarios have been introduced to support curb-side functionality.

    Major Features And Improvements

    • Brand new Data Pipeline Service
      • Sensor Calibration service
    • Brand new module - Storytelling
    • Scenario-Based Planning with new planning scenarios to support curb-to-curb driving
      • Park-and-go
      • Emergency
    • Prediction Model - Caution Obstacle
      • Semantic LSTM evaluator
      • Extrapolation predictor
    • Control module
      • Model Reference Adaptive Control (MRAC)
      • Control profiling service
    • Simulation scenarios

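    As a rough illustration of what an extrapolation predictor does, the sketch below extends an obstacle's observed track forward under a constant-velocity assumption. This is plain Python with illustrative names (`TrajectoryPoint`, `extrapolate`), not Apollo's actual prediction API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrajectoryPoint:
    x: float
    y: float
    t: float  # timestamp in seconds

def extrapolate(history: List[TrajectoryPoint], horizon: float, dt: float) -> List[TrajectoryPoint]:
    """Extend an observed trajectory forward assuming constant velocity.

    The velocity is estimated from the last two observed points, so the
    history must contain at least two of them.
    """
    if len(history) < 2:
        raise ValueError("need at least two observed points")
    p_prev, p_last = history[-2], history[-1]
    span = p_last.t - p_prev.t
    vx = (p_last.x - p_prev.x) / span
    vy = (p_last.y - p_prev.y) / span
    steps = int(horizon / dt)
    return [
        TrajectoryPoint(
            x=p_last.x + vx * dt * i,
            y=p_last.y + vy * dt * i,
            t=p_last.t + dt * i,
        )
        for i in range(1, steps + 1)
    ]
```

    In Apollo itself, extrapolation is applied to the tail of the Semantic LSTM evaluator's predicted trajectory; the constant-velocity model here is only the simplest possible stand-in.
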
    Autonomous Drive Capabilities

    Vehicles with this version can drive autonomously in complex urban road conditions, including both residential and downtown areas. BE CAUTIOUS WHEN DRIVING AUTONOMOUSLY, ESPECIALLY AT NIGHT OR IN POOR-VISIBILITY ENVIRONMENTS. URBAN DRIVING INVOLVES NAVIGATING HIGH-RISK ZONES SUCH AS SCHOOLS. PLEASE TEST APOLLO 5.5 WITH SUPPORT FROM THE APOLLO ENGINEERING TEAM, AND AVOID DRIVING THE VEHICLE ON THE HIGHWAY OR AT SPEEDS ABOVE OUR SUPPORTED THRESHOLD.

    Source code(tar.gz)
    Source code(zip)
  • v5.0.0(Jun 29, 2019)

    Apollo 5.0 is an effort to support volume production for Geo-Fenced Autonomous Driving. The car now has 360-degree visibility, along with an upgraded perception deep learning model to handle the changing conditions of complex road scenarios, making the car more secure and aware. Scenario-based planning has been enhanced to support additional scenarios such as pull over and crossing bare intersections.

    Major Features And Improvements

    • Brand new Data Pipeline Service
      • Vehicle Calibration
    • New Perception algorithms
    • Sensor Calibration Service
    • Scenario-Based Planning with a new planning algorithm, Open Space Planner, and new supported scenarios
      • Intersection - STOP Sign, Traffic Light, Bare Intersection
      • Park - Valet, Pull Over
    • Map Data Verification tool
    • Prediction Evaluators
    • Simulation web platform - Dreamland
      • Scenario Editor
      • Control-in-loop Simulation
    • Cyber RT runtime framework
      • Official ARM CPU support with full docker dev environment support: https://github.com/ApolloAuto/apollo/blob/master/docs/cyber/CyberRT_Docker.md
      • Python language support for the full set of Cyber RT APIs
      • Cyber RT API website goes online: https://cyber-rt.readthedocs.io/en/latest/

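    Cyber RT's Python API mirrors the C++ node/reader/writer model: a node creates writers and readers on named channels. The snippet below sketches that channel pattern in framework-free Python; the `ChannelBus` class is a stand-in for illustration only, since the real bindings live in Apollo's `cyber_py` package and require the Apollo development container:

```python
from collections import defaultdict
from typing import Any, Callable, DefaultDict, List

class ChannelBus:
    """In-process stand-in for Cyber RT's channel model: readers register
    callbacks on a named channel, writers publish messages to it."""

    def __init__(self) -> None:
        self._readers: DefaultDict[str, List[Callable[[Any], None]]] = defaultdict(list)

    def create_reader(self, channel: str, callback: Callable[[Any], None]) -> None:
        # Subscribe: every message written to `channel` invokes `callback`.
        self._readers[channel].append(callback)

    def write(self, channel: str, msg: Any) -> None:
        # Publish: deliver the message to all registered readers.
        for callback in self._readers[channel]:
            callback(msg)
```

    With the real API the flow has the same shape: initialize the framework, create a node, then create readers and writers on channels such as `/apollo/canbus/chassis`.
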
    Autonomous Drive Capabilities

    Vehicles with this version can drive autonomously in complex urban road conditions, including both residential and downtown areas. BE CAUTIOUS WHEN DRIVING AUTONOMOUSLY, ESPECIALLY AT NIGHT OR IN POOR-VISIBILITY ENVIRONMENTS. URBAN DRIVING INVOLVES NAVIGATING HIGH-RISK ZONES SUCH AS SCHOOLS. PLEASE TEST APOLLO 5.0 WITH SUPPORT FROM THE APOLLO ENGINEERING TEAM, AND AVOID DRIVING THE VEHICLE ON THE HIGHWAY OR AT SPEEDS ABOVE OUR SUPPORTED THRESHOLD.

    Source code(tar.gz)
    Source code(zip)
  • v3.5.0(Jan 7, 2019)

    Apollo 3.5 is capable of navigating through complex driving scenarios such as residential and downtown areas. With 360-degree visibility and upgraded perception algorithms to handle the changing conditions of urban roads, the car is more secure and aware.

    Major Features And Improvements

    • Upgraded Sensor Suite
      • VLS-128 Line LiDAR
      • FPD-Link Cameras
      • Continental long-range radars
      • Apollo Extension Unit (AXU)
      • Additional IPC
    • Brand New Runtime Framework - Apollo Cyber RT which is specifically targeted towards autonomous driving
    • New Perception algorithms
    • Scenario-Based Planning with a new planning algorithm, Open Space Planner
    • New Localization algorithm
    • V2X Capabilities
    • Open Vehicle Certification platform - 2 new vehicles added: GAC GE3 and GWM WEY VV6

    Autonomous Drive Capabilities

    Vehicles with this version can drive autonomously in complex urban road conditions, including both residential and downtown areas. BE CAUTIOUS WHEN DRIVING AUTONOMOUSLY, ESPECIALLY AT NIGHT OR IN POOR-VISIBILITY ENVIRONMENTS. URBAN DRIVING INVOLVES NAVIGATING HIGH-RISK ZONES SUCH AS SCHOOLS. PLEASE TEST APOLLO 3.5 WITH SUPPORT FROM THE APOLLO ENGINEERING TEAM.

    Source code(tar.gz)
    Source code(zip)
    demo_3.5.record(91.78 MB)
  • v3.0.0(Jul 3, 2018)

    Apollo 3.0 enables an L4 product-level solution that allows vehicles to drive at low speed in a closed-venue setting. Automakers can now leverage this one-stop solution for autonomous driving without having to customize it on their own.

    Major Features And Improvements

    • New Safety module called Guardian
    • Enhanced Surveillance module - Monitor
    • Hardware service layer that will now act as a platform rather than a product, giving developers the flexibility to integrate their own hardware
    • Apollo Sensor Unit (ASU)
    • New Gatekeeper - Ultrasonic Sensor
    • Perception module changes:
      • CIPV (closest in-path vehicle) detection / tailgating – moving within a single lane
      • Whole lane line support – bold line support for long-range accuracy. There are two different camera installation types, low and high.
      • Online pose estimation – determines and estimates the angle change over bumps or slopes, ensuring that the sensors move with the car and the angle/pose changes accordingly
      • Visual localization – the camera is now used for localization. This functionality is currently being tested.
      • Ultrasonic Sensor – currently being tested as the final gatekeeper, to be used in conjunction with Guardian for automated emergency braking and vertical/perpendicular parking.
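
    CIPV selection can be sketched as: among obstacles ahead of the ego car, keep those whose lateral offset stays inside the ego lane, then take the nearest one. A minimal illustration follows; the names are hypothetical, not Apollo's perception API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Obstacle:
    obstacle_id: int
    longitudinal_m: float  # distance ahead of the ego car along the lane
    lateral_m: float       # signed offset from the lane centerline

def select_cipv(obstacles: List[Obstacle], half_lane_width_m: float = 1.75) -> Optional[Obstacle]:
    """Pick the closest obstacle ahead of the ego car that lies within
    the ego lane (|lateral offset| within half the lane width)."""
    in_path = [o for o in obstacles
               if o.longitudinal_m > 0 and abs(o.lateral_m) <= half_lane_width_m]
    return min(in_path, key=lambda o: o.longitudinal_m, default=None)
```

    The real module works on tracked camera detections projected into lane geometry; this sketch only captures the final in-path, nearest-first selection step.
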
    Source code(tar.gz)
    Source code(zip)
  • v2.5.0(Apr 18, 2018)

    This release allows the vehicle to drive autonomously on geo-fenced highways. Vehicles are able to perform lane-keeping cruise and avoid collisions with leading vehicles.

    Major Features And Improvements

    • Upgrade MSF localization
    • Upgrade DreamView with more visualization features
    • Add HD map data collection tool
    • Add vision based perception with obstacle and lane mark detections
    • Add relative map to support ACC and lane keeping for planning and control
    • Make dockerfile available

    Autonomous Drive Capabilities

    Vehicles with this version can drive autonomously on highways at higher speeds without HD map support. The highway needs to have clear white-painted lane marks with minimal curvature. The performance of vision-based perception degrades significantly at night or with strong light flares. BE CAUTIOUS WHEN DRIVING AUTONOMOUSLY, ESPECIALLY AT NIGHT OR IN POOR-VISIBILITY ENVIRONMENTS.
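
    The ACC behavior added in this release, keeping a safe gap to the leading vehicle while cruising, reduces to a feedback law on gap error and relative speed. A minimal constant time-gap sketch is shown below; the gains and limits are illustrative, not Apollo's tuned control parameters:

```python
def acc_command(gap_m: float, ego_speed_mps: float, lead_speed_mps: float,
                time_gap_s: float = 2.0, k_gap: float = 0.25, k_speed: float = 0.6,
                max_accel: float = 2.0, max_decel: float = -4.0) -> float:
    """Return a commanded acceleration (m/s^2) that drives the gap toward
    a constant time-gap setpoint while matching the lead vehicle's speed."""
    desired_gap = time_gap_s * ego_speed_mps
    accel = k_gap * (gap_m - desired_gap) + k_speed * (lead_speed_mps - ego_speed_mps)
    # Clamp to comfort/actuator limits before handing off to the vehicle.
    return max(max_decel, min(max_accel, accel))
```

    At the desired gap with matched speeds the command is zero; a shrinking gap or a slower lead vehicle yields braking, clamped to the deceleration limit.
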

    Source code(tar.gz)
    Source code(zip)
    demo_2.5.bag(1633.54 MB)
    multi_lidar_gnss_calibrator_and_doc.zip(1.42 MB)
  • v2.0.0(Dec 30, 2017)

    Apollo 2.0 enables your vehicle to drive on simple urban roads autonomously. It is able to cruise, avoid collisions with obstacles, stop at traffic lights and change lanes.

    Read our Release Notes (https://github.com/ApolloAuto/apollo/blob/master/RELEASE.md) to learn more about major features and improvements. Open a new issue (https://github.com/ApolloAuto/apollo/issues) if you have questions or feedback. And of course, we always appreciate your contributions (https://github.com/ApolloAuto/apollo/blob/master/CONTRIBUTING.md).

    You can get started with Apollo 2.0(https://github.com/ApolloAuto/apollo/archive/v2.0.0.tar.gz) today.

    Source code(tar.gz)
    Source code(zip)
    apollo_2.0_camera_sample.bag(296.37 MB)
    calibration.tar.gz(3.13 MB)
    demo_2.0.bag(63.11 MB)
  • v1.5.0(Sep 19, 2017)

  • v1.0.0(Jul 4, 2017)

    Apollo has been initiated to provide an open, comprehensive, and reliable software platform for its partners in the automotive and autonomous-driving industries. Partners can use the Apollo software platform and the reference hardware that Apollo has certified as a template to customize in the development of their own autonomous vehicles.

    Apollo 1.0.0, also referred to as Automatic GPS Waypoint Following, works in an enclosed venue such as a test track or parking lot. It accurately replays a trajectory, and the speed along that trajectory, that a human driver has previously driven in an enclosed, flat area on solid ground.

    At this stage of development, Apollo 1.0 cannot perceive obstacles in close proximity, drive on public roads, or drive in areas without GPS signals.
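
    The waypoint-following mode reduces to: record (position, speed) samples from a human-driven run, then during replay track the speed of whichever recorded waypoint is nearest to the current GPS fix. A minimal sketch of that lookup follows, with illustrative names rather than Apollo 1.0's actual modules:

```python
import math
from typing import List, Tuple

# (x, y, speed_mps) triples recorded from a human-driven demonstration run
Waypoint = Tuple[float, float, float]

def nearest_target(recorded: List[Waypoint], x: float, y: float) -> Waypoint:
    """Return the recorded waypoint closest to the current position; its
    speed becomes the tracking target for the longitudinal controller."""
    return min(recorded, key=lambda w: math.hypot(w[0] - x, w[1] - y))
```

    This also makes the stated limitation concrete: without a GPS fix there is no current position to match against, so replay cannot proceed.
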

    Source code(tar.gz)
    Source code(zip)
    demo_1.0.bag(30.77 MB)