A ROS based Open Source Simulation Environment for Robotics Beginners

Overview

A robot simulation environment built on ROS. It provides several important experiments that usually precede robotics research, such as various calibrations (camera calibration, depth image registration, hand-eye calibration). It also simulates two grasping algorithms, one a traditional (geometric) method and one a machine-learning method (GPD). The last experiment simulates a robot arm collecting data.
Video links --> Bilibili & YouTube

Author: 苏一休
E-mail: [email protected]

Table of Contents

  • Setup
  • Camera Calibration in the Simulation Environment
  • Depth Image Registration in the Simulation Environment
  • Hand-Eye Calibration in the Simulation Environment
  • Grasping in the Simulation Environment
  • Data Collection in the Simulation Environment

Setup

This package has been tested on Ubuntu 16.04 and should work on other Linux versions. Your catkin workspace needs the following:

  • aruco_ros, which computes the poses of ArUco markers;
  • gpd_ros, deep-learning-based grasp pose detection on point clouds (this package also requires building and installing the GPD library);
  • easy_handeye, the hand-eye calibration package;
  • universal_robot, the ROS packages for UR arms;
  • In addition, robot_sim/package contains some required packages that I have modified, such as gazebo-pkgs, which fixes the mysterious jitter when grasping objects in Gazebo, dh_gripper_ros, the ROS package for the DH Robotics AG-95 two-finger gripper, and other dependencies.
  • Many thanks to the above authors for their selfless contributions.

The following shows how to install and set up this ROS package:

cd ~/catkin_ws/src
git clone -b kinetic-devel https://github.com/pal-robotics/aruco_ros                # aruco_ros
git clone https://github.com/atenpas/gpd_ros/                                       # gpd_ros
git clone https://github.com/IFL-CAMP/easy_handeye                                  # easy_handeye
cd easy_handeye
git reset --hard 64b8b88                                                            # this is the version used
cd ..
git clone -b kinetic-devel https://github.com/ros-industrial/universal_robot.git    # universal_robot
git clone https://github.com/Suyixiu/robot_sim                                      # this package
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src --rosdistro=kinetic                    # install dependencies
catkin_make

cp -r ./src/robot_sim/experiment/hand_eye_calibration/urdf/aruco/ ~/.gazebo/models  # copy the aruco models into Gazebo's default model folder

Camera Calibration in the Simulation Environment

First, load the camera model and the calibration board model:

cd ~/your_catkin_ws/
roslaunch robot_sim camera_calibration.launch

You should then see the RealSense D435i RGBD camera and the calibration board; the board used here has 7x6 inner corners and 0.01 m squares. To use a board of a different size, edit the board parameters in the experiment/camera_calibration/urdf/create_chessboard.py script and run it to generate the board you need under experiment/camera_calibration/urdf, then change the board loaded by camera_calibration.launch.

The camera plugin used in the camera's URDF file sets a 57° field of view and a 1280x720 image resolution, so the ground-truth intrinsics of this camera model can be computed from the definition of each intrinsic parameter. The result agrees with the values published on the camera_info topic.
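As a quick cross-check, the ground-truth intrinsics follow directly from the plugin settings. The sketch below assumes Gazebo's usual convention that the configured 57° angle is the horizontal field of view and that pixels are square:

import math

width, height = 1280, 720
hfov = math.radians(57.0)                    # horizontal field of view from the camera plugin

fx = (width / 2.0) / math.tan(hfov / 2.0)    # focal length in pixels
fy = fx                                      # square pixels
cx, cy = width / 2.0, height / 2.0           # principal point at the image center

print(fx, fy, cx, cy)                        # compare against the K matrix on camera_info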

At this point the camera publishes images at 30 fps. You can calibrate in real time with ROS's own calibration package, or save the images and compute the intrinsics with a camera calibration tool such as MATLAB's camera calibrator. Here we take the cameracalibrator.py script from the ROS camera_calibration package as an example; you can choose which camera to calibrate:

rosrun camera_calibration cameracalibrator.py --size 7x6 --square 0.01 image:=/camera/rgb/image_raw camera:=/camera/rgb     # RGB camera
rosrun camera_calibration cameracalibrator.py --size 7x6 --square 0.01 image:=/camera/ir/image_raw camera:=/camera/ir       # IR camera

Next, move the calibration board. The program generates a random pose once per second and moves the board to it; whenever the camera correctly detects the board's corners, the current images (depth, IR, and RGB) are saved to the save_checkboard_img directory. Stop the program once you think you have collected enough images:

rosrun robot_sim camera_calibration
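For reference, a board-moving node like this can be built on Gazebo's standard /gazebo/set_model_state service; the sketch below illustrates the idea, with a hypothetical model name and pose ranges rather than the exact values used by the robot_sim node:

import random
import rospy
from gazebo_msgs.msg import ModelState
from gazebo_msgs.srv import SetModelState

rospy.init_node('board_mover')
rospy.wait_for_service('/gazebo/set_model_state')
set_state = rospy.ServiceProxy('/gazebo/set_model_state', SetModelState)

rate = rospy.Rate(1.0)                               # one random pose per second
while not rospy.is_shutdown():
    state = ModelState()
    state.model_name = 'chessboard'                  # hypothetical model name
    state.pose.position.x = random.uniform(-0.2, 0.2)
    state.pose.position.y = random.uniform(-0.2, 0.2)
    state.pose.position.z = random.uniform(0.3, 0.6)
    state.pose.orientation.w = 1.0                   # keep the board facing the camera
    set_state(state)
    rate.sleep()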

We provide a Python script, camera_calibration.py, located in robot_sim/experiment/camera_calibration/scripts, which loads the images saved earlier, computes the intrinsics of the RGB and IR cameras, and saves them to IR_cameraintrinsic_parameters.npz and RGB_cameraintrinsic_parameters.npz respectively; just run it with Python. You can also write your own program to compute the intrinsics and compare its results against the values computed from the formula above to verify the accuracy and error of your calibration algorithm.

cd ~/your_catkin_ws/src/robot_sim/experiment/camera_calibration/scripts
python3 camera_calibration.py
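If you write your own version, the core of such a script with OpenCV looks roughly like the sketch below; the image glob pattern is a hypothetical placeholder, not the exact layout of save_checkboard_img:

import glob
import cv2
import numpy as np

rows, cols, square = 6, 7, 0.01                  # 7x6 inner corners, 0.01 m squares
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in glob.glob('save_checkboard_img/rgb_*.png'):      # hypothetical file pattern
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# intrinsic matrix K and distortion coefficients from all detected views
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
np.savez('RGB_cameraintrinsic_parameters.npz', K=K, dist=dist)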

Depth Image Registration in the Simulation Environment

This experiment uses the calibration board images captured by the two cameras in the previous experiment. We provide a Python script, depth_image_registration.py, which computes the registration matrix and stores its first two rows in Registration_matrix.txt, since only the first two rows are needed when actually remapping the depth image.

cd ~/your_catkin_ws/src/robot_sim/experiment/depth_image_registration/scripts
python3 depth_image_registration.py
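Conceptually, the registration back-projects each depth pixel with the IR intrinsics, transforms the resulting 3D point into the RGB camera frame, and re-projects it with the RGB intrinsics. A minimal per-pixel sketch, where K_ir, K_rgb, R, and t stand for the intrinsics and extrinsics recovered from the board views (an illustration, not the script's exact implementation):

import numpy as np

def register_depth(depth, K_ir, K_rgb, R, t):
    # remap an IR-frame depth image into the RGB frame (nearest-pixel splatting)
    h, w = depth.shape
    out = np.zeros_like(depth)
    K_ir_inv = np.linalg.inv(K_ir)
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z == 0:
                continue
            p_ir = z * K_ir_inv.dot([u, v, 1.0])      # back-project with IR intrinsics
            p_rgb = R.dot(p_ir) + t                   # move into the RGB camera frame
            uvw = K_rgb.dot(p_rgb)                    # re-project with RGB intrinsics
            u2 = int(round(uvw[0] / uvw[2]))
            v2 = int(round(uvw[1] / uvw[2]))
            if 0 <= u2 < w and 0 <= v2 < h:
                out[v2, u2] = p_rgb[2]
    return out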

You can then pick any RGB and depth image pair to check whether the registration matrix is correct. Because the matrix is used again later, this check is written directly in C++:

cd ../src
g++ ./depth_image_registration.cpp -o depth_image_registration $(pkg-config --cflags --libs opencv)
./depth_image_registration

Below is a comparison before and after registration.

Hand-Eye Calibration in the Simulation Environment

The hand-eye system used here is the eye-on-hand configuration. First start the MoveIt package we provide for the arm, which loads the UR10 arm, the DH Robotics AG-95 two-finger gripper, and the D435i RGBD camera mounted on the UR10's end effector. The hand link is yixiuge_ee_link and the eye link is the camera's RGB optical frame, camera_rgb_optical_frame. The ArUco marker used for calibration is 0.2 m in size, with ID 582.

roslaunch yixiuge_ur10_moveit_config yixiuge_ur_moveit.launch

Wait for MoveIt to finish loading before launching the hand-eye calibration package: the DH two-finger gripper takes a while to start, and launching hand_eye_calibration immediately may fail with an error because the MoveIt group cannot be found.

roslaunch robot_sim hand_eye_calibration.launch

Then follow the operations shown in the video to compute the hand-eye matrix. The 17 preset poses are usually not all reachable, so you can drag the arm to other poses with the MoveIt interactive marker in RViz to collect more samples and make the calibration more accurate. Below is the result we computed after collecting 37 samples:

translation: 
  x: -0.0171515439439
  y: 0.129039200607
  z: 0.146263405556
rotation: 
  x: 0.999995982804
  y: 0.00268604823595
  z: 0.000687382040816
  w: 0.000589089946183

The ground-truth hand-eye matrix in the robot URDF file we provide is shown below. The calibration is quite accurate, with an error of about 3 to 4 mm:

translation: 
  x: -0.0175
  y: 0.128
  z: 0.1425
rotation: 
  x: 1
  y: 0
  z: 0
  w: 0
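A quick arithmetic check of the translation error between the two results:

import numpy as np

est = np.array([-0.0171515439439, 0.129039200607, 0.146263405556])  # calibrated
gt  = np.array([-0.0175, 0.128, 0.1425])                            # ground truth from URDF
print(np.linalg.norm(est - gt))                                     # ~0.0039 m, i.e. about 4 mm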

After the hand-eye calibration is finished, click save and the data will be stored in ~/.ros/easy_handeye/easy_handeye_eye_on_hand.yaml. Run the launch file below to view the calibration result:

roslaunch robot_sim hand_eye_calibration_result.launch

Grasping in the Simulation Environment

Two grasping methods are implemented here: one detects grasp points with a traditional geometric method (see the referenced paper), and the other is machine-learning based (GPD).

Geometry-Based Grasping

First load the arm and the grasping environment, then start the grasp:

cd ~/your_catkin_ws/
roslaunch robot_sim geometric_method_grasp.launch 
rosrun robot_sim geometric_method_grasp

The main idea of the method we provide is to first segment the object from the tabletop. The segmentation here simply subtracts a mask from the current camera image and thresholds the difference into a binary image. Since this is a simulation environment this is fairly stable and the segmentation quality is fine, but such a simple approach only works when the environment is fixed; for real use we recommend another segmentation algorithm, for example generating a mask from the depth image to extract the object.

After segmenting the object, the image is filtered and contours are extracted; dilation followed by erosion connects them into closed contours, and the largest contour is selected as the object for the arm to grasp. The centroid of the object is obtained from the contour's first-order moments, assuming a uniform mass distribution. PCA then gives the contour's principal axis and the minor axis perpendicular to it (the contour's second-order moments could be used instead). Next, the minimum-area bounding rectangle of the contour is computed, the two intersections of the minor axis with the rectangle are joined into a line, and the points along this line are traversed to find the two intersections of the minor axis with the contour, which are taken as the grasp points; finally, the grasp rectangle is drawn. See the code for the full implementation; a condensed sketch follows below.
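A condensed Python/OpenCV sketch of this pipeline; the file names and thresholds are placeholder assumptions, and the actual implementation is the C++ node above:

import cv2
import numpy as np

img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)          # hypothetical camera frame
bg = cv2.imread('background.png', cv2.IMREAD_GRAYSCALE)      # hypothetical fixed-scene mask

diff = cv2.absdiff(img, bg)                                  # subtract the background mask
_, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # threshold to a binary image
kernel = np.ones((5, 5), np.uint8)
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # dilate then erode to close contours

contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
cnt = max(contours, key=cv2.contourArea)                     # largest contour = target object

m = cv2.moments(cnt)                                         # first-order moments -> centroid
cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']

pts = cnt.reshape(-1, 2).astype(np.float64)
eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))             # PCA of the contour points
minor_axis = eigvecs[:, 0]                                   # smaller eigenvalue -> minor axis

rect = cv2.minAreaRect(cnt)                                  # minimum-area bounding rectangle
# walk from the centroid along minor_axis to find its two intersections
# with the contour; those two points are the grasp points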

Machine-Learning-Based Grasping

Here we use atenpas' GPD method; see the paper for the details, which are not repeated here. You may run into problems when building the GPD library; if GPD 2.0.0 gives you trouble, we suggest using GPD 1.5.0. We have run this algorithm successfully in our simulation environment. The code we provide uses GPD's sample mode with the Caffe cfg file; when running this part, make sure the path to the cfg file is correct. Also, the segmentation here is distance-based, while a more robust approach in practice is to generate a mask from the RGB image to segment the point cloud.

roslaunch robot_sim GPD_method_grasp.launch
roslaunch robot_sim gpd_run.launch type:=2 topic:=/cloud_sample
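For reference, the distance-based segmentation amounts to cropping the cloud to points near the camera before publishing it on the /cloud_sample topic that gpd_run.launch subscribes to. A minimal numpy sketch of the idea (the threshold is an assumption):

import numpy as np

def crop_by_distance(points, max_dist=1.0):
    # keep only points within max_dist metres of the camera origin;
    # points is an (N, 3) array in the camera frame
    dist = np.linalg.norm(points, axis=1)
    return points[dist < max_dist]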

About the mysterious jitter when the gripper grasps objects in Gazebo

This is caused by the gripper being position-controlled rather than force-controlled. We solve it with gazebo-pkgs, the Gazebo plugin written by JenniferBuehler. The plugin detects contact between the fingers and the object; once certain thresholds are reached, it fixes the object's pose relative to the gripper and disables the object's collision property, which eliminates the jitter.

Data Collection in the Simulation Environment

The main idea is to generate a number of poses around the object, drive the arm to each pose, take pictures, and save the data:

roslaunch robot_sim data_collection.launch
rosrun robot_sim data_collection

The number and placement of the sample points can be adjusted manually; below are 56 sample points.
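As an illustration, one common way to generate such viewpoints is to sample poses on a sphere around the object and point the camera back at it. The radius and angle ranges below are assumptions, chosen so that 8 x 7 = 56 poses come out:

import numpy as np

def sample_viewpoints(center, radius=0.5, n_yaw=8, n_pitch=7):
    # generate camera positions on a partial sphere around the object,
    # each paired with a direction vector pointing back at the object
    poses = []
    for pitch in np.linspace(np.pi / 12, np.pi / 2.5, n_pitch):   # elevation angles
        for yaw in np.linspace(0, 2 * np.pi, n_yaw, endpoint=False):
            offset = radius * np.array([np.cos(pitch) * np.cos(yaw),
                                        np.cos(pitch) * np.sin(yaw),
                                        np.sin(pitch)])
            position = np.asarray(center) + offset
            look_dir = -offset / np.linalg.norm(offset)           # look at the object
            poses.append((position, look_dir))
    return poses

poses = sample_viewpoints([0.5, 0.0, 0.1])   # 8 x 7 = 56 sample points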
