Hi! I'm using a ZED2 and an rslidar-16. I calibrated the ZED2 camera myself with kalibr, then calibrated the left camera against the lidar with [calibration_camera_lidar]. I updated all the relevant parameters in config/params_camera.yaml, config/params_lidar.yaml, config/cutom/params_camera.yaml and config/cutom/params_lidar.yaml, including the camera intrinsics and the imu-camera-lidar extrinsics, but it still crashes at runtime. Where might the problem be?
Error description:
[ INFO] [1660833435.146233507]: ----> Visual Feature Tracker Started.
OpenCV Error: Assertion failed (0 <= _rowRange.start && _rowRange.start <= _rowRange.end && _rowRange.end <= m.rows) in Mat, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/core/src/matrix.cpp, line 483
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/core/src/matrix.cpp:483: error: (-215) 0 <= _rowRange.start && _rowRange.start <= _rowRange.end && _rowRange.end <= m.rows in function Mat
[lvi_sam_visual_feature-7] process has died [pid 6043, exit code -6, cmd /home/shangwei/prac2_ws/devel/lib/lvi_sam/lvi_sam_visual_feature __name:=lvi_sam_visual_feature __log:=/home/shangwei/.ros/log/0af21f40-1e32-11ed-9117-80fa5b3e83f4/lvi_sam_visual_feature-7.log].
log file: /home/shangwei/.ros/log/0af21f40-1e32-11ed-9117-80fa5b3e83f4/lvi_sam_visual_feature-7*.log
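The assertion means a row sub-range was taken past the end of a cv::Mat inside the visual feature tracker. One guess (unconfirmed) is a mismatch between image_width/image_height in params_camera.yaml and the resolution the ZED2 actually streams, since slicing a smaller-than-expected image would trip exactly this check. A throwaway script to compare them (hypothetical helper, not part of LVI-SAM):

```python
import rospy
from sensor_msgs.msg import Image

def cb(msg):
    # compare against image_width/image_height in params_camera.yaml (1280x720)
    print("stream resolution: %dx%d" % (msg.width, msg.height))
    rospy.signal_shutdown("done")

rospy.init_node("check_image_size")
rospy.Subscriber("/zed2/zed_node/left_raw/image_raw_color", Image, cb)
rospy.spin()
```

The two config files as I have them: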
params_camera.yaml
%YAML:1.0
#Project
project_name: "lvi_sam"
#common parameters
imu_topic: "/zed2/zed_node/imu/data_raw"
image_topic: "/zed2/zed_node/left_raw/image_raw_color"
point_cloud_topic: "lvi_sam/lidar/deskew/cloud_deskewed"
#Lidar Params
use_lidar: 1 # whether to use depth info from lidar or not
lidar_skip: 3 # skip this number of scans
align_camera_lidar_estimation: 1 # align camera and lidar estimation for visualization
#lidar to camera extrinsic (see the cross-check sketch after this file)
lidar_to_cam_tx: -0.0553684
lidar_to_cam_ty: -0.081193
lidar_to_cam_tz: -0.0244271
lidar_to_cam_rx: -1.22575643
lidar_to_cam_ry: 1.20940651
lidar_to_cam_rz: -1.21497081
#imu to lidar extrinsic
imu_to_lidar_tx: -2.59153
imu_to_lidar_ty: 0.0203358
imu_to_lidar_tz: -0.107606
imu_to_lidar_rx: -1.22046289
imu_to_lidar_ry: -1.23098981
imu_to_lidar_rz: -1.19244752
#camera model
model_type: PINHOLE
camera_name: camera
image_width: 1280
image_height: 720
distortion_parameters:
k1: 0.0497547
k2: 0.0217677
p1: -0.000146471
p2: -0.000630983
projection_parameters:
fx: 534.86
fy: 534.645
cx: 634.875
cy: 356.9945
#imu parameters   The more accurate parameters you provide, the better performance
acc_n: 1.4e-03 # accelerometer measurement noise standard deviation.
gyr_n: 8.6e-05 # gyroscope measurement noise standard deviation.
acc_w: 8.0e-05 # accelerometer bias random walk noise standard deviation.
gyr_w: 2.2e-06 # gyroscope bias random walk noise standard deviation.
g_norm: 9.805 #
imu_hz: 400 # frequency of imu
#Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0 # 0 Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
# 1 Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
# 2 Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate it. Do some rotation movement at the beginning.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ -0.00948391, 0.01912883, 0.99977205,
-0.99985211, 0.01416312, -0.00975565,
-0.0143465, -0.99971671, 0.01899168]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
rows: 3
cols: 1
dt: d
data: [-0.00477194, 0.02212476, 0.05040654]
#feature tracker parameters
max_cnt: 150 # max feature number in feature tracking
min_dist: 20 # min distance between two features
freq: 20 # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0 # ransac threshold (pixel)
show_track: 1 # publish tracking image as topic
equalize: 1 # if the image is too dark or too bright, turn on equalize to find enough features
fisheye: 0 # if using a fisheye camera, turn this on. A circle mask will be loaded to remove noisy edge points
#optimization parameters
max_solver_time: 0.035 # max solver iteration time (ms), to guarantee real time
max_num_iterations: 10 # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)
#unsynchronization parameters
estimate_td: 0 # online estimate time offset between camera and imu
td: -0.004492987156480011 # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)
#rolling shutter parameters
rolling_shutter: 0 # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0 # unit: s. rolling shutter read out time per frame (from data sheet).
#loop closure parameters
loop_closure: 1 # start loop closure
skip_time: 0.0
skip_dist: 0.0
debug_image: 0 # save raw images in the loop detector for visualization purposes; disable by setting to 0
match_image_scale: 0.5
vocabulary_file: "/config/brief_k10L6.bin"
brief_pattern_file: "/config/brief_pattern.yml"
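To rule out a file-parsing or calibration-format problem, here is a quick check on this file (a sketch: it assumes the YAML sits at config/params_camera.yaml relative to the current directory; cv2.FileStorage reads the same OpenCV-YAML dialect the node uses):

```python
import cv2
import numpy as np

# If this load throws, the on-disk YAML is malformed for cv::FileStorage
# (e.g. a section header lost its leading '#').
fs = cv2.FileStorage("config/params_camera.yaml", cv2.FILE_STORAGE_READ)
R = fs.getNode("extrinsicRotation").mat()    # imu^R_cam, 3x3
t = fs.getNode("extrinsicTranslation").mat() # imu^T_cam, 3x1
fs.release()

# a valid rotation matrix satisfies R^T R = I and det(R) = +1
print(np.allclose(R.T @ R, np.eye(3), atol=1e-3), float(np.linalg.det(R)))
```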
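The lidar-camera and imu-lidar extrinsics above are Euler triplets, while extrinsicRotation gives imu^R_cam directly, so the three should be mutually consistent. A minimal cross-check, assuming (big assumption!) fixed-axis 'xyz' Euler angles in radians and the composition imu^R_cam = imu^R_lidar · lidar^R_cam; if LVI-SAM uses a different convention the comparison is meaningless:

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

# triplets copied from params_camera.yaml above (assumed 'xyz' order, radians)
lidar_R_cam = Rot.from_euler('xyz', [-1.22575643, 1.20940651, -1.21497081]).as_matrix()
imu_R_lidar = Rot.from_euler('xyz', [-1.22046289, -1.23098981, -1.19244752]).as_matrix()
imu_R_cam = np.array([[-0.00948391,  0.01912883,  0.99977205],
                      [-0.99985211,  0.01416312, -0.00975565],
                      [-0.0143465,  -0.99971671,  0.01899168]])  # extrinsicRotation
# residual should be near zero if the three calibrations agree
print(np.round(imu_R_lidar @ lidar_R_cam - imu_R_cam, 3))
```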
params_lidar.yaml
#project name
PROJECT_NAME: "lvi_sam"
lvi_sam:
# Topics
pointCloudTopic: "/velodyne_points" # Point cloud data
imuTopic: "/zed2/zed_node/imu/data_raw" # IMU data
# Heading
useImuHeadingInitialization: false # if using GPS data, set to "true"
# Export settings
savePCD: false # https://github.com/TixiaoShan/LIO-SAM/issues/3
savePCDDirectory: "/software/LVI-SAM/picresults/" # in your home folder, starts and ends with "/". Warning: the code deletes "LOAM" folder then recreates it. See "mapOptimization" for implementation
# Sensor Settings
N_SCAN: 16 # number of lidar channel (i.e., 16, 32, 64, 128)
Horizon_SCAN: 1800 # lidar horizontal resolution (Velodyne:1800, Ouster:512,1024,2048)
ang_y: 1.0 # vertical angular resolution: angle spanned by the beams vertically / number of scan lines (ang / N_SCAN)
timeField: "time" # point timestamp field, Velodyne - "time", Ouster - "t"
downsampleRate: 1 # default: 1. Downsample your data if too many points. i.e., 16 = 64 / 4, 16 = 16 / 1
# IMU Settings
imuAccNoise: 1.4e-03
imuGyrNoise: 8.6e-05
imuAccBiasN: 8.0e-05
imuGyrBiasN: 2.2e-06
imuGravity: 9.80511
imuHz: 400
# Extrinsics (IMU -> lidar) (see the sanity-check sketch after this file)
extrinsicTrans: [-2.59153, 0.0203358, -0.107606]
extrinsicRot: [0.218016, -1.18145, -53.0919,
               -0.996897, -0.132405, 0.763035,
               -0.109347, -1.00026, -0.504798]
extrinsicRPY: [0.218016, -1.18145, -53.0919,
               -0.996897, -0.132405, 0.763035,
               -0.109347, -1.00026, -0.504798]
# LOAM feature threshold
edgeThreshold: 1.0
surfThreshold: 0.1
edgeFeatureMinValidNum: 10
surfFeatureMinValidNum: 100
# voxel filter params
odometrySurfLeafSize: 0.4 # default: 0.4
mappingCornerLeafSize: 0.2 # default: 0.2
mappingSurfLeafSize: 0.4 # default: 0.4
# robot motion constraint (in case you are using a 2D robot)
z_tollerance: 1000 # meters
rotation_tollerance: 1000 # radians
# CPU Params
numberOfCores: 4 # number of cores for mapping optimization
mappingProcessInterval: 0.15 # seconds, regulate mapping frequency
# Surrounding map
surroundingkeyframeAddingDistThreshold: 1.0 # meters, regulate keyframe adding threshold
surroundingkeyframeAddingAngleThreshold: 0.2 # radians, regulate keyframe adding threshold
surroundingKeyframeDensity: 2.0 # meters, downsample surrounding keyframe poses
surroundingKeyframeSearchRadius: 50.0 # meters, within n meters scan-to-map optimization (when loop closure disabled)
# Loop closure
loopClosureEnableFlag: true
surroundingKeyframeSize: 25 # submap size (when loop closure enabled)
historyKeyframeSearchRadius: 20.0 # meters, key frames within n meters of the current pose will be considered for loop closure
historyKeyframeSearchTimeDiff: 30.0 # seconds, key frame that is n seconds older will be considered for loop closure
historyKeyframeSearchNum: 25 # number of history key frames to be fused into a submap for loop closure
historyKeyframeFitnessScore: 0.3 # icp threshold, the smaller the better alignment
# Visualization
globalMapVisualizationSearchRadius: 1000.0 # meters, global map visualization radius
globalMapVisualizationPoseDensity: 10.0 # meters, global map visualization keyframe density
globalMapVisualizationLeafSize: 1.0 # meters, global map visualization cloud density
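One thing that stands out in this file: every entry of a 3x3 rotation matrix must lie in [-1, 1], with orthonormal rows and determinant +1, so the -53.0919 in extrinsicRot/extrinsicRPY cannot be part of a valid rotation matrix; it may be a copy-paste or calibration-export error. A quick numeric check:

```python
import numpy as np

# extrinsicRot exactly as pasted above
R = np.array([[ 0.218016, -1.18145,  -53.0919 ],
              [-0.996897, -0.132405,   0.763035],
              [-0.109347, -1.00026,   -0.504798]])
print(np.abs(R).max() <= 1.0)                      # False: entry out of range
print(np.allclose(R.T @ R, np.eye(3), atol=1e-3))  # False: rows not orthonormal
print(float(np.linalg.det(R)))                     # nowhere near +1
```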
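Also, the lidar is an rslidar-16 but pointCloudTopic is "/velodyne_points" and timeField is "time": LVI-SAM's deskew step needs per-point ring and timestamp fields, and RoboSense drivers don't always publish them (or name them differently). A quick way to inspect what the driver actually outputs (adjust the topic name to whatever the rslidar driver publishes):

```python
import rospy
from sensor_msgs.msg import PointCloud2

def cb(msg):
    # expect something like ['x', 'y', 'z', 'intensity', 'ring', 'time']
    print([f.name for f in msg.fields])
    rospy.signal_shutdown("done")

rospy.init_node("check_cloud_fields")
rospy.Subscriber("/velodyne_points", PointCloud2, cb)
rospy.spin()
```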