After running the rtabmap algorithm in simulation, this post continues the series with another SLAM package. The takeaway from the earlier experiments is that simulating a SLAM algorithm in gazebo comes down to ROS's publish/subscribe mechanism: the topics gazebo publishes must match the topics the algorithm subscribes to, both in name and in message type. With that in mind, the next experiment is ORB_SLAM2.
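Before wiring anything up, it helps to confirm what each side expects. Standard rostopic commands can inspect both the name and the message type of any topic (the topic below is the one used later in this post):
rostopic list                      # every topic currently advertised
rostopic type /camera/image_raw    # should print sensor_msgs/Image
rostopic info /camera/image_raw    # shows publishers and subscribers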
Installing and configuring ORB_SLAM2
The main steps simply follow the official repository: https://github.com/raulmur/ORB_SLAM2
There are two big pitfalls along the way; the reference link https://blog.csdn.net/li528405176/article/details/81164637 resolves both.
Running ORB_SLAM2 in a gazebo world
Now to the main topic. Since ORB_SLAM2 claims to support monocular, stereo, and RGB-D cameras, the monocular experiment comes first.
Mono camera experiment
The official README (linked above) gives the most important piece of information, the topic the node subscribes to: for the Mono node it is "/camera/image_raw". The existing robot is therefore modified and saved as a new model: robot_monoCam_imu.xacro and robot_monoCam_imu.gazebo.
The robot model is built as in the first post of this series; only the camera plugin changes, so it is enough to swap the camera plugin in the gazebo file:
<!-- mono camera: publishes sensor_msgs/Image on /camera/image_raw -->
<gazebo reference="camera_link">
  <sensor type="camera" name="camera1">
    <update_rate>30.0</update_rate>
    <camera name="head">
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>800</width>
        <height>800</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.02</near>
        <far>300</far>
      </clip>
      <noise>
        <type>gaussian</type>
        <mean>0.0</mean>
        <stddev>0.007</stddev>
      </noise>
    </camera>
    <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
      <alwaysOn>true</alwaysOn>
      <!-- 0.0 means: inherit the update rate of the parent sensor -->
      <updateRate>0.0</updateRate>
      <cameraName>camera</cameraName>
      <imageTopicName>image_raw</imageTopicName>
      <cameraInfoTopicName>camera_info</cameraInfoTopicName>
      <frameName>camera_link</frameName>
      <hackBaseline>0.07</hackBaseline>
      <distortionK1>0.0</distortionK1>
      <distortionK2>0.0</distortionK2>
      <distortionK3>0.0</distortionK3>
      <distortionT1>0.0</distortionT1>
      <distortionT2>0.0</distortionT2>
    </plugin>
  </sensor>
</gazebo>
This is the gazebo plugin for the monocular camera; simply replace the original depth-camera plugin with it. The model also carries one additional sensor, an IMU (the original plan was to simulate VINS-Mono, but that is shelved for now due to an unresolved issue):
<!-- imu: publishes sensor_msgs/Imu on /imu0 -->
<gazebo reference="imu_link">
  <gravity>true</gravity>
  <sensor name="imu_sensor" type="imu">
    <always_on>true</always_on>
    <update_rate>100</update_rate>
    <visualize>true</visualize>
    <topic>__default_topic__</topic>
    <plugin filename="libgazebo_ros_imu_sensor.so" name="imu_plugin">
      <topicName>imu0</topicName>
      <bodyName>imu_link</bodyName>
      <updateRateHZ>100.0</updateRateHZ>
      <gaussianNoise>0.0</gaussianNoise>
      <xyzOffset>0 0 0</xyzOffset>
      <rpyOffset>0 0 0</rpyOffset>
      <frameName>imu_link</frameName>
    </plugin>
    <pose>0 0 0 0 0 0</pose>
  </sensor>
</gazebo>
Note that the full topic name published by the camera is cameraName plus the per-topic name, i.e. /camera/image_raw here (alternatively, leave cameraName empty and spell out the full topic name in imageTopicName).
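With the simulation launched, both sensors can be sanity-checked in a terminal (plain rostopic usage; the expected rates follow from the plugin settings above):
rostopic hz /camera/image_raw          # should be close to the 30 Hz sensor rate
rostopic hz /imu0                      # should be close to 100 Hz
rosrun rqt_image_view rqt_image_view   # optional: view /camera/image_raw visually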
That completes the robot configuration; launch it with the same command as before:
roslaunch robot1_gazebo gazebo_wg.launch model:="`rospack find robot1_description`/urdf/robot_monoCam_imu.xacro" world:="custom_world.launch"
Then start the ORB_SLAM2 node:
rosrun ORB_SLAM2 Mono ~/dev/catkin_ws/src/ORB_SLAM2-master/Vocabulary/ORBvoc.txt ~/dev/catkin_ws/src/ORB_SLAM2-master/Examples/Monocular/TUM3.yaml
A word of explanation: ORBvoc.txt is the ORB vocabulary (a bag-of-words dictionary) generated by the ORB_SLAM2 authors; it works well both indoors and outdoors, so it is normally left unchanged. TUM3.yaml is the camera configuration file (focal length, distortion parameters, and so on), which should be set from a real camera calibration; here the file shipped with the package is used as-is.
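Whether the node is actually wired to the simulated camera can also be checked from the node side (the node name Mono below assumes the stock example, which initializes its ROS node under that name):
rosnode info /Mono   # Subscriptions should list /camera/image_raw [sensor_msgs/Image]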
Result:
The viewer keeps showing that it is trying to initialize. Suspected causes (a sanity check for the first guess follows the list):
- The parameters set in the camera yaml file are inappropriate. In a gazebo world the hardware is simulated, so few parameters are tunable and no calibration should be needed; but the yaml still has to match what the simulated camera actually produces.
- The gazebo world has too few features; by default, initialization only succeeds with at least 100 features.
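For the first guess, the intrinsics of the simulated camera do not need calibrating; they follow from the plugin settings, assuming gazebo's usual pinhole model with the principal point at the image center:

fx = width / (2 · tan(horizontal_fov / 2)) = 800 / (2 · tan(0.6981317)) ≈ 476.7
fy = fx (square pixels),  cx = cy ≈ 400.5

TUM3.yaml instead carries the intrinsics of a real TUM camera, so that mismatch alone could explain a failed initialization. The second guess could be probed by raising ORBextractor.nFeatures or lowering the FAST thresholds in the same yaml.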
RGB-D camera experiment
The steps are very similar. After finding the topics the algorithm subscribes to on the official site, and since rtabmap already used the same depth (RGB-D) camera, the robot model is left unchanged and the algorithm's topics are remapped instead.
Create a new launch file, orbslam2_rgbd.launch:
<launch>
  <arg name="PATH_TO_VOCABULARY" value="$(find ORB_SLAM2)/vocabulary_files/ORBvoc.txt"/>
  <arg name="PATH_TO_SETTINGS_FILE" value="$(find ORB_SLAM2)/setting_files/Zdzn.yaml"/>
  <node name="RGBD" pkg="ORB_SLAM2" type="RGBD" args="$(arg PATH_TO_VOCABULARY) $(arg PATH_TO_SETTINGS_FILE)">
    <remap from="/camera/depth_registered/image_raw" to="/depth/image_raw" />
    <remap from="/camera/rgb/image_raw" to="/rgb/image_raw" />
  </node>
</launch>
The remap tag does the renaming: the topics the node would subscribe to, "/camera/depth_registered/image_raw" and "/camera/rgb/image_raw", are redirected to the topic names the robot actually publishes.
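Once the node is running, the remap can be verified from the subscriber lists (plain rostopic usage; the topic names are those defined in the robot model):
rostopic info /rgb/image_raw     # the RGBD node should appear under Subscribers
rostopic info /depth/image_raw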
Launch command:
roslaunch ORB_SLAM2 orbslam2_rgbd.launch
Note that the launch file has to live inside the corresponding ROS package before roslaunch can find it; here that path is /home/lxq/dev/catkin_ws/src/ORB_SLAM2-master/Examples/ROS/ORB_SLAM2
Result:
Video link: http://www.iqiyi.com/w_19s9zc5gu9.html
Stereo camera experiment
As before, build the robot model first; since this is a stereo camera, the camera in the gazebo file has to be replaced with a stereo pair:
<!-- stereo camera: publishes /camera/left/image_raw and /camera/right/image_raw -->
<gazebo reference="camera_link">
  <sensor type="multicamera" name="stereo_camera">
    <update_rate>30.0</update_rate>
    <camera name="left">
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>800</width>
        <height>800</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.02</near>
        <far>300</far>
      </clip>
      <noise>
        <type>gaussian</type>
        <mean>0.0</mean>
        <stddev>0.007</stddev>
      </noise>
    </camera>
    <camera name="right">
      <!-- right lens offset by the 0.07 m baseline -->
      <pose>0 -0.07 0 0 0 0</pose>
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>800</width>
        <height>800</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.02</near>
        <far>300</far>
      </clip>
      <noise>
        <type>gaussian</type>
        <mean>0.0</mean>
        <stddev>0.007</stddev>
      </noise>
    </camera>
    <plugin name="stereo_camera_controller" filename="libgazebo_ros_multicamera.so">
      <alwaysOn>true</alwaysOn>
      <updateRate>0.0</updateRate>
      <cameraName>camera</cameraName>
      <imageTopicName>image_raw</imageTopicName>
      <cameraInfoTopicName>camera_info</cameraInfoTopicName>
      <frameName>camera_link</frameName>
      <hackBaseline>0.07</hackBaseline>
      <distortionK1>0.0</distortionK1>
      <distortionK2>0.0</distortionK2>
      <distortionK3>0.0</distortionK3>
      <distortionT1>0.0</distortionT1>
      <distortionT2>0.0</distortionT2>
    </plugin>
  </sensor>
</gazebo>
Then check on the official site which topics the node subscribes to and make the plugin publish exactly those (a quick check of the resulting names follows).
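A quick check that the names line up, assuming the Stereo example node subscribes to the per-lens topics named in the official README:
rostopic info /camera/left/image_raw    # published by gazebo, subscribed by the Stereo node
rostopic info /camera/right/image_raw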
The camera parameter file was also modified and saved as Stereo_setting.yaml:
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 277.19135641132203
Camera.fy: 277.19135641132203
Camera.cx: 160.5
Camera.cy: 120.5
Camera.k1: 0.0
Camera.k2: 0.0
Camera.k3: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.width: 752
Camera.height: 480
# Camera frames per second
Camera.fps: 20.0
# stereo baseline times fx
Camera.bf: 47.90639384423901
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
# Close/Far threshold. Baseline times.
ThDepth: 35
#--------------------------------------------------------------------------------------------
# Stereo Rectification. Only if you need to pre-rectify the images.
# Camera.fx, .fy, etc must be the same as in LEFT.P
#--------------------------------------------------------------------------------------------
LEFT.height: 480
LEFT.width: 752
LEFT.D: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [-0.28340811, 0.07395907, 0.00019359, 1.76187114e-05, 0.0]
LEFT.K: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [458.654, 0.0, 367.215, 0.0, 457.296, 248.375, 0.0, 0.0, 1.0]
LEFT.R: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.999966347530033, -0.001422739138722922, 0.008079580483432283, 0.001365741834644127, 0.9999741760894847, 0.007055629199258132, -0.008089410156878961, -0.007044357138835809, 0.9999424675829176]
LEFT.P: !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [435.2046959714599, 0, 367.4517211914062, 0, 0, 435.2046959714599, 252.2008514404297, 0, 0, 0, 1, 0]
RIGHT.height: 480
RIGHT.width: 752
RIGHT.D: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [-0.28368365, 0.07451284, -0.00010473, -3.555907e-05, 0.0]
RIGHT.K: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [457.587, 0.0, 379.999, 0.0, 456.134, 255.238, 0.0, 0.0, 1]
RIGHT.R: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.9999633526194376, -0.003625811871560086, 0.007755443660172947, 0.003680398547259526, 0.9999684752771629, -0.007035845251224894, -0.007729688520722713, 0.007064130529506649, 0.999945173484644]
RIGHT.P: !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [435.2046959714599, 0, 367.4517211914062, -47.90639384423901, 0, 435.2046959714599, 252.2008514404297, 0, 0, 0, 1, 0]
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1200
# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
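Two hedged observations on this file. First, Camera.bf is defined as the stereo baseline times fx; with the 0.07 m baseline configured in the plugin and the fx given above, that would give

Camera.bf = 0.07 × 277.19 ≈ 19.40

so the 47.906 kept here, like the LEFT./RIGHT. blocks (which appear to be carried over from the EuRoC example file), is an inherited value rather than one derived from the simulated rig. Second, in the stock ROS stereo example those LEFT./RIGHT. rectification blocks are only read when the rectification flag passed to the node is true, which is exactly the flag discussed below.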
Write the launch file, orbslam2_stereo.launch:
<launch>
  <arg name="PATH_TO_VOCABULARY" value="$(find ORB_SLAM2)/vocabulary_files/ORBvoc.txt"/>
  <arg name="PATH_TO_SETTINGS_FILE" value="$(find ORB_SLAM2)/setting_files/Stereo_setting.yaml"/>
  <node name="Stereo" pkg="ORB_SLAM2" type="Stereo" args="$(arg PATH_TO_VOCABULARY) $(arg PATH_TO_SETTINGS_FILE) false">
  </node>
</launch>
One thing to note about launching the ORB_SLAM2 Stereo node: the trailing false indicates whether stereo rectification is required. Passing true here actually produced severe distortion, so presumably a simulated camera lens needs neither calibration nor rectification.
Launch command:
roslaunch ORB_SLAM2 orbslam2_stereo.launch
Result:
Video link: http://www.iqiyi.com/w_19s9yursvx.html