CN112130555A - Self-propelled robot and system based on laser navigation radar and computer vision perception fusion - Google Patents

Self-propelled robot and system based on laser navigation radar and computer vision perception fusion

Info

Publication number
CN112130555A
Authority
CN
China
Prior art keywords
module
panoramic camera
camera module
robot
laser navigation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010519604.2A
Other languages
Chinese (zh)
Other versions
CN112130555B (en)
Inventor
余正泓
郑婉君
马子乾
朱雪斌
黎红源
邹子平
张又丹
陈素锦
黄梦彩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Institute of Science and Technology
Original Assignee
Guangdong Institute of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Institute of Science and Technology
Priority to CN202010519604.2A
Publication of CN112130555A
Application granted
Publication of CN112130555B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a self-propelled robot and system based on the fusion of laser navigation radar and computer vision perception. The robot comprises a robot body, a lidar, a vertically arranged telescopic bracket, and a panoramic camera module mounted at the top of the telescopic bracket. The telescopic bracket is connected to the panoramic camera module by a revolute joint, and a vector angle sensor is arranged between them to measure the angle and direction through which the panoramic camera module rotates relative to the bracket. The panoramic camera module sits higher above the ground than the lidar. The telescopic bracket automatically adjusts the height of the panoramic camera module above the ground and provides a fixed reference for the vector angle sensor. The lidar measures the distance between the robot body and surrounding obstacles, while the panoramic camera module captures a first color image of those obstacles.

Description

Self-propelled robot and system based on the fusion of laser navigation radar and computer vision perception

Technical Field

The invention relates to the field of robot technology, and in particular to a self-propelled robot and system based on the fusion of laser navigation radar and computer vision perception.

Background

At present, self-propelled robots generally navigate by magnetic strip or by lidar. Magnetic-strip navigation is not only costly but also constrained by the environment, with many limiting factors. Lidar, in turn, cannot capture information about the environment in additional dimensions and is only suited to measuring the distance to specific targets, so a lidar-navigated self-propelled robot can operate only in open areas with simple terrain, and its application scenarios are severely limited.

For example, as shown in FIG. 1, consider a circular fence formed by a plurality of fixed barriers 2 and one movable barrier 3, where the movable barrier 3 differs in color from the fixed barriers 2, or the word "exit" is written directly on the movable barrier 3. If a lidar-navigated self-propelled robot 1 is placed inside, the radar data it acquires will show only a circular fence; it cannot obtain the information that the movable barrier 3 is the exit, and therefore cannot determine the correct route out of the fence.

As another example, shown in FIG. 2, a number of fixed barriers 2 are placed in different arrangements on two paths (path 1 and path 2) separated by a median strip 4, forming passable routes of different difficulty. The fixed barriers 2 at the entrances of path 1 and path 2 are arranged identically. If the lidar-navigated self-propelled robot 1 is placed at the junction and asked to pass through, the obstacles it perceives ahead, and their apparent difficulty, will be the same for both paths, so it cannot select the better route, even though path 2 is clearly better than path 1.

Summary of the Invention

Based on the above, a self-propelled robot suitable for more complex environments is provided, together with a system suitable for the robot.

In addition, a control method for the above self-propelled robot is also provided.

A self-propelled robot based on the fusion of laser navigation radar and computer vision perception comprises a robot body for walking motion, a lidar arranged on the robot body, a first telescopic bracket fixed vertically on the robot body, and a first panoramic camera module arranged at the top of the first telescopic bracket. The first telescopic bracket is connected to the first panoramic camera module by a revolute joint, and a first vector angle sensor is arranged between them. The first vector angle sensor measures the angle and direction through which the first panoramic camera module rotates relative to the first telescopic bracket, so that the deviation between an object's bearing in the picture captured by the first panoramic camera module and that object's bearing as measured by the lidar, caused by the rotation of the camera module relative to the bracket, can be corrected, making the two bearings of the same object agree. The first panoramic camera module sits higher above the ground than the lidar. The first telescopic bracket automatically adjusts the height of the first panoramic camera module above the ground and provides a first fixed reference for the first vector angle sensor. The lidar measures the distance between the robot body and surrounding obstacles. The first panoramic camera module captures a first color image of the obstacles around the robot body to assist in judging their physical and/or chemical properties.
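
The bearing correction performed with the first vector angle sensor amounts to subtracting the measured camera rotation from each object's bearing in the image, so that the camera and the lidar share one reference frame. Below is a minimal sketch of that correction in Python; the function and parameter names are illustrative assumptions, not taken from the patent.

```python
def corrected_bearing(image_bearing_deg: float,
                      sensor_angle_deg: float,
                      sensor_direction: int) -> float:
    """Map an object's bearing in the panoramic image into the lidar frame.

    image_bearing_deg: bearing of the object in the camera picture.
    sensor_angle_deg:  rotation magnitude reported by the vector angle sensor.
    sensor_direction:  +1 or -1, the rotation direction reported by the sensor.
    """
    # Undo the camera module's rotation relative to the telescopic bracket,
    # which shares the lidar's fixed reference.
    return (image_bearing_deg - sensor_direction * sensor_angle_deg) % 360.0

# Example: the camera module has drifted 15 degrees; an object seen at
# 90 degrees in the image then lies at 75 degrees in the lidar frame.
print(corrected_bearing(90.0, 15.0, +1))  # 75.0
```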

In one embodiment, the first panoramic camera module is connected to the first telescopic bracket through a first camera stabilizer, which is joined to the bracket by a revolute joint. The first panoramic camera module comprises a first lens module, a first lens support rod, and a first gravity block. The first lens support rod is fixed perpendicular to the camera mount of the first camera stabilizer, and the first lens module is arranged at the top of the rod. The first gravity block is arranged on the back of the camera mount to lower the center of gravity of the first panoramic camera module and to increase the torque of the camera mount relative to the first camera stabilizer, keeping the first lens support rod vertical. The first vector angle sensor is arranged between the first telescopic bracket and the first camera stabilizer to measure the angle and direction through which the stabilizer rotates relative to the bracket, so that the rotation-induced deviation between an object's bearing in the camera picture and its bearing as measured by the lidar can be corrected, making the two bearings of the same object agree.

In one embodiment, the first lens module comprises a first wide-angle lens, a second wide-angle lens, and a third wide-angle lens, each covering a shooting angle of 120 degrees, so that stitching their images together yields a 360-degree panoramic camera.
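
In the simplest case, stitching three 120-degree views into a panorama reduces to laying the views side by side and mapping each output column to an azimuth. The sketch below illustrates that idea under the assumption of ideal, distortion-free lenses with no overlap; a real stitcher would also need lens calibration and seam blending.

```python
import numpy as np

def stitch_panorama(views: list) -> np.ndarray:
    """Concatenate three 120-degree views (H x W x 3 arrays) into one
    360-degree strip, assuming ideal lenses with no overlap."""
    assert len(views) == 3, "expected three 120-degree views"
    return np.concatenate(views, axis=1)

def column_to_azimuth(col: int, pano_width: int) -> float:
    """Azimuth in degrees of a pixel column in the stitched panorama."""
    return (col / pano_width) * 360.0

# Three dummy 480x640 views stand in for the three wide-angle lenses.
views = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
pano = stitch_panorama(views)                 # 480 x 1920 panorama
print(column_to_azimuth(960, pano.shape[1]))  # 180.0 degrees
```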

In one embodiment, the robot further comprises a second telescopic bracket and a second panoramic camera module arranged at its top. The second telescopic bracket is connected to the second panoramic camera module by a revolute joint, with a second vector angle sensor arranged between them. The second vector angle sensor measures the angle and direction through which the second panoramic camera module rotates relative to the second telescopic bracket, so that the rotation-induced deviation between an object's bearing in the picture captured by the module and its bearing as measured by the lidar can be corrected, making the two bearings of the same object agree. The second telescopic bracket is arranged vertically and fixed to the robot body. The second panoramic camera module sits higher above the ground than the lidar but lower than the first panoramic camera module. The second telescopic bracket automatically adjusts the height of the second panoramic camera module above the ground and provides a second fixed reference for the second vector angle sensor; this second reference is the same as the first. The horizontal distance between the second and first panoramic camera modules is greater than zero, as is the distance between the second and first telescopic brackets. The second panoramic camera module captures a second color image of the obstacles around the robot body, which is combined with the first color image to synthesize a three-dimensional image. In this way up to six three-dimensional images can be synthesized, and up to five faces of an object can be seen.
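
Because the two panoramic modules sit at different heights, each pair of views forms a stereo rig with a vertical baseline, so an obstacle's distance can in principle be triangulated from the vertical disparity between the two images. The patent does not spell out the synthesis method; the following is only a sketch of one plausible pinhole-model triangulation, with all parameter names assumed.

```python
def depth_from_vertical_disparity(baseline_m: float,
                                  focal_px: float,
                                  disparity_px: float) -> float:
    """Triangulate obstacle distance from two vertically separated cameras.

    baseline_m:   vertical distance between the two camera modules.
    focal_px:     focal length in pixels (assumed pinhole model).
    disparity_px: vertical pixel offset of the same obstacle between
                  the upper and lower images.
    """
    if disparity_px <= 0:
        raise ValueError("obstacle must show a positive disparity")
    return baseline_m * focal_px / disparity_px

# Example: a 0.5 m baseline, 800 px focal length and 40 px disparity
# place the obstacle about 10 m away.
print(depth_from_vertical_disparity(0.5, 800.0, 40.0))  # 10.0
```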

In one embodiment, the second panoramic camera module is connected to the second telescopic bracket through a second camera stabilizer, which is joined to the bracket by a revolute joint. The second panoramic camera module comprises a second lens module, a second lens support rod, and a second gravity block. The second lens support rod is fixed perpendicular to the camera mount of the second camera stabilizer, and the second lens module is arranged at the top of the rod. The second gravity block is arranged on the back of the camera mount to lower the center of gravity of the second panoramic camera module and to increase the torque of the camera mount relative to the second camera stabilizer, keeping the second lens support rod vertical. The second vector angle sensor is arranged between the second telescopic bracket and the second camera stabilizer to measure the angle and direction through which the stabilizer rotates relative to the bracket, so that the rotation-induced deviation between an object's bearing in the camera picture and its bearing as measured by the lidar can be corrected, making the two bearings of the same object agree.

In one embodiment, the second lens module comprises a fourth wide-angle lens, a fifth wide-angle lens, and a sixth wide-angle lens, each covering a shooting angle of 120 degrees, so that stitching their images together yields a 360-degree panoramic camera.

In one embodiment, the robot further comprises an audio receiver, a mechanical arm arranged on the robot body, and a hammer arranged at the end of the mechanical arm for striking. The audio receiver picks up the sound of the hammer striking an object.

In one embodiment, the robot further comprises an audio receiver, an end motor, and a mechanical arm, a mechanical arm mount, and a third telescopic bracket arranged on the robot body. The mechanical arm is connected to the robot body through the mechanical arm mount. The arm is a telescopic straight rod, joined to the mount by a revolute joint and to the third telescopic bracket by another revolute joint, so that extending and retracting the third telescopic bracket swings the arm about its mount. An electronically controlled ejection hammer module is arranged at the end of the arm, connected to it by a revolute joint and rotated relative to the arm by the end motor. The audio receiver is mounted on the ejection hammer module with its receiving face pointed in the direction in which the hammer strikes.

In one embodiment, the robot further comprises an audio receiver and an ultrasonic transmitter. The ultrasonic transmitter emits ultrasonic waves, and the audio receiver receives the waves reflected back from obstacles, providing a navigation capability in rainy, foggy, and similar weather.
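
Ultrasonic ranging of this kind is a time-of-flight measurement: the echo delay multiplied by the speed of sound, halved for the round trip, gives the obstacle distance. A minimal sketch, assuming the speed of sound in air at about 20 °C:

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C; an assumption

def ultrasonic_distance_m(echo_delay_s: float) -> float:
    """Distance to the reflecting obstacle from the round-trip echo delay."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

# A 29 ms round trip corresponds to roughly 5 m.
print(ultrasonic_distance_m(0.029))  # ~4.97
```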

In one embodiment, the robot further comprises a first communication module and a second communication module. The first communication module connects to the Internet for remote control and data maintenance; the second communication module pairs with other robots in the same area for real-time communication, allowing the robots to learn from one another.

In one embodiment, the robot further comprises an anemometer module arranged on the robot body to measure the wind speed at the robot's location. Because the panoramic camera modules in this scheme are erected on retractable brackets and normally sit high up, excessive wind would create too large a torque arm. The anemometer module therefore allows the brackets carrying the camera modules to be retracted whenever the measured wind speed exceeds a set value, reducing the torque arm.
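
The wind-speed safeguard is a threshold rule: retract the bracket when the anemometer reading exceeds the set value, and extend it again once the wind dies down. The sketch below adds a small hysteresis band so the bracket does not oscillate near the threshold; the numeric values and names are illustrative assumptions.

```python
RETRACT_ABOVE_M_S = 8.0  # assumed set value for retracting the bracket
EXTEND_BELOW_M_S = 6.0   # lower re-extend threshold to avoid oscillation

def bracket_should_be_extended(wind_speed_m_s: float, extended: bool) -> bool:
    """Decide whether the telescopic bracket should stay extended."""
    if extended and wind_speed_m_s > RETRACT_ABOVE_M_S:
        return False  # wind too strong: shorten the bracket, reduce torque arm
    if not extended and wind_speed_m_s < EXTEND_BELOW_M_S:
        return True   # calm enough: raise the camera module again
    return extended   # otherwise keep the current state

print(bracket_should_be_extended(9.5, True))   # False -> retract
print(bracket_should_be_extended(5.0, False))  # True  -> extend
```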

In one embodiment, the robot further comprises a wind vane module arranged on the robot body. The wind vane module measures the wind direction at the location of the robot body. With this arrangement, the posture of the robot body can be adjusted according to the wind direction so that the robot travels with minimum wind resistance.

In one embodiment, a first lidar, a second lidar, a third lidar, and a fourth lidar are arranged in sequence around the robot body to measure the distances between the robot and the obstacles on each side.

By mounting panoramic camera modules on vertically arranged retractable brackets, the self-propelled robot provided above can acquire color or text information that a lidar cannot, which assists in judging the physical and/or chemical properties of surrounding obstacles, deepens the robot's understanding of its environment, and broadens its range of applicable scenarios.

Based on the above, the present application further provides a self-propelled robot system based on the fusion of laser navigation radar and computer vision perception.

A self-propelled robot system based on the fusion of laser navigation radar and computer vision perception comprises a central processing unit, an image data processing module, a laser navigation radar data processing module, a first panoramic camera module, a first vector angle sensor, and a laser navigation radar module. The image data processing module and the laser navigation radar data processing module are connected to the central processing unit; the first panoramic camera module is connected to the image data processing module; the laser navigation radar module is connected to the laser navigation radar data processing module; and the first vector angle sensor is connected to both the first panoramic camera module and the central processing unit. The laser navigation radar module measures the distance between the robot body and surrounding obstacles. The first panoramic camera module captures a first color image of the obstacles around the robot body to assist in judging their physical and/or chemical properties. The image data processing module controls the first panoramic camera module and performs first-level data processing on the first color image; the laser navigation radar data processing module controls the radar module and performs first-level processing on its data. The first vector angle sensor measures the magnitude and direction of the first panoramic camera module's angular deviation from the radar module's reference. Using the sensor's data, the central processing unit performs second-level data processing on the outputs of the image data processing module and the laser navigation radar data processing module, correcting the rotation-induced deviation of the first panoramic camera module from the radar reference so that an object's bearing in the first color image matches the bearing measured by the laser navigation radar.

In one embodiment, the system further comprises a second panoramic camera module and a second vector angle sensor. The second panoramic camera module is connected to the image data processing module; the second vector angle sensor is connected to both the second panoramic camera module and the central processing unit. The second panoramic camera module captures a second color image of the obstacles around the robot body, which is combined with the first color image to synthesize a three-dimensional image. The second vector angle sensor measures the magnitude and direction of the second panoramic camera module's angular deviation from the radar module's reference. Using the sensor's data, the central processing unit performs second-level data processing on the outputs of the image data processing module and the laser navigation radar data processing module, correcting the rotation-induced deviation of the second panoramic camera module so that an object's bearing in the second color image matches the bearing measured by the laser navigation radar. Either the central processing unit or the second panoramic camera module synthesizes the three-dimensional image from the first and second color images.

In one embodiment, the first panoramic camera module comprises a first, a second, and a third wide-angle lens, and the second panoramic camera module comprises a fourth, a fifth, and a sixth wide-angle lens. The first wide-angle lens is paired with the sixth, the second with the fifth, and the third with the fourth, each pair being connected to the image data processing module. The central processing unit also uses the data of the first and second vector angle sensors to correct the relative rotation angle between the first and second panoramic camera modules.

In one embodiment, the laser navigation radar module comprises a first, a second, a third, and a fourth lidar, arranged in sequence around the robot to measure the distances between the robot and the obstacles on each side. The laser navigation radar data processing module also stitches the data acquired by the four lidars into one complete radar map, which it outputs to the central processing unit for further processing.
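
Stitching the four scans amounts to rotating each unit's local bearings by that unit's mounting angle on the robot body and merging the results into one 360-degree range map. A minimal sketch, assuming the four lidars face forward, right, back, and left:

```python
# Assumed mounting azimuths of the four lidars around the robot body.
MOUNT_DEG = {"lidar1": 0.0, "lidar2": 90.0, "lidar3": 180.0, "lidar4": 270.0}

def stitch_scans(scans: dict) -> list:
    """Merge per-lidar scans into one 360-degree map in the body frame.

    scans maps a lidar name to a list of (bearing_deg, range_m) pairs
    expressed in that lidar's local frame.
    """
    merged = []
    for name, points in scans.items():
        offset = MOUNT_DEG[name]
        for bearing, rng in points:
            merged.append(((bearing + offset) % 360.0, rng))
    merged.sort()  # order by body-frame bearing
    return merged

scans = {"lidar1": [(0.0, 2.5)], "lidar2": [(10.0, 1.8)],
         "lidar3": [(0.0, 3.0)], "lidar4": [(-5.0, 2.2)]}
print(stitch_scans(scans))  # one radar map covering all four sides
```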

In one embodiment, the system further comprises an electronically controlled ejection hammer module and an audio receiver, each connected to the central processing unit. The ejection hammer module strikes an object to make it emit a sound, the audio receiver records that sound, and the central processing unit performs spectrum analysis on the recording and outputs the kind of object struck together with its basic physical and/or chemical properties.
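
Classifying a struck object by sound reduces to computing the recording's frequency spectrum and matching its dominant peak against stored reference signatures, such as the attribute information of common objects held in the storage module. A hedged sketch with entirely illustrative signature frequencies:

```python
import numpy as np

# Illustrative dominant-frequency signatures; real values would come from
# the stored attribute information for common objects.
SIGNATURES_HZ = {"wood": 400.0, "metal": 2500.0, "plastic": 900.0}

def classify_strike(samples: np.ndarray, sample_rate: int) -> str:
    """Guess the struck material from the loudest frequency component."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return min(SIGNATURES_HZ, key=lambda m: abs(SIGNATURES_HZ[m] - peak_hz))

# Synthetic test: a decaying 2.5 kHz tone should classify as "metal".
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 2500.0 * t) * np.exp(-5.0 * t)
print(classify_strike(tone, sr))  # metal
```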

In one embodiment, the system further comprises an ultrasonic transmitter connected to the central processing unit. The ultrasonic transmitter emits ultrasonic waves, and the audio receiver also receives the waves reflected back from obstacles, giving the robot a navigation capability in rainy, foggy, and similar weather.

In one embodiment, the ultrasonic transmitter is also connected to the audio receiver.

In one embodiment, the system further comprises a first communication module and a second communication module connected to the central processing unit. The first communication module connects to the Internet for remote control and data maintenance; the second pairs with other robots in the same area for real-time communication, allowing the robots to learn from one another.

In one embodiment, the system further comprises a wind vane module connected to the central processing unit, which measures the wind direction at the robot's location.

In one embodiment, the system further comprises an anemometer module connected to the central processing unit, which measures the wind speed at the robot's location.

In one embodiment, the system further comprises a storage module connected to the central processing unit. The storage module holds a control program executable by the central processing unit and attribute information for common objects, and stores the information generated by the functional modules at run time.

By adding panoramic camera modules, the self-propelled robot system provided above can acquire color or text information that a lidar cannot, which assists in judging the physical and/or chemical properties of obstacles around the robot, deepens the robot's understanding of its environment, and broadens its range of applicable scenarios.

In addition, the present application provides a control method for a self-propelled robot based on the fusion of laser navigation radar and computer vision perception.

A control method for a self-propelled robot based on the fusion of laser navigation radar and computer vision perception, the robot being the self-propelled robot of any of the embodiments above, comprises: the laser navigation radar acquires the reflecting surfaces of obstacles around the robot; the central processing unit judges whether the largest gap between obstacles is smaller than the robot's minimum passing distance, and if so, activates the first panoramic camera module.
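
The trigger condition is a single comparison: if even the widest gap the radar sees between obstacles is narrower than the robot's minimum passing distance, the camera is brought in for a closer look. A minimal sketch of that gating logic follows; estimating each gap as the chord between adjacent radar returns is an assumed simplification.

```python
import math

def widest_gap_m(scan: list) -> float:
    """Widest straight-line gap between adjacent radar returns.

    scan is a list of (bearing_deg, range_m) reflecting-surface points;
    treating a gap as the chord between neighbours is a simplification.
    """
    pts = sorted(scan)
    best = 0.0
    for (b1, r1), (b2, r2) in zip(pts, pts[1:]):
        dth = math.radians(b2 - b1)
        chord = math.sqrt(r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * math.cos(dth))
        best = max(best, chord)
    return best

def need_vision(scan: list, min_passing_distance_m: float) -> bool:
    """True if the first panoramic camera module should be activated."""
    return widest_gap_m(scan) < min_passing_distance_m

scan = [(0.0, 2.0), (30.0, 2.0), (60.0, 2.0)]
print(need_vision(scan, 1.5))  # True -> start the first panoramic camera
```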

In one embodiment, the method further comprises testing the maximum distance at which the laser navigation radar can detect obstacles around the robot, and judging whether that maximum distance is less than or equal to a first set value; if so, the ultrasonic transmitter and the audio receiver are activated.
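
This fallback is another threshold check: when the radar's usable range collapses, for example in rain or fog, ultrasonic sensing takes over. A short sketch, with the first set value chosen arbitrarily for illustration:

```python
FIRST_SET_VALUE_M = 3.0  # assumed threshold on usable lidar range

def select_ranging_mode(max_lidar_range_m: float) -> str:
    """Fall back to ultrasonic ranging when the lidar range degrades."""
    if max_lidar_range_m <= FIRST_SET_VALUE_M:
        return "ultrasonic"  # start the transmitter and audio receiver
    return "lidar"

print(select_ranging_mode(2.1))   # ultrasonic
print(select_ranging_mode(25.0))  # lidar
```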

In the method provided above, the laser navigation radar makes a preliminary judgment of the surrounding obstacle environment; when lidar navigation alone cannot meet the requirements, computer vision perception is activated to build a deeper understanding of the surroundings and find the best path of action.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the obstacle fence described in the background, set up to expose the shortcomings of existing robots;

FIG. 2 is a schematic diagram of the roadblocks described in the background, set up to expose the shortcomings of existing robots;

FIG. 3 is a schematic structural diagram of a self-propelled robot provided with one panoramic camera module according to an embodiment;

FIG. 4 is a schematic structural diagram of a self-propelled robot provided with two panoramic camera modules according to an embodiment;

FIG. 5 is a schematic structural diagram of a self-propelled robot system based on the fusion of laser navigation radar and computer vision perception according to an embodiment.

Description of reference numerals:

1. lidar-navigated self-propelled robot; 2. fixed barrier; 3. movable barrier; 4. median strip;

10. robot body; 21. first telescopic bracket; 22. first camera stabilizer; 23. second telescopic bracket; 24. second camera stabilizer; 31. mechanical arm; 32. mechanical arm mount; 33. third telescopic bracket; 34. end motor;

110. first lidar; 120. second lidar; 130. third lidar; 140. fourth lidar; 210. first panoramic camera module; 211. first wide-angle lens; 212. second wide-angle lens; 213. third wide-angle lens; 214. first lens support rod; 215. first gravity block; 220. second panoramic camera module; 221. fourth wide-angle lens; 222. fifth wide-angle lens; 223. sixth wide-angle lens; 224. second lens support rod; 225. second gravity block; 300. electronically controlled ejection hammer module; 410. audio receiver; 420. ultrasonic transmitter; 510. first vector angle sensor; 520. second vector angle sensor; 610. first communication module; 620. second communication module; 710. anemometer module; 720. wind vane module; 810. central processing unit; 820. image data processing module; 830. laser navigation radar data processing module; 840. storage module.

Detailed Description

In this patent document, FIGS. 1-5 discussed below and the various embodiments used to describe the principles or methods of the present disclosure are by way of illustration only and should not be construed in any way as limiting the scope of the disclosure. Those skilled in the art will understand that the principles or methods of the present disclosure may be implemented in any suitably arranged robot. Preferred embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings. In the description that follows, detailed descriptions of well-known functions or configurations are omitted so as not to obscure the subject matter of the present disclosure with unnecessary detail. The terms used herein are defined according to the functions of the present invention and may vary according to the intention or usage of a user or operator; they must therefore be understood on the basis of the descriptions made in this document.

A self-propelled robot based on the fusion of laser navigation radar and computer vision perception, shown in FIG. 3, comprises a robot body 10 for walking motion, a lidar arranged on the robot body 10, a first telescopic bracket 21 fixed vertically on the robot body 10, and a first panoramic camera module 210 arranged at the top of the first telescopic bracket 21. The first telescopic bracket 21 is connected to the first panoramic camera module 210 by a revolute joint, with a first vector angle sensor 510 arranged between them. The first vector angle sensor 510 measures the angle and direction through which the first panoramic camera module 210 rotates relative to the first telescopic bracket 21, so that the rotation-induced deviation between an object's bearing in the picture captured by the module and its bearing as measured by the lidar can be corrected, making the two bearings of the same object agree. The first panoramic camera module 210 sits higher above the ground than the lidar. The first telescopic bracket 21 automatically adjusts the height of the first panoramic camera module 210 above the ground and provides a first fixed reference for the first vector angle sensor 510. The lidar measures the distance between the robot body 10 and surrounding obstacles. The first panoramic camera module 210 captures a first color image of the obstacles around the robot body 10 to assist in judging their physical and/or chemical properties.

In one embodiment, as shown in FIG. 3, the first panoramic camera module 210 is connected to the first telescopic bracket 21 through a first camera stabilizer 22, which is joined to the bracket by a revolute joint. The first panoramic camera module 210 comprises a first lens module, a first lens support rod 214, and a first gravity block 215. The first lens support rod 214 is fixed perpendicular to the camera mount of the first camera stabilizer 22, and the first lens module is arranged at the top of the rod. The first gravity block 215 is arranged on the back of the camera mount to lower the center of gravity of the first panoramic camera module 210 and to increase the torque of the camera mount relative to the first camera stabilizer 22, keeping the first lens support rod 214 vertical. The first vector angle sensor 510 is arranged between the first telescopic bracket 21 and the first camera stabilizer 22 to measure the angle and direction through which the stabilizer rotates relative to the bracket, so that the rotation-induced deviation between an object's bearing in the picture captured by the first panoramic camera module 210 and its bearing as measured by the lidar can be corrected, making the two bearings of the same object agree.

In one embodiment, as shown in FIG. 3, the first lens module comprises a first wide-angle lens 211, a second wide-angle lens 212, and a third wide-angle lens 213, each covering a shooting angle of 120 degrees, so that stitching their images together yields a 360-degree panoramic camera.

In one embodiment, as shown in FIG. 4, the robot further comprises a second telescopic bracket 23 and a second panoramic camera module 220 arranged at its top. The second telescopic bracket 23 is connected to the second panoramic camera module 220 by a revolute joint, with a second vector angle sensor 520 arranged between them. The second vector angle sensor 520 measures the angle and direction through which the second panoramic camera module 220 rotates relative to the second telescopic bracket 23, so that the rotation-induced deviation between an object's bearing in the picture captured by the module and its bearing as measured by the lidar can be corrected, making the two bearings of the same object agree. The second telescopic bracket 23 is arranged vertically and fixed to the robot body 10. The second panoramic camera module 220 sits higher above the ground than the lidar but lower than the first panoramic camera module 210. The second telescopic bracket 23 automatically adjusts the height of the second panoramic camera module 220 above the ground and provides a second fixed reference for the second vector angle sensor 520; this second reference is the same as the first. The horizontal distance between the second panoramic camera module 220 and the first panoramic camera module 210 is greater than zero, as is the distance between the second telescopic bracket 23 and the first telescopic bracket 21. The second panoramic camera module 220 captures a second color image of the obstacles around the robot body 10, which is combined with the first color image to synthesize a three-dimensional image. In this way up to six three-dimensional images can be synthesized, and up to five faces of an object can be seen.

In one embodiment, as shown in FIG. 4, the second panoramic camera module 220 is connected to the second telescopic bracket 23 through a second camera stabilizer 24, which is joined to the bracket by a revolute joint. The second panoramic camera module 220 comprises a second lens module, a second lens support rod 224, and a second gravity block 225. The second lens support rod 224 is fixed perpendicular to the camera mount of the second camera stabilizer 24, and the second lens module is arranged at the top of the rod. The second gravity block 225 is arranged on the back of the camera mount to lower the center of gravity of the second panoramic camera module 220 and to increase the torque of the camera mount relative to the second camera stabilizer 24, keeping the second lens support rod 224 vertical. The second vector angle sensor 520 is arranged between the second telescopic bracket 23 and the second camera stabilizer 24 to measure the angle and direction through which the stabilizer rotates relative to the bracket, so that the rotation-induced deviation between an object's bearing in the picture captured by the second panoramic camera module 220 and its bearing as measured by the lidar can be corrected, making the two bearings of the same object agree.

In one embodiment, as shown in FIG. 4, the second lens module comprises a fourth wide-angle lens 221, a fifth wide-angle lens 222, and a sixth wide-angle lens 223, each covering a shooting angle of 120 degrees, so that stitching their images together yields a 360-degree panoramic camera.

In one embodiment, as shown in FIG. 3 or FIG. 4, the robot further comprises an audio receiver 410, a mechanical arm 31 arranged on the robot body 10, and a hammer arranged at the end of the mechanical arm 31 for striking (which may be regarded as reference numeral 300). The audio receiver 410 picks up the sound of the hammer striking an object.

In one embodiment, as shown in FIG. 3 or FIG. 4, the robot further comprises an audio receiver 410, an end motor 34, and a mechanical arm 31, a mechanical arm mount 32, and a third telescopic bracket 33 arranged on the robot body 10. The mechanical arm 31 is connected to the robot body 10 through the mechanical arm mount 32. The arm is a telescopic straight rod, joined to the mount 32 by a revolute joint and to the third telescopic bracket 33 by another revolute joint, so that extending and retracting the third telescopic bracket 33 swings the arm about its mount 32. An electronically controlled ejection hammer module 300 is arranged at the end of the mechanical arm 31, connected to it by a revolute joint and rotated relative to the arm by the end motor 34. The audio receiver 410 is mounted on the ejection hammer module 300 with its receiving face pointed in the direction in which the hammer strikes.

In one embodiment, as shown in FIG. 3 or FIG. 4, the robot further comprises an audio receiver 410 and an ultrasonic transmitter 420. The ultrasonic transmitter 420 emits ultrasonic waves, and the audio receiver 410 receives the waves reflected back from obstacles, providing a navigation capability in rainy, foggy, and similar weather.

In one embodiment, as shown in FIG. 3 or FIG. 4, the robot further comprises a first communication module 610 and a second communication module 620. The first communication module 610 connects to the Internet for remote control and data maintenance; the second communication module 620 pairs with other robots in the same area for real-time communication, allowing the robots to learn from one another.

In one embodiment, as shown in FIG. 3 or FIG. 4, the robot further comprises an anemometer module 710 arranged on the robot body 10 to measure the wind speed at the location of the robot body 10. Because the panoramic camera modules in this scheme are erected on retractable brackets and normally sit high up, excessive wind would create too large a torque arm. The anemometer module 710 therefore allows the brackets carrying the camera modules to be retracted whenever the measured wind speed exceeds a set value, reducing the torque arm.

In one embodiment, as shown in FIG. 3 or FIG. 4, the robot further comprises a wind vane module 720 arranged on the robot body 10. The wind vane module 720 measures the wind direction at the location of the robot body 10. With this arrangement, the posture of the robot body 10 can be adjusted according to the wind direction so that the robot travels with minimum wind resistance.

In one embodiment, as shown in FIG. 3 or FIG. 4, a first lidar 110, a second lidar 120, a third lidar 130, and a fourth lidar 140 (not shown in the figures) are arranged in sequence around the robot body 10 to measure the distances between the robot and the obstacles on each side.

By mounting panoramic camera modules on vertically arranged retractable brackets, the self-propelled robot provided above can acquire color or text information that a lidar cannot, which assists in judging the physical and/or chemical properties of surrounding obstacles, deepens the robot's understanding of its environment, and broadens its range of applicable scenarios.

Based on the above, the present application further provides a self-propelled robot system based on the fusion of laser navigation radar and computer vision perception.

A self-propelled robot system based on the fusion of laser navigation radar and computer vision perception, shown in FIG. 5, comprises a central processing unit 810, an image data processing module 820, a laser navigation radar data processing module 830, a first panoramic camera module 210, a first vector angle sensor 510, and a laser navigation radar module. The image data processing module 820 and the laser navigation radar data processing module 830 are connected to the central processing unit 810; the first panoramic camera module 210 is connected to the image data processing module 820; the laser navigation radar module is connected to the laser navigation radar data processing module 830; and the first vector angle sensor 510 is connected to both the first panoramic camera module 210 and the central processing unit 810. The laser navigation radar module measures the distance between the robot body 10 and surrounding obstacles. The first panoramic camera module 210 captures a first color image of the obstacles around the robot body 10 to assist in judging their physical and/or chemical properties. The image data processing module 820 controls the first panoramic camera module 210 and performs first-level data processing on the first color image; the laser navigation radar data processing module 830 controls the radar module and performs first-level processing on its data. The first vector angle sensor 510 measures the magnitude and direction of the first panoramic camera module 210's angular deviation from the radar module's reference. Using the sensor's data, the central processing unit 810 performs second-level data processing on the outputs of the image data processing module 820 and the laser navigation radar data processing module 830, correcting the rotation-induced deviation of the first panoramic camera module 210 from the radar reference so that an object's bearing in the first color image matches the bearing measured by the laser navigation radar.

In one embodiment, as shown in FIG. 5, the system further includes a second panoramic camera module 220 and a second vector angle sensor 520. The second panoramic camera module 220 is connected to the image data processing module 820. The second vector angle sensor 520 is connected to both the second panoramic camera module 220 and the central processing unit 810. The second panoramic camera module 220 is used to obtain a second color image of the obstacles around the robot body 10, which is combined with the first color image into a three-dimensional image. The second vector angle sensor 520 obtains the magnitude and direction of the angular deviation of the second panoramic camera module 220 from the reference of the laser navigation radar module. Using the data acquired by the second vector angle sensor 520, the central processing unit 810 performs second-level data processing on the data output by the image data processing module 820 and the laser navigation radar data processing module 830, correcting the deviation of the second panoramic camera module 220 from the laser navigation radar module's reference caused by rotation, so that the azimuth of an object in the second color image is the same as the azimuth of that object acquired by the laser navigation radar. The central processing unit 810 or the second panoramic camera module 220 is further configured to combine the first color image and the second color image into a three-dimensional image.
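
Because the first and second panoramic camera modules sit at different heights, the three-dimensional synthesis can in principle triangulate depth from their vertical baseline. The following is a minimal sketch of that idea, assuming rectified images, a pinhole camera model, and disparity measured in pixels along the vertical axis; none of these specifics come from the patent.

    def depth_from_vertical_disparity(baseline_m, focal_px, disparity_px):
        # Classic stereo triangulation: depth = focal * baseline / disparity.
        # baseline_m: vertical distance between the two camera modules
        # focal_px: focal length expressed in pixels
        # disparity_px: vertical pixel offset of the same object between
        #   the first and second color images
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # Example: 0.3 m baseline and 800 px focal length; a 12 px disparity
    # places the object at 20 m.
    print(depth_from_vertical_disparity(0.3, 800.0, 12.0))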

In one embodiment, as shown in FIG. 5, the first panoramic camera module 210 includes a first wide-angle lens 211, a second wide-angle lens 212, and a third wide-angle lens 213, and the second panoramic camera module 220 includes a fourth wide-angle lens 221, a fifth wide-angle lens 222, and a sixth wide-angle lens 223. The first wide-angle lens 211 is paired with the sixth wide-angle lens 223 and the pair is connected to the image data processing module 820; the second wide-angle lens 212 is likewise paired with the fifth wide-angle lens 222, and the third wide-angle lens 213 with the fourth wide-angle lens 221. The central processing unit 810 is further configured to correct the relative rotation angle between the first panoramic camera module 210 and the second panoramic camera module 220 using the data from the first vector angle sensor 510 and the second vector angle sensor 520.

In one embodiment, as shown in FIG. 5, the laser navigation radar module includes a first laser radar 110, a second laser radar 120, a third laser radar 130, and a fourth laser radar 140, which are arranged in sequence around the robot and each obtain the distance between the robot and the obstacles on its side. The laser navigation radar data processing module 830 is further configured to stitch the data acquired by the four laser radars into one complete radar map and output it to the central processing unit 810 for further processing.
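
As an illustration of the stitching, the sketch below rotates each unit's polar measurements by an assumed mounting angle before merging them into one robot-centered map; the 90-degree spacing and the (bearing, range) point format are illustrative assumptions, not values given in the patent.

    import math

    # Assumed mounting angles (degrees) of the four lidars around the robot.
    MOUNT_ANGLES = {110: 0.0, 120: 90.0, 130: 180.0, 140: 270.0}

    def stitch_scans(scans):
        # scans: dict mapping lidar id -> list of (bearing_deg, range_m)
        #   pairs, with bearings measured in each lidar's own frame.
        # Returns (x, y) points in the shared robot frame.
        merged = []
        for lidar_id, points in scans.items():
            offset = MOUNT_ANGLES[lidar_id]
            for bearing, rng in points:
                theta = math.radians(bearing + offset)
                merged.append((rng * math.cos(theta), rng * math.sin(theta)))
        return merged

    # Example: one point straight ahead of each lidar yields four points
    # spaced 90 degrees apart, each 2 m from the robot.
    print(stitch_scans({i: [(0.0, 2.0)] for i in MOUNT_ANGLES}))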

In one embodiment, as shown in FIG. 5, the system further includes an electronically controlled ejection hammer module 300 and an audio receiver 410, each connected to the central processing unit 810. The electronically controlled ejection hammer module 300 strikes an object to make it emit sound, and the audio receiver 410 receives that sound. The central processing unit 810 is further configured to perform spectrum analysis on the sound and output the type and basic physical and/or chemical properties of the struck object.
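
To illustrate the spectrum-analysis step, here is a minimal sketch that uses NumPy's FFT to pick out the strongest frequency components of a recorded strike; the sample rate, the peak-picking rule, and any mapping from peaks to material types are assumptions for illustration only.

    import numpy as np

    def dominant_frequencies(samples, sample_rate_hz, top_n=3):
        # Return the top_n strongest frequency components of a strike sound
        # as (frequency_hz, magnitude) pairs.
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
        strongest = np.argsort(spectrum)[-top_n:][::-1]
        return [(float(freqs[i]), float(spectrum[i])) for i in strongest]

    # Example: a synthetic strike ringing at 440 Hz and 880 Hz.
    rate = 8000
    t = np.arange(0, 0.5, 1.0 / rate)
    strike = np.sin(2 * np.pi * 440 * t) + 0.4 * np.sin(2 * np.pi * 880 * t)
    print(dominant_frequencies(strike, rate))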

In one embodiment, as shown in FIG. 5, the system further includes an ultrasonic transmitter 420 connected to the central processing unit 810. The ultrasonic transmitter 420 emits ultrasonic waves, and the audio receiver 410 is further used to receive the waves reflected back from obstacles, enabling the robot to navigate in rainy, foggy, and similar weather conditions.
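
Ranging with this transmitter-receiver pair reduces to time-of-flight arithmetic: the pulse travels to the obstacle and back, so the distance is half the round trip times the speed of sound. A minimal sketch, using roughly 343 m/s (air at about 20 °C) as the only physical constant:

    SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees Celsius

    def echo_distance_m(round_trip_s):
        # Distance to the obstacle from the echo's round-trip time.
        return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

    # Example: an echo returning after about 11.7 ms means the obstacle
    # is roughly 2 m away.
    print(echo_distance_m(0.0117))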

In one embodiment, as shown in FIG. 5, the ultrasonic transmitter 420 is also connected to the audio receiver 410.

In one embodiment, as shown in FIG. 5, the system further includes a first communication module 610 and a second communication module 620 connected to the central processing unit 810. The first communication module 610 connects to the Internet for remote control and data maintenance. The second communication module 620 pairs with other robots in the same area for real-time communication, so the robots can learn from one another.

In one embodiment, as shown in FIG. 5, the system further includes a wind vane module 720 connected to the central processing unit 810, which measures the wind direction at the robot's location.

In one embodiment, as shown in FIG. 5, the system further includes an anemometer module 710 connected to the central processing unit 810, which measures the wind speed at the robot's location.

In one embodiment, as shown in FIG. 5, the system further includes a storage module 840 connected to the central processing unit 810. The storage module 840 holds a control program executable by the central processing unit 810 and attribute information of common objects, and also stores the information generated by the functional modules at runtime.

By providing panoramic camera modules, the self-propelled robot system based on laser navigation radar and computer vision perception fusion described above can capture color or text information that the laser radar cannot obtain. This information assists in judging the physical and/or chemical properties of the obstacles around the robot, deepens the robot's understanding of its surroundings, and thus broadens the scenarios in which the self-propelled robot can be applied.

In addition, the present application also provides a control method for a self-propelled robot based on laser navigation radar and computer vision perception fusion.

A control method for a self-propelled robot based on laser navigation radar and computer vision perception fusion, where the self-propelled robot is the one described in any of the above embodiments; the method includes steps S110-S140:

S110: The laser navigation radar acquires the reflecting surfaces of obstacles around the robot.

S120: The central processing unit 810 determines whether the maximum distance between obstacles is smaller than the robot's minimum passing distance; if so, step S130 is executed, otherwise step S140.

S130: Start the first panoramic camera module 210.

S140: Keep only the laser navigation radar running.
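
A minimal sketch of the S110-S140 decision follows, assuming the laser navigation radar hands back the widths of the gaps between the obstacle reflecting surfaces it detected; the helper name and data layout are hypothetical.

    def choose_sensors(gap_widths_m, min_passing_distance_m):
        # S120: if even the widest gap is narrower than the robot's minimum
        # passing distance, escalate to vision (S130); otherwise the laser
        # navigation radar alone continues running (S140).
        if max(gap_widths_m) < min_passing_distance_m:
            return {"lidar", "first_panoramic_camera"}
        return {"lidar"}

    print(choose_sensors([0.4, 0.7], 0.8))  # too narrow: start the camera
    print(choose_sensors([0.4, 1.2], 0.8))  # wide enough: lidar only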

In one embodiment, the method further includes steps S210-S240:

S210: Test the maximum distance at which the laser navigation radar can detect obstacles around the robot.

S220: Determine whether this maximum distance is less than or equal to a first set value; if so, execute step S230, otherwise execute step S240.

S230: Start the ultrasonic transmitter 420 and the audio receiver 410.

S240: Keep only the laser navigation radar running.
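
The S210-S240 branch follows the same pattern, this time falling back to ultrasound when the radar's effective range collapses (for example in fog); the threshold below is a placeholder, since the patent does not state the first set value.

    def choose_ranging(lidar_max_range_m, first_set_value_m=5.0):
        # S220: when the measured maximum detection range drops to or below
        # the first set value, start the ultrasonic transmitter and audio
        # receiver (S230); otherwise lidar alone continues (S240).
        if lidar_max_range_m <= first_set_value_m:
            return {"lidar", "ultrasonic_transmitter", "audio_receiver"}
        return {"lidar"}

    print(choose_ranging(3.2))   # degraded visibility: enable ultrasound
    print(choose_ranging(40.0))  # clear conditions: lidar alone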

In the method provided above, the laser navigation radar makes a preliminary judgment of the surrounding obstacle environment. When laser radar navigation alone cannot meet the requirements, computer vision perception is activated to build a deeper understanding of the surroundings and find the best path of action.

The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features has been described; nevertheless, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.

The above embodiments express only several implementations of the present invention, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be pointed out that, for those of ordinary skill in the art, several modifications and improvements can be made without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A self-walking robot based on laser navigation radar and computer vision perception fusion is characterized by comprising a robot body for walking movement, a laser radar arranged on the robot body, a first telescopic bracket vertically and fixedly arranged on the robot body, and a first panoramic camera module arranged at the top end of the first telescopic bracket;
the first telescopic bracket is connected with the first panoramic camera module through a revolute pair;
a first vector angle sensor is arranged between the first telescopic bracket and the first panoramic camera module;
the first vector angle sensor is used for acquiring the rotation angle and the rotation direction of the first panoramic camera module when rotating relative to the first telescopic support;
the height of the first panoramic camera module from the ground is greater than that of the laser radar from the ground;
the first telescopic support is used for automatically adjusting the height of the first panoramic camera module from the ground and providing a first fixed reference for the first vector angle sensor;
the laser radar is used for acquiring the distance between the robot body and the surrounding obstacles;
the first panoramic camera module is used for acquiring a first color image of the obstacles around the robot body so as to assist in judging the physical and/or chemical properties of the obstacles around the robot body.
2. The self-walking robot of claim 1,
the first panoramic camera module is connected with the first telescopic bracket through a first camera stabilizer;
the first camera stabilizer is connected with the first telescopic bracket through a revolute pair;
the first panoramic camera module comprises a first lens module, a first lens supporting rod and a first gravity block;
the first lens supporting rod is vertically and fixedly arranged on the camera mounting seat of the first camera stabilizer;
the first lens module is arranged at the top end of the first lens supporting rod;
the first gravity block is arranged on the back of the camera mounting seat so as to lower the center of gravity of the first panoramic camera module and increase the torque of the camera mounting seat relative to the first camera stabilizer, so that the first lens supporting rod is kept in a vertical state;
the first vector angle sensor is arranged between the first telescopic support and the first camera stabilizer and used for acquiring the rotation angle size and the rotation direction of the first camera stabilizer relative to the first telescopic support during rotation.
3. The self-walking robot of claim 1,
further comprising a second telescopic bracket and a second panoramic camera module arranged at the top end of the second telescopic bracket;
the second telescopic bracket is connected with the second panoramic camera module through a revolute pair;
a second vector angle sensor is arranged between the second telescopic bracket and the second panoramic camera module;
the second vector angle sensor is used for acquiring the rotation angle and the rotation direction of the second panoramic camera module when rotating relative to the second telescopic support;
the second telescopic bracket is vertically arranged and is fixedly connected with the robot body;
the height of the second panoramic camera module from the ground is greater than that of the laser radar and less than that of the first panoramic camera module from the ground;
the second telescopic support is used for automatically adjusting the height of the second panoramic camera module from the ground and providing a second fixed reference for the second vector angle sensor;
the second fixed reference is the same as the first fixed reference;
the horizontal distance between the second panoramic camera module and the first panoramic camera module is greater than 0;
the distance between the second telescopic bracket and the first telescopic bracket is greater than 0;
the second panoramic camera module is used for acquiring a second color image of obstacles around the robot body and synthesizing the second color image with the first color image into a three-dimensional image.
4. The self-walking robot of claim 1,
the robot further comprises an audio receiver, a mechanical arm arranged on the robot body, and a hammer for knocking arranged at the tail end of the mechanical arm;
the audio receiver is used for receiving sound of the hammer hitting an object.
5. The self-walking robot of claim 1,
the robot further comprises an audio receiver, a tail end motor, a mechanical arm arranged on the robot body, a mechanical arm mounting seat and a third telescopic support;
the mechanical arm is connected with the robot body through the mechanical arm mounting seat;
the mechanical arm is a telescopic straight rod;
the mechanical arm is connected with the mechanical arm mounting seat through a revolute pair;
the mechanical arm is connected with the third telescopic bracket through a revolute pair;
an electric control ejection hammer module is arranged at the tail end of the mechanical arm;
the electric control ejection hammer module is connected with the tail end of the mechanical arm through a revolute pair and is driven by the tail end motor to rotate relative to the mechanical arm;
the audio receiver is arranged on the electric control ejection hammer module, with its audio receiving end facing the direction in which the electric control ejection hammer module strikes the object.
6. The self-walking robot of claim 1,
the robot further comprises a first communication module and a second communication module;
the first communication module is used for being connected with the Internet so as to realize remote control and data maintenance;
the second communication module is used for real-time communication after being paired with other robots in the same area, so that the function of mutual learning among the robots is realized.
7. A self-walking robot system based on laser navigation radar and computer vision perception fusion is characterized in that,
the system comprises a central processing unit, an image data processing module, a laser navigation radar data processing module, a first panoramic camera module, a first vector angle sensor and a laser navigation radar module;
the image data processing module and the laser navigation radar data processing module are connected with the central processing unit;
the first panoramic camera module is connected with the image data processing module;
the laser navigation radar module is connected with the laser navigation radar data processing module;
the first vector angle sensor is respectively connected with the first panoramic camera module and the central processing unit;
the laser navigation radar module is used for acquiring the distance between the robot body and the surrounding obstacles;
the first panoramic camera module is used for acquiring a first color image of an obstacle around the robot body so as to assist in judging the physical and/or chemical properties of the obstacle around the robot body;
the image data processing module is used for controlling the first panoramic camera module and performing first-stage data processing on the first color image;
the laser navigation radar data processing module is used for controlling the laser navigation radar module and performing first-stage processing on data acquired by the laser navigation radar module;
the first vector angle sensor is used for acquiring the angle and the direction of the reference deviation of the first panoramic camera module relative to the laser navigation radar module;
and the central processing unit performs second-level data processing on the data output by the image data processing module and the laser navigation radar data processing module through the data acquired by the first vector angle sensor, and corrects the reference deviation of the first panoramic camera module relative to the laser navigation radar module caused by rotation, so that the object azimuth in the first color image is the same as the object azimuth acquired by the laser navigation radar.
8. The system of claim 7,
further comprising an electric control ejection hammer module and an audio receiver respectively connected with the central processing unit;
the electric control ejection hammer module is used for knocking an object to make a sound;
the audio receiver is used for receiving the sound generated when the electric control ejection hammer module strikes an object;
the central processing unit is also used for carrying out spectrum analysis on the sound emitted when the electric control ejection hammer module strikes an object, and outputting the type and basic physical and/or chemical properties of the struck object.
9. The system of claim 8,
further comprising an ultrasonic transmitter connected with the central processing unit;
the ultrasonic transmitter is used for transmitting ultrasonic waves;
the audio receiver is also used for receiving the ultrasonic waves which are emitted by the ultrasonic transmitter and reflected back by obstacles, so as to realize the navigation function of the robot in weather conditions such as rain and heavy fog.
10. The system of any one of claims 7 to 9,
further comprising a first communication module and a second communication module connected with the central processing unit;
the first communication module is used for being connected with the Internet so as to realize remote control and data maintenance;
the second communication module is used for real-time communication after being paired with other robots in the same area, so that the function of mutual learning among the robots is realized.
CN202010519604.2A 2020-06-09 2020-06-09 Self-walking robot and system based on laser navigation radar and computer vision perception fusion Active CN112130555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010519604.2A CN112130555B (en) 2020-06-09 2020-06-09 Self-walking robot and system based on laser navigation radar and computer vision perception fusion


Publications (2)

Publication Number Publication Date
CN112130555A (en) 2020-12-25
CN112130555B CN112130555B (en) 2023-09-15



Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002065721A (en) * 2000-08-29 2002-03-05 Komatsu Ltd Device and method for supporting environmental recognition for visually handicapped
CN101008575A (en) * 2006-01-25 2007-08-01 刘海英 Over-limit measuring instrument and method of railway transportation equipment
JP2012051042A (en) * 2010-08-31 2012-03-15 Yaskawa Electric Corp Robot system and robot control device
CN205961294U (en) * 2016-08-03 2017-02-15 北京威佳视科技有限公司 Portable multimachine virtual studio in position is shot with video -corder and is broadcast all -in -one
CN107036706A (en) * 2017-05-27 2017-08-11 中国石油大学(华东) One kind set tube vibration well head Monitor detection equipment
JP2017156153A (en) * 2016-02-29 2017-09-07 株式会社 ミックウェア Navigation device, method for outputting obstacle information at navigation device, and program
CA3112760A1 (en) * 2016-06-30 2017-12-30 Spin Master Ltd. Assembly with object in housing and mechanism to open housing
CN107730652A (en) * 2017-10-30 2018-02-23 国家电网公司 A kind of cruising inspection system, method and device
CN107966989A (en) * 2017-12-25 2018-04-27 北京工业大学 A kind of robot autonomous navigation system
CN107991662A (en) * 2017-12-06 2018-05-04 江苏中天引控智能系统有限公司 A kind of 3D laser and 2D imaging synchronous scanning device and its scan method
WO2018097574A1 (en) * 2016-11-24 2018-05-31 엘지전자 주식회사 Mobile robot and control method thereof
CN208760540U (en) * 2018-09-25 2019-04-19 成都铂贝科技有限公司 A kind of unmanned vehicle of more applications
CN109855624A (en) * 2019-01-17 2019-06-07 宁波舜宇智能科技有限公司 Navigation device and air navigation aid for AGV vehicle
CN209356928U (en) * 2019-03-15 2019-09-06 上海海鸥数码照相机有限公司 From walking robot formula 3D modeling data acquisition equipment
CN110968081A (en) * 2018-09-27 2020-04-07 广东美的生活电器制造有限公司 Control method and control device of sweeping robot with telescopic camera
WO2020077025A1 (en) * 2018-10-12 2020-04-16 Toyota Research Institute, Inc. Systems and methods for conditional robotic teleoperation
CN111203848A (en) * 2019-12-17 2020-05-29 苏州商信宝信息科技有限公司 Intelligent floor tile processing method and system based on big data processing and analysis



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant