CN114947653A - Visual and laser fusion slam method and system based on hotel cleaning robot - Google Patents


Info

Publication number
CN114947653A
CN114947653A (application CN202210423475.6A)
Authority
CN
China
Prior art keywords
map
layer
obstacle avoidance
vision
module
Prior art date
Legal status
Pending
Application number
CN202210423475.6A
Other languages
Chinese (zh)
Inventor
张晨博
郭震
杨俊�
刘宇星
杨洪杰
Current Assignee
Shanghai Jingwu Trade Technology Development Co Ltd
Original Assignee
Shanghai Jingwu Trade Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jingwu Trade Technology Development Co Ltd filed Critical Shanghai Jingwu Trade Technology Development Co Ltd
Priority to CN202210423475.6A priority Critical patent/CN114947653A/en
Publication of CN114947653A publication Critical patent/CN114947653A/en
Legal status: Pending


Classifications

    • A — HUMAN NECESSITIES; A47L — Domestic washing or cleaning; suction cleaners in general
    • A47L 11/4011 — Regulation of the cleaning machine by electric means; control systems and remote control systems therefor
    • A47L 11/4008 — Arrangements of switches, indicators or the like (under A47L 11/4002, installations of electric equipment)
    • A47L 11/4061 — Steering means; means for avoiding obstacles
    • A47L 2201/04 — Robotic cleaning machines: automatic control of the travelling movement; automatic obstacle detection
    • G — PHYSICS; G05D — Systems for controlling or regulating non-electric variables
    • G05D 1/024 — Control of position or course in two dimensions for land vehicles, using optical obstacle or wall sensors in combination with a laser
    • G05D 1/0253 — Control of position or course in two dimensions for land vehicles, using a video camera with image processing extracting relative motion from successive images, e.g. visual odometry or optical flow


Abstract

The invention provides a vision and laser fusion SLAM method and system based on a hotel cleaning robot, comprising: step S1: build a grid map and mark the positions of preset locations on it; step S2: generate an obstacle-avoidance map in real time according to the preset location; step S3: travel to the preset location according to the obstacle-avoidance map, identify the target object within the preset location, and obtain the working point; step S4: feed back that the designated working point has been reached. The invention ensures the safety of the cleaning robot throughout the whole task, and using vision to intelligently identify target objects saves the time of manually marking each working point during cleaning and greatly reduces the risk of manual marking errors.

Description

Vision and laser fusion SLAM method and system based on a hotel cleaning robot

Technical Field

The invention relates to the field of robot navigation, and in particular to a vision and laser fusion SLAM method and system based on a hotel cleaning robot.

Background

A hotel cleaning robot is a special-purpose robot that works in hotel bathrooms. Because it carries a series of mechanical equipment (such as a robotic arm, cameras, electric grippers, and electric brushes), its body is large and irregular, and it may encounter many spatial obstacles while traveling; in particular, the robotic arm may touch obstacles or people even when retracted. A single two-dimensional lidar sensor therefore cannot meet the requirements of safe travel. On this basis, a vision sensor is added to improve spatial obstacle avoidance; a SLAM system composed of vision and laser sensors greatly increases the safety and stability of the cleaning robot's movement.

Patent document CN109144067A (application number CN201811083718.6) discloses an intelligent cleaning robot and a path-planning method for it. A sensor module analyzes and feeds back real-time cleaning-environment information; a precise positioning module obtains the robot's current position on the environment map; a geometric-topological hybrid map technique builds the environment map, and an advanced path-planning algorithm combines the map with the real-time position to plan an optimal cleaning path, uploading the data to a cloud platform for real-time analysis, recording, and control; a drive module drives the robot along the planned optimal path to perform cleaning; a human-machine interaction module uses temperature and humidity sensors together with a camera to display the robot's working status and performance, and Wi-Fi/Bluetooth provides remote control and scheduling. However, that invention provides insufficient safety guarantees for the robot.

Summary of the Invention

In view of the defects in the prior art, the object of the present invention is to provide a vision and laser fusion SLAM method and system based on a hotel cleaning robot.

A vision and laser fusion SLAM method based on a hotel cleaning robot according to the present invention comprises:

Step S1: build a grid map and mark the positions of the preset locations on the map;

Step S2: generate an obstacle-avoidance map in real time according to the preset location;

Step S3: travel to the preset location according to the obstacle-avoidance map, identify the target object within the preset location, and obtain the working point;

Step S4: feed back that the designated working point has been reached.

Preferably, in step S1:

A two-dimensional lidar is used to build the 2D grid map used for navigation.
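As a rough illustration of what building such a map involves (a minimal sketch, not the patent's implementation — the pose, resolution, and scan values below are invented), the following marks the cells hit by one 2D lidar scan in an occupancy grid:

```python
import math

def scan_to_grid(pose, ranges, angle_min, angle_step, resolution, width, height):
    """Mark lidar returns as occupied cells in a 2D occupancy grid.

    pose: (x, y, theta) of the robot in metres/radians (map frame).
    ranges: range readings in metres (math.inf = no return on that beam).
    resolution: metres per grid cell.
    Returns a height x width grid of 0 (free/unknown) and 1 (occupied).
    """
    grid = [[0] * width for _ in range(height)]
    x, y, theta = pose
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue  # no obstacle detected along this beam
        a = theta + angle_min + i * angle_step
        ox, oy = x + r * math.cos(a), y + r * math.sin(a)  # hit point, map frame
        col, row = int(ox / resolution), int(oy / resolution)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = 1
    return grid

# Hypothetical scan: robot at (0.5, 0.5) facing +x, three beams.
grid = scan_to_grid((0.5, 0.5, 0.0), [1.0, math.inf, 0.6],
                    angle_min=-0.5, angle_step=0.5,
                    resolution=0.1, width=20, height=20)
print(sum(cell for row in grid for cell in row))  # -> 2 occupied cells
```

A real mapper would also trace free space along each beam and accumulate evidence over many scans; this sketch only shows the hit-point projection.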

Preferably, in step S2:

The robot travels toward the preset location. During travel, the planning layer generates in real time an obstacle-avoidance map covering a preset range; this map, which fuses vision and laser data, is used for the robot's real-time dynamic obstacle avoidance, and the robot avoids the preset obstacles in the space according to it.
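One way to read "fuses vision and laser data" is as a layered map whose cells are blocked if either sensor reports an obstacle. The sketch below is an assumption about that fusion, not the patent's code; the layer contents are invented:

```python
def fuse_layers(laser_layer, vision_layer):
    """Merge two same-sized 0/1 obstacle layers into one obstacle-avoidance
    map: a cell is blocked if either sensor saw an obstacle there."""
    assert len(laser_layer) == len(vision_layer)
    return [
        [l or v for l, v in zip(lrow, vrow)]
        for lrow, vrow in zip(laser_layer, vision_layer)
    ]

# Hypothetical 3x3 layers: the laser sees a wall cell; vision sees a low
# obstacle that the 2D lidar's scan plane misses.
laser  = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
vision = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
fused = fuse_layers(laser, vision)
print(fused)  # -> [[0, 1, 0], [0, 1, 0], [0, 0, 0]]
```

Production costmaps (e.g. layered costmaps in common navigation stacks) additionally inflate obstacles by the robot's footprint; cell-wise OR is the minimal core of the idea.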

Preferably, the operation process comprises three parts: a task layer, an execution layer, and a feedback layer. The task layer is responsible for task planning, task decomposition, and dispatching the branch tasks; in the execution layer, each branch module further decomposes the dispatched tasks and executes them; the feedback layer reports to the upper layer after each module completes its task, or raises an alarm to the upper layer when a problem is encountered.
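The three-layer split above can be sketched as a minimal dispatch loop; every function and task name here is illustrative, not taken from the patent:

```python
def task_layer():
    """Task layer: plan the job and decompose it into branch tasks."""
    return ["navigate:room1_bathroom", "clean:washbasin"]

def execute(subtask):
    """Stub branch action: pretend every step succeeds."""
    return True

def execution_layer(task, feedback):
    """Execution layer: a branch module further decomposes its task,
    executes the pieces, and reports through the feedback layer."""
    try:
        subtasks = task.split(":")  # trivial stand-in for re-decomposition
        done = all(execute(s) for s in subtasks)
        feedback(task, "done" if done else "alarm")
    except Exception:
        feedback(task, "alarm")     # problems propagate upward as alarms

log = []
def feedback(task, status):
    """Feedback layer: report completion, or alarm, to the upper layer."""
    log.append((task, status))

for t in task_layer():
    execution_layer(t, feedback)
print(log)  # -> [('navigate:room1_bathroom', 'done'), ('clean:washbasin', 'done')]
```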

The execution layer includes a navigation module, which is responsible for bringing the robot to the designated working point or working area. The planning layer within the navigation module is responsible for executing the robot's motion; it plans a global path and a local path. The global path is the complete planned route from the start position to the target position, planned on the shortest-path principle. The local path is planned within a preset range along the robot's route to avoid dynamic obstacles that are not on the map; vision is added for spatial obstacle avoidance, detecting moving or static spatial obstacles in the space ahead in real time during travel and adding them to the local obstacle-avoidance map.
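"Planned on the shortest-path principle" suggests a standard grid search. The patent does not name its algorithm; a breadth-first search, one common choice, finds a shortest global route on an unweighted occupancy grid:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.
    grid[r][c] == 1 is blocked. Returns a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back through predecessors
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

# Hypothetical map: a wall with a single gap at the bottom.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (0, 2))
print(len(path))  # -> 7 cells: down the left side, through the gap, back up
```

On a weighted costmap, A* with a distance heuristic is the usual refinement; BFS keeps the shortest-path idea visible with the least machinery.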

Preferably, in step S3:

After arriving at the preset location, the robot enters the preset area and performs a preliminary visual recognition of it, using deep-learning methods to search the captured environment for the required target object. When the target object is found, the robot drives to the target position via local path planning; a point-cloud matching algorithm then matches the pre-built object model against the target object, corrects the navigation point, and yields the working point.
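The patent does not name its point-cloud matching algorithm. Below is the closed-form 2D rigid alignment that sits at the heart of ICP-style matching (here with correspondences assumed known, and with an invented "washbasin" outline): it estimates the offset between the pre-built model and the observed object, which is exactly the correction applied to the navigation point.

```python
import math

def align_2d(model, observed):
    """Least-squares rigid transform (theta, tx, ty) mapping model points
    onto observed points; correspondences assumed known (one ICP step)."""
    n = len(model)
    mx = sum(p[0] for p in model) / n
    my = sum(p[1] for p in model) / n
    ox = sum(p[0] for p in observed) / n
    oy = sum(p[1] for p in observed) / n
    # Accumulate cross/dot terms of the centred point sets; their atan2
    # gives the optimal rotation (2D Procrustes / Kabsch).
    s_cross = s_dot = 0.0
    for (px, py), (qx, qy) in zip(model, observed):
        ax, ay = px - mx, py - my
        bx, by = qx - ox, qy - oy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    tx = ox - (mx * math.cos(theta) - my * math.sin(theta))
    ty = oy - (mx * math.sin(theta) + my * math.cos(theta))
    return theta, tx, ty

# Hypothetical washbasin outline, observed shifted by (0.2, -0.1), no rotation.
model = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.5), (0.0, 0.5)]
observed = [(x + 0.2, y - 0.1) for x, y in model]
theta, tx, ty = align_2d(model, observed)
print(round(theta, 3), round(tx, 3), round(ty, 3))  # -> 0.0 0.2 -0.1
```

Full ICP iterates this step, re-estimating nearest-neighbour correspondences between rounds until the alignment converges.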

A vision and laser fusion SLAM system based on a hotel cleaning robot according to the present invention comprises:

Module M1: build a grid map and mark the positions of the preset locations on the map;

Module M2: generate an obstacle-avoidance map in real time according to the preset location;

Module M3: travel to the preset location according to the obstacle-avoidance map, identify the target object within the preset location, and obtain the working point;

Module M4: feed back that the designated working point has been reached.

Preferably, in module M1:

A two-dimensional lidar is used to build the 2D grid map used for navigation.

Preferably, in module M2:

The robot travels toward the preset location. During travel, the planning layer generates in real time an obstacle-avoidance map covering a preset range; this map, which fuses vision and laser data, is used for the robot's real-time dynamic obstacle avoidance, and the robot avoids the preset obstacles in the space according to it.

Preferably, the operation process comprises three parts: a task layer, an execution layer, and a feedback layer. The task layer is responsible for task planning, task decomposition, and dispatching the branch tasks; in the execution layer, each branch module further decomposes the dispatched tasks and executes them; the feedback layer reports to the upper layer after each module completes its task, or raises an alarm to the upper layer when a problem is encountered.

The execution layer includes a navigation module, which is responsible for bringing the robot to the designated working point or working area. The planning layer within the navigation module is responsible for executing the robot's motion; it plans a global path and a local path. The global path is the complete planned route from the start position to the target position, planned on the shortest-path principle. The local path is planned within a preset range along the robot's route to avoid dynamic obstacles that are not on the map; vision is added for spatial obstacle avoidance, detecting moving or static spatial obstacles in the space ahead in real time during travel and adding them to the local obstacle-avoidance map.

Preferably, in module M3:

After arriving at the preset location, the robot enters the preset area and performs a preliminary visual recognition of it, using deep-learning methods to search the captured environment for the required target object. When the target object is found, the robot drives to the target position via local path planning; a point-cloud matching algorithm then matches the pre-built object model against the target object, corrects the navigation point, and yields the working point.

Compared with the prior art, the present invention has the following beneficial effects:

1. The invention ensures the safety of the cleaning robot throughout the entire task;

2. The invention uses vision for intelligent identification of target objects, which saves the time of manually marking each working point during cleaning and greatly reduces the risk of manual marking errors.

Description of Drawings

Other features, objects, and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:

Fig. 1 is a flow chart of the present invention.

Detailed Description of Embodiments

The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any form. It should be noted that those of ordinary skill in the art can make several changes and improvements without departing from the inventive concept; these all fall within the protection scope of the invention.

Embodiment 1:

A vision and laser fusion SLAM method based on a hotel cleaning robot according to the present invention, as shown in Fig. 1, comprises:

Step S1: build a grid map and mark the positions of the preset locations on the map;

Specifically, in step S1:

A two-dimensional lidar is used to build the 2D grid map used for navigation.

Step S2: generate an obstacle-avoidance map in real time according to the preset location;

Specifically, in step S2:

The robot travels toward the preset location. During travel, the planning layer generates in real time an obstacle-avoidance map covering a preset range; this map, which fuses vision and laser data, is used for the robot's real-time dynamic obstacle avoidance, and the robot avoids the preset obstacles in the space according to it.

Specifically, the operation process comprises three parts: a task layer, an execution layer, and a feedback layer. The task layer is responsible for task planning, task decomposition, and dispatching the branch tasks; in the execution layer, each branch module further decomposes the dispatched tasks and executes them; the feedback layer reports to the upper layer after each module completes its task, or raises an alarm to the upper layer when a problem is encountered.

The execution layer includes a navigation module, which is responsible for bringing the robot to the designated working point or working area. The planning layer within the navigation module is responsible for executing the robot's motion; it plans a global path and a local path. The global path is the complete planned route from the start position to the target position, planned on the shortest-path principle. The local path is planned within a preset range along the robot's route to avoid dynamic obstacles that are not on the map; vision is added for spatial obstacle avoidance, detecting moving or static spatial obstacles in the space ahead in real time during travel and adding them to the local obstacle-avoidance map.

Step S3: travel to the preset location according to the obstacle-avoidance map, identify the target object within the preset location, and obtain the working point;

Specifically, in step S3:

After arriving at the preset location, the robot enters the preset area and performs a preliminary visual recognition of it, using deep-learning methods to search the captured environment for the required target object. When the target object is found, the robot drives to the target position via local path planning; a point-cloud matching algorithm then matches the pre-built object model against the target object, corrects the navigation point, and yields the working point.

Step S4: feed back that the designated working point has been reached.

Embodiment 2:

Embodiment 2 is a preferred example of Embodiment 1 and describes the present invention in more detail.

Those skilled in the art can understand the vision and laser fusion SLAM method based on a hotel cleaning robot provided by the present invention as a specific implementation of the vision and laser fusion SLAM system based on a hotel cleaning robot; that is, the system can be realized by executing the steps of the method.

A vision and laser fusion SLAM system based on a hotel cleaning robot according to the present invention comprises:

Module M1: build a grid map and mark the positions of the preset locations on the map;

Specifically, in module M1:

A two-dimensional lidar is used to build the 2D grid map used for navigation.

Module M2: generate an obstacle-avoidance map in real time according to the preset location;

Specifically, in module M2:

The robot travels toward the preset location. During travel, the planning layer generates in real time an obstacle-avoidance map covering a preset range; this map, which fuses vision and laser data, is used for the robot's real-time dynamic obstacle avoidance, and the robot avoids the preset obstacles in the space according to it.

Specifically, the operation process comprises three parts: a task layer, an execution layer, and a feedback layer. The task layer is responsible for task planning, task decomposition, and dispatching the branch tasks; in the execution layer, each branch module further decomposes the dispatched tasks and executes them; the feedback layer reports to the upper layer after each module completes its task, or raises an alarm to the upper layer when a problem is encountered.

The execution layer includes a navigation module, which is responsible for bringing the robot to the designated working point or working area. The planning layer within the navigation module is responsible for executing the robot's motion; it plans a global path and a local path. The global path is the complete planned route from the start position to the target position, planned on the shortest-path principle. The local path is planned within a preset range along the robot's route to avoid dynamic obstacles that are not on the map; vision is added for spatial obstacle avoidance, detecting moving or static spatial obstacles in the space ahead in real time during travel and adding them to the local obstacle-avoidance map.

Module M3: travel to the preset location according to the obstacle-avoidance map, identify the target object within the preset location, and obtain the working point;

Specifically, in module M3:

After arriving at the preset location, the robot enters the preset area and performs a preliminary visual recognition of it, using deep-learning methods to search the captured environment for the required target object. When the target object is found, the robot drives to the target position via local path planning; a point-cloud matching algorithm then matches the pre-built object model against the target object, corrects the navigation point, and yields the working point.

Module M4: feed back that the designated working point has been reached.

Embodiment 3:

Embodiment 3 is a preferred example of Embodiment 1 and describes the present invention in more detail.

In view of the problem that hotel bathrooms are too narrow for the traditional two-dimensional laser SLAM algorithm, the technical problems solved by the present invention are embodied in the following points:

1) A SLAM method combining vision and laser ensures the robot's safety while traveling;

2) Vision also brings additional spatial information to the two-dimensional lidar, so that the robot can autonomously identify target objects and, combined with suitable judgment logic, autonomously mark target points and reach them.

The method comprises the following steps:

Step 1: use a two-dimensional lidar to build the 2D grid map used for navigation;

Step 2: mark on the map the position of each room's doorway and of each room's bathroom;

Step 3: when the upper layer dispatches a task, the robot starts moving toward the doorway of the first room. During travel, the planning layer generates in real time an obstacle-avoidance map of a certain range; this map, which fuses vision and laser data, is used for the robot's real-time dynamic obstacle avoidance and ensures that obstacles of all sizes in the space can be avoided;

Step 4: when the robot reaches the room doorway, it can enter the bathroom of the corresponding room. Because the bathroom is narrow, the room is first identified visually, and deep-learning methods search the captured environment for the required target (such as a washbasin, mirror, or toilet). When a target is found, the robot drives to the target position via local path planning; a point-cloud matching algorithm then matches the pre-built object model against the target object, corrects the final navigation point, and yields the final working point;

Step 5: the feedback layer informs the upper layer that the designated working point has been reached, and the robot waits for the next task to be dispatched.

Step 3 comprises the following sub-steps:

Step 3.1: the cleaning process comprises a task layer, an execution layer, and a feedback layer. The task layer is mainly responsible for task planning, task decomposition, and dispatching the branch tasks; in the execution layer, each branch module further decomposes the dispatched tasks and executes them; the feedback layer reports to the upper layer after each module completes its task, or sends an alarm signal to the upper layer when any problem is encountered;

Step 3.2: the navigation module is a branch module of the execution layer, responsible for bringing the robot to the designated working point or working area. The execution part responsible for making the robot move is the navigation planning layer, which plans a global path and a local path. The global path is the complete planned route from the start position to the target position (in theory planned on the shortest-path principle), while the local path is planned within a certain range along the robot's route, mainly to avoid dynamic obstacles that are not on the map. Because a two-dimensional lidar can only detect obstacles at a certain height and thus cannot guarantee the overall safety of the cleaning robot, vision is added for spatial obstacle avoidance: while traveling, vision detects moving or static spatial obstacles in the space ahead in real time and adds them to the local obstacle-avoidance map, after which the local planner can plan a safe path accordingly.
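Since the 2D lidar only sees obstacles intersecting its scan plane, the vision pipeline must decide which detected 3D points are worth injecting into the 2D local map. A sketch of that filter follows; the heights, tolerance, and point values are invented for illustration and are not from the patent:

```python
def inject_vision_obstacles(local_map, points_3d, resolution,
                            lidar_height, robot_height):
    """Add vision-detected 3D points to the 2D local obstacle map.

    Points at the lidar's scan height are already covered by the laser;
    the vision layer contributes points below or above that plane that
    still fall inside the robot's swept height.
    """
    tol = 0.05  # metres: band around the scan plane the lidar covers
    for x, y, z in points_3d:
        seen_by_lidar = abs(z - lidar_height) <= tol
        if not seen_by_lidar and 0.0 < z <= robot_height:
            col, row = int(x / resolution), int(y / resolution)
            if 0 <= row < len(local_map) and 0 <= col < len(local_map[0]):
                local_map[row][col] = 1  # block the cell under the point
    return local_map

# Hypothetical: lidar plane at 0.25 m, robot 1.4 m tall. A 0.10 m step and
# a 1.20 m shelf edge are invisible to the lidar but would hit the robot.
local_map = [[0] * 10 for _ in range(10)]
points = [(0.55, 0.35, 0.10),   # low step      -> injected
          (0.75, 0.85, 1.20),   # shelf edge    -> injected
          (0.15, 0.15, 0.25)]   # wall at scan height -> left to the lidar
inject_vision_obstacles(local_map, points, 0.1, 0.25, 1.4)
print(local_map[3][5], local_map[8][7], local_map[1][1])  # -> 1 1 0
```

The local planner then replans over the updated map, which is how a safe path emerges around obstacles the laser alone could not see.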

Those skilled in the art know that, in addition to implementing the system, apparatus, and modules provided by the present invention as pure computer-readable program code, the method steps can be logically programmed so that the system, apparatus, and modules provided by the present invention realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, the system, apparatus, and modules provided by the present invention may be regarded as a hardware component, and the modules included therein for realizing various programs may also be regarded as structures within the hardware component; the modules for realizing various functions may be regarded both as software programs implementing the method and as structures within the hardware component.

Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific embodiments; those skilled in the art may make various changes or modifications within the scope of the claims without affecting the essential content of the present invention. The embodiments of the present application and the features in the embodiments may be combined with one another arbitrarily, provided there is no conflict.

Claims (10)

1. A vision and laser fusion SLAM method based on a hotel cleaning robot, characterized by comprising:
Step S1: building a grid map and marking the positions of preset locations on the map;
Step S2: generating an obstacle avoidance map in real time according to the preset location;
Step S3: traveling to the preset location according to the obstacle avoidance map, identifying a target object within the preset location, and obtaining a working point;
Step S4: feeding back the information that the designated working point has been reached.

2. The vision and laser fusion SLAM method based on a hotel cleaning robot according to claim 1, characterized in that, in said step S1:
a two-dimensional lidar is used to build the 2D grid map used for navigation.

3. The vision and laser fusion SLAM method based on a hotel cleaning robot according to claim 1, characterized in that, in said step S2:
the robot travels toward the preset location, and during travel the planning layer generates an obstacle avoidance map of a preset range in real time; the obstacle avoidance map is used for the robot's real-time dynamic obstacle avoidance and fuses vision and laser data, and the robot avoids the preset obstacles in the space according to the obstacle avoidance map.

4. The vision and laser fusion SLAM method based on a hotel cleaning robot according to claim 3, characterized in that:
the working process comprises three parts: a task layer, an execution layer, and a feedback layer; the task layer is responsible for task planning, task decomposition, and dispatching each branch task; in the execution layer, each branch module further decomposes the dispatched task and executes the sub-tasks in sequence; the feedback layer reports to the upper layer after each module completes its corresponding task, and raises an alarm to the upper layer when a problem is encountered;
the execution layer includes a navigation module responsible for bringing the robot to the designated working point or working area; the planning layer within the navigation module is responsible for executing the robot's motion, planning a global path and a local path; the global path is the complete planned route from the start position to the target position, planned on the shortest-path principle; the local path is planned within a preset range around the robot as it travels, avoiding dynamic obstacles that are not on the map; vision is added for spatial obstacle avoidance, detecting moving or static spatial obstacles in the space ahead in real time while traveling and adding the obstacles to the local obstacle avoidance map.

5. The vision and laser fusion SLAM method based on a hotel cleaning robot according to claim 1, characterized in that, in said step S3:
after the robot reaches the preset location, it enters the preset area and performs preliminary visual recognition of the area, using deep learning methods to find the required target object in the captured environment; when the target object is found, the robot drives to the target position via local path planning, then uses a point cloud matching algorithm to match a pre-built object model against the target object, correcting the navigation point to obtain the working point.

6. A vision and laser fusion SLAM system based on a hotel cleaning robot, characterized by comprising:
Module M1: building a grid map and marking the positions of preset locations on the map;
Module M2: generating an obstacle avoidance map in real time according to the preset location;
Module M3: traveling to the preset location according to the obstacle avoidance map, identifying a target object within the preset location, and obtaining a working point;
Module M4: feeding back the information that the designated working point has been reached.

7. The vision and laser fusion SLAM system based on a hotel cleaning robot according to claim 6, characterized in that, in said module M1:
a two-dimensional lidar is used to build the 2D grid map used for navigation.

8. The vision and laser fusion SLAM system based on a hotel cleaning robot according to claim 6, characterized in that, in said module M2:
the robot travels toward the preset location, and during travel the planning layer generates an obstacle avoidance map of a preset range in real time; the obstacle avoidance map is used for the robot's real-time dynamic obstacle avoidance and fuses vision and laser data, and the robot avoids the preset obstacles in the space according to the obstacle avoidance map.

9. The vision and laser fusion SLAM system based on a hotel cleaning robot according to claim 8, characterized in that:
the working process comprises three parts: a task layer, an execution layer, and a feedback layer; the task layer is responsible for task planning, task decomposition, and dispatching each branch task; in the execution layer, each branch module further decomposes the dispatched task and executes the sub-tasks in sequence; the feedback layer reports to the upper layer after each module completes its corresponding task, and raises an alarm to the upper layer when a problem is encountered;
the execution layer includes a navigation module responsible for bringing the robot to the designated working point or working area; the planning layer within the navigation module is responsible for executing the robot's motion, planning a global path and a local path; the global path is the complete planned route from the start position to the target position, planned on the shortest-path principle; the local path is planned within a preset range around the robot as it travels, avoiding dynamic obstacles that are not on the map; vision is added for spatial obstacle avoidance, detecting moving or static spatial obstacles in the space ahead in real time while traveling and adding the obstacles to the local obstacle avoidance map.

10. The vision and laser fusion SLAM system based on a hotel cleaning robot according to claim 6, characterized in that, in said module M3:
after the robot reaches the preset location, it enters the preset area and performs preliminary visual recognition of the area, using deep learning methods to find the required target object in the captured environment; when the target object is found, the robot drives to the target position via local path planning, then uses a point cloud matching algorithm to match a pre-built object model against the target object, correcting the navigation point to obtain the working point.
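The navigation-point correction in claims 5 and 10 relies on matching a pre-built object model against the observed target object. The patent does not name the matching algorithm, so the sketch below uses a closed-form 2D least-squares rigid fit with known point correspondences purely as an illustration; a real system would typically run ICP or a similar registration over full point clouds.

```python
# Illustrative sketch of the "point cloud matching" correction idea: recover
# the rigid transform (rotation theta, translation t) that maps the pre-built
# model onto the observed points, assuming point i in `model` corresponds to
# point i in `observed`. The algorithm choice is an assumption, not the patent's.

import math

def align_2d(model, observed):
    """Least-squares rigid transform mapping model -> observed (2D)."""
    n = len(model)
    mcx = sum(p[0] for p in model) / n      # model centroid
    mcy = sum(p[1] for p in model) / n
    ocx = sum(q[0] for q in observed) / n   # observed centroid
    ocy = sum(q[1] for q in observed) / n
    s_sin = s_cos = 0.0
    for (px, py), (qx, qy) in zip(model, observed):
        px, py = px - mcx, py - mcy         # centre both point sets
        qx, qy = qx - ocx, qy - ocy
        s_cos += px * qx + py * qy
        s_sin += px * qy - py * qx
    theta = math.atan2(s_sin, s_cos)        # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = ocx - (c * mcx - s * mcy)          # translation after rotation
    ty = ocy - (s * mcx + c * mcy)
    return theta, (tx, ty)

# A unit square observed rotated +90 degrees and shifted by (2, 3): the
# recovered transform is the correction to apply to the navigation point.
model = [(0, 0), (1, 0), (1, 1), (0, 1)]
observed = [(2 - y, 3 + x) for x, y in model]
theta, t = align_2d(model, observed)
print(round(math.degrees(theta)), round(t[0], 6), round(t[1], 6))
```

Once the transform is known, applying it to the nominal navigation point yields the corrected working point relative to the actual observed object pose.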
CN202210423475.6A 2022-04-21 2022-04-21 Visual and laser fusion slam method and system based on hotel cleaning robot Pending CN114947653A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210423475.6A CN114947653A (en) 2022-04-21 2022-04-21 Visual and laser fusion slam method and system based on hotel cleaning robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210423475.6A CN114947653A (en) 2022-04-21 2022-04-21 Visual and laser fusion slam method and system based on hotel cleaning robot

Publications (1)

Publication Number Publication Date
CN114947653A true CN114947653A (en) 2022-08-30

Family

ID=82980160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210423475.6A Pending CN114947653A (en) 2022-04-21 2022-04-21 Visual and laser fusion slam method and system based on hotel cleaning robot

Country Status (1)

Country Link
CN (1) CN114947653A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109998416A (en) * 2019-04-22 2019-07-12 深兰科技(上海)有限公司 A kind of dust-collecting robot
CN110147106A (en) * 2019-05-29 2019-08-20 福建(泉州)哈工大工程技术研究院 Intelligent mobile service robot with laser and visual fusion obstacle avoidance system
CN111664843A (en) * 2020-05-22 2020-09-15 杭州电子科技大学 SLAM-based intelligent storage checking method
US20210007572A1 (en) * 2019-07-11 2021-01-14 Lg Electronics Inc. Mobile robot using artificial intelligence and controlling method thereof
CN114158984A (en) * 2021-12-22 2022-03-11 上海景吾酷租科技发展有限公司 Cleaning robot


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王庞伟 (Wang Pangwei) et al.: "Intelligent and Connected Vehicle Electronic Technology (Intelligent and Connected Vehicle Technology Series)" [《智能网联汽车技术系列 智能网联汽车电子技术》], 31 July 2021, China Machine Press (机械工业出版社), pages 55-58 *

Similar Documents

Publication Publication Date Title
CN110285813B (en) Man-machine co-fusion navigation device and method for indoor mobile robot
Wang et al. Interactive and immersive process-level digital twin for collaborative human–robot construction work
KR102697721B1 (en) Systems and methods for autonomous robot motion planning and navigation
Thakar et al. A survey of wheeled mobile manipulation: A decision-making perspective
Kohrt et al. An online robot trajectory planning and programming support system for industrial use
Herrero et al. Skill based robot programming: Assembly, vision and Workspace Monitoring skill interaction
CN110488811B (en) Method for predicting pedestrian track by robot based on social network model
Yasuda Behavior-based autonomous cooperative control of intelligent mobile robot systems with embedded Petri nets
Liu et al. Vision AI-based human-robot collaborative assembly driven by autonomous robots
Chen et al. A framework of teleoperated and stereo vision guided mobile manipulation for industrial automation
Hudson et al. Model-based autonomous system for performing dexterous, human-level manipulation tasks
Wang et al. Automatic high-level motion sequencing methods for enabling multi-tasking construction robots
Kaiser et al. An affordance-based pilot interface for high-level control of humanoid robots in supervised autonomy
CN114800524B (en) A system and method for active collision avoidance of a human-computer interaction collaborative robot
Prieto et al. Multiagent robotic systems and exploration algorithms: Applications for data collection in construction sites
CN114947653A (en) Visual and laser fusion slam method and system based on hotel cleaning robot
Madhevan et al. Identification of probabilistic approaches and map-based navigation in motion planning for mobile robots
Lunenburg et al. Tech united eindhoven team description 2012
Rastegarpanah et al. Mobile robotics and 3D printing: addressing challenges in path planning and scalability
CN115847428B (en) Mechanical assembly auxiliary guiding system and method based on AR technology
CN115857524A (en) Man-machine co-fusion intelligent motion planning method of hexapod robot in complex environment
CN114332392A (en) Cleaning robot deployment method and system
Kovačić et al. Autonomous vehicles and automated warehousing systems for industry 4.0
CN112099487A (en) ROS-based map construction and simultaneous localization method
Wang Enabling Human-Robot Partnerships in Digitally-Driven Construction Work through Integration of Building Information Models, Interactive Virtual Reality, and Process-Level Digital Twins

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination