CN114947653A - Visual and laser fusion SLAM method and system based on a hotel cleaning robot - Google Patents
Visual and laser fusion SLAM method and system based on a hotel cleaning robot
- Publication number
- CN114947653A (application CN202210423475.6A)
- Authority
- CN
- China
- Prior art keywords
- map
- obstacle avoidance
- robot
- layer
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47L—DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
- A47L11/00—Machines for cleaning floors, carpets, furniture, walls, or wall coverings
- A47L11/40—Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
- A47L11/4011—Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47L—DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
- A47L11/00—Machines for cleaning floors, carpets, furniture, walls, or wall coverings
- A47L11/40—Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
- A47L11/4002—Installations of electric equipment
- A47L11/4008—Arrangements of switches, indicators or the like
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47L—DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
- A47L11/00—Machines for cleaning floors, carpets, furniture, walls, or wall coverings
- A47L11/40—Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
- A47L11/4061—Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47L—DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
- A47L2201/00—Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
- A47L2201/04—Automatic control of the travelling movement; Automatic obstacle detection
Abstract
The invention provides a vision and laser fusion SLAM method and system based on a hotel cleaning robot, comprising the following steps: step S1: establishing a grid map and marking the position of a preset place on the map; step S2: generating an obstacle avoidance map in real time according to a preset place; step S3: when the robot travels to the preset place, identifying a target object in the preset place according to the obstacle avoidance map and obtaining a working point; step S4: feeding back the information that the specified working point has been reached. The invention ensures the safety of the cleaning robot throughout the whole task; by using vision to intelligently identify the target object, it saves the time of manually marking each working point during cleaning and greatly reduces the risk of errors caused by manual marking.
Description
Technical Field
The invention relates to the field of robot navigation, and in particular to a visual and laser fusion SLAM method and system based on a hotel cleaning robot.
Background
A hotel cleaning robot is a special-purpose robot for cleaning hotel toilets. Because the robot carries a series of mechanical equipment (such as a mechanical arm, a camera, an electric clamping jaw and an electric brush), its shape is large and irregular, and it can encounter obstacles in many parts of the space while traveling; in particular, the mechanical arm may touch obstacles or people even in its folded state. Relying solely on a two-dimensional laser radar therefore cannot meet the requirement of safe travel. Adding a vision sensor on this basis increases the spatial obstacle avoidance capability, and a SLAM system composed of the vision sensor and the laser sensor can greatly improve the walking safety and stability of the cleaning robot.
Patent document CN109144067A (application number: CN201811083718.6) discloses an intelligent cleaning robot and a path planning method thereof, in which a sensor module analyzes and feeds back real-time cleaning environment information; a precise positioning module acquires the current position of the robot on an environment map; an environment map is built with a geometric-topological hybrid mapping technique, an optimal cleaning path is planned by combining the environment map with the real-time position, and data are uploaded to a cloud platform for real-time analysis, recording and control; a driving module drives the robot to travel and clean along the planned optimal path; and a human-machine interaction module uses a temperature and humidity sensor together with a camera to display the robot's working state and performance, and supports remote control and reservation through Wi-Fi/Bluetooth. That invention, however, provides insufficient safety guarantees for the robot.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a visual and laser fusion SLAM method and system based on a hotel cleaning robot.
The vision and laser fusion SLAM method based on a hotel cleaning robot provided by the invention comprises the following steps:
step S1: establishing a grid map and marking the position of a preset place on the map;
step S2: generating an obstacle avoidance map in real time according to a preset place;
step S3: when the robot travels to a preset place, identifying a target object in the preset place according to the obstacle avoidance map and obtaining a working point;
step S4: feeding back the information that the specified working point has been reached.
Preferably, in the step S1:
and establishing a 2d grid map used for corresponding navigation by using the two-dimensional laser radar.
Preferably, in the step S2:
the robot travels according to a preset place, a planning layer generates an obstacle avoidance map in a preset range in real time in the traveling process, the obstacle avoidance map is used for real-time dynamic obstacle avoidance of the robot, the obstacle avoidance map integrates visual and laser data information, and the robot avoids preset obstacles in a space according to the obstacle avoidance map.
Preferably, the operation process comprises a task layer, an execution layer and a feedback layer. The task layer is responsible for task planning, task decomposition and the issuing of each branch task; the execution layer is responsible for decomposing the issued tasks again, with each branch module executing its task; the feedback layer reports to the upper layer after each module completes its task, or sends an alarm to the upper layer when a problem is encountered.
The execution layer comprises a navigation module responsible for bringing the robot to a designated operation point or operation area. A planning layer in the navigation module executes the robot's walking and plans a global path and a local path: the global path is the complete planned route from the initial position to the target position, planned according to the shortest-route principle; the local path is planned within a preset range as the robot walks, to avoid dynamic obstacles that are not on the map. Vision is added for spatial obstacle avoidance, so that moving or static spatial obstacles ahead are detected in real time while walking and added to the local obstacle avoidance map.
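By way of illustration only, the following Python sketch shows one possible organization of the task, execution and feedback layers described above. All class and method names are assumptions of this sketch and are not recited by the invention:

```python
# Minimal sketch of the task / execution / feedback layering (illustrative names).
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list = field(default_factory=list)

class FeedbackLayer:
    def report_done(self, task):
        print(f"[feedback] '{task.name}' completed, notifying the upper layer")

    def alarm(self, task, reason):
        print(f"[feedback] ALARM on '{task.name}': {reason}")

class ExecutionLayer:
    def __init__(self, feedback):
        self.feedback = feedback

    def run_branch(self, task):
        pass  # navigation / arm / brush modules would hook in here

    def execute(self, task):
        # Decompose the issued task again and run each branch module in turn.
        for sub in (task.subtasks or [task]):
            try:
                self.run_branch(sub)
                self.feedback.report_done(sub)
            except RuntimeError as err:
                self.feedback.alarm(sub, str(err))

class TaskLayer:
    """Plans the overall job, decomposes it and issues each branch task."""
    def __init__(self, executor):
        self.executor = executor

    def issue(self, job):
        for branch in job.subtasks:
            self.executor.execute(branch)
```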
Preferably, in the step S3:
the method comprises the steps of entering a preset area after a robot reaches the preset place, carrying out preliminary identification on the preset area through vision, searching a required target object in a shot environment by using a related deep learning method, driving to a target position through local path planning when the target object is found, pairing an object model established in advance with the target object by using a point cloud matching algorithm, and correcting navigation point positions to obtain working point positions.
The vision and laser fusion SLAM system based on a hotel cleaning robot provided by the invention comprises:
module M1: establishing a grid map and marking the position of a preset place on the map;
module M2: generating an obstacle avoidance map in real time according to a preset place;
module M3: when the robot travels to a preset place, identifying a target object in the preset place according to the obstacle avoidance map and obtaining a working point;
module M4: feeding back the information that the specified working point has been reached.
Preferably, in said module M1:
and establishing a 2d grid map used for corresponding navigation by using the two-dimensional laser radar.
Preferably, in said module M2:
the robot travels according to a preset place, a planning layer generates an obstacle avoidance map of a preset range in real time in the traveling process, the obstacle avoidance map is used for real-time dynamic obstacle avoidance of the robot, the obstacle avoidance map integrates visual and laser data information, and the robot avoids preset obstacles in a space according to the obstacle avoidance map.
Preferably, the operation process comprises a task layer, an execution layer and a feedback layer. The task layer is responsible for task planning, task decomposition and the issuing of each branch task; the execution layer is responsible for decomposing the issued tasks again, with each branch module executing its task; the feedback layer reports to the upper layer after each module completes its task, or sends an alarm to the upper layer when a problem is encountered.
The execution layer comprises a navigation module responsible for bringing the robot to a designated operation point or operation area. A planning layer in the navigation module executes the robot's walking and plans a global path and a local path: the global path is the complete planned route from the initial position to the target position, planned according to the shortest-route principle; the local path is planned within a preset range as the robot walks, to avoid dynamic obstacles that are not on the map. Vision is added for spatial obstacle avoidance, so that moving or static spatial obstacles ahead are detected in real time while walking and added to the local obstacle avoidance map.
Preferably, in said module M3:
the method comprises the steps that when a robot enters a preset area after arriving at a preset place, the preset area is preliminarily recognized through vision, a required target object is searched in a shot environment through a related deep learning method, when the target object is found, the robot drives to a target position through local path planning, an object model established in advance is paired with the target object through a point cloud matching algorithm, and a navigation point location is corrected to obtain a working point location.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention ensures the safety of the cleaning robot throughout the whole task;
2. by using vision to intelligently identify the target object, the invention saves the time of manually marking each working point in the cleaning process and greatly reduces the risk of errors caused by manual marking.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Example 1:
The vision and laser fusion SLAM method based on a hotel cleaning robot provided by the invention, as shown in Fig. 1, comprises the following steps:
step S1: establishing a grid map and marking the position of a preset place on the map;
specifically, in the step S1:
and establishing a 2d grid map used for corresponding navigation by using the two-dimensional laser radar.
Step S2: generating an obstacle avoidance map in real time according to a preset place;
specifically, in the step S2:
the robot travels according to a preset place, a planning layer generates an obstacle avoidance map in a preset range in real time in the traveling process, the obstacle avoidance map is used for real-time dynamic obstacle avoidance of the robot, the obstacle avoidance map integrates visual and laser data information, and the robot avoids preset obstacles in a space according to the obstacle avoidance map.
Specifically, the operation process comprises a task layer, an execution layer and a feedback layer. The task layer is responsible for task planning, task decomposition and the issuing of each branch task; the execution layer is responsible for decomposing the issued tasks again, with each branch module executing its task; the feedback layer reports to the upper layer after each module completes its task, or sends an alarm to the upper layer when a problem is encountered.
The execution layer comprises a navigation module responsible for bringing the robot to a designated operation point or operation area. A planning layer in the navigation module executes the robot's walking and plans a global path and a local path: the global path is the complete planned route from the initial position to the target position, planned according to the shortest-route principle; the local path is planned within a preset range as the robot walks, to avoid dynamic obstacles that are not on the map. Vision is added for spatial obstacle avoidance, so that moving or static spatial obstacles ahead are detected in real time while walking and added to the local obstacle avoidance map.
Step S3: when the robot travels to a preset place, identifying a target object in the preset place according to the obstacle avoidance map and obtaining a working point;
specifically, in the step S3:
the method comprises the steps that when a robot enters a preset area after arriving at a preset place, the preset area is preliminarily recognized through vision, a required target object is searched in a shot environment through a related deep learning method, when the target object is found, the robot drives to a target position through local path planning, an object model established in advance is paired with the target object through a point cloud matching algorithm, and a navigation point location is corrected to obtain a working point location.
Step S4: feeding back the information that the specified working point has been reached.
Example 2:
example 2 is a preferred example of example 1, and the present invention will be described in more detail.
Those skilled in the art can understand the vision and laser fusion SLAM method based on a hotel cleaning robot provided by the invention as a specific implementation of the corresponding SLAM system; that is, the vision and laser fusion SLAM system based on a hotel cleaning robot can be realized by executing the step flow of the method.
The vision and laser fusion SLAM system based on a hotel cleaning robot provided by the invention comprises:
module M1: establishing a grid map and marking the position of a preset place on the map;
specifically, in the module M1:
and establishing a 2d grid map used for corresponding navigation by using the two-dimensional laser radar.
Module M2: generating an obstacle avoidance map in real time according to a preset place;
specifically, in the module M2:
the robot travels according to a preset place, a planning layer generates an obstacle avoidance map in a preset range in real time in the traveling process, the obstacle avoidance map is used for real-time dynamic obstacle avoidance of the robot, the obstacle avoidance map integrates visual and laser data information, and the robot avoids preset obstacles in a space according to the obstacle avoidance map.
Specifically, the operation process comprises a task layer, an execution layer and a feedback layer. The task layer is responsible for task planning, task decomposition and the issuing of each branch task; the execution layer is responsible for decomposing the issued tasks again, with each branch module executing its task; the feedback layer reports to the upper layer after each module completes its task, or sends an alarm to the upper layer when a problem is encountered.
The execution layer comprises a navigation module responsible for bringing the robot to a designated operation point or operation area. A planning layer in the navigation module executes the robot's walking and plans a global path and a local path: the global path is the complete planned route from the initial position to the target position, planned according to the shortest-route principle; the local path is planned within a preset range as the robot walks, to avoid dynamic obstacles that are not on the map. Vision is added for spatial obstacle avoidance, so that moving or static spatial obstacles ahead are detected in real time while walking and added to the local obstacle avoidance map.
Module M3: when the robot travels to a preset place, identifying a target object in the preset place according to the obstacle avoidance map and obtaining a working point;
specifically, in the module M3:
the method comprises the steps that when a robot enters a preset area after arriving at a preset place, the preset area is preliminarily recognized through vision, a required target object is searched in a shot environment through a related deep learning method, when the target object is found, the robot drives to a target position through local path planning, an object model established in advance is paired with the target object through a point cloud matching algorithm, and a navigation point location is corrected to obtain a working point location.
Module M4: feeding back the information that the specified working point has been reached.
Example 3:
example 3 is a preferred example of example 1, and the present invention will be described in more detail.
Aiming at the problem that the narrowness of hotel toilets prevents a traditional two-dimensional laser SLAM algorithm from being used, the technical problems solved by the invention are embodied in the following points:
1) the SLAM method combining vision and laser ensures the safety of the robot while traveling;
2) vision also brings more spatial information to the two-dimensional laser radar, so that the robot can autonomously identify a target object and, with some added judgment logic, autonomously mark and reach the target point.
The method comprises the following steps:
Step 1: establishing a 2D grid map used for navigation with the two-dimensional laser radar;
Step 2: marking the position of each room doorway and of each room's toilet on the map;
Step 3: as upper-layer tasks are issued, the robot starts moving toward the doorway of the first room; during traveling, the planning layer generates an obstacle avoidance map of a certain range in real time; the map is used for the robot's real-time dynamic obstacle avoidance, fuses visual and laser data, and allows large and small obstacles in the space to be avoided;
Step 4: on reaching the door of a room, the robot enters that room's toilet. Because the space inside the toilet is narrow, the room is first preliminarily identified through vision, and the required target (such as a counter basin, a mirror or a toilet bowl) is searched for in the captured environment with a deep learning method. When the target is found, the robot drives to the target position through local path planning, a point cloud matching algorithm pairs a pre-built object model with the target object, and the final navigation point is corrected to obtain the final working point;
Step 5: informing the upper layer through the feedback layer that the specified working point has been reached, and waiting for the next task to be issued.
Wherein, the step 3 comprises the following steps:
Step 3.1: the cleaning process comprises a task layer, an execution layer and a feedback layer. The task layer is mainly responsible for task planning, task decomposition and the issuing of each branch task; the execution layer decomposes the issued task again, with each branch module executing its part; and the feedback layer reports to the upper layer after each module completes its task, or sends an alarm signal to the upper layer when any problem occurs;
Step 3.2: the navigation module is a branch module in the execution layer and is responsible for bringing the robot to a designated operation point or operation area. The part that executes the robot's walking is the navigation planning layer, which plans a global path and a local path. The global path is the complete planned path from the initial position to the target position (theoretically planned according to the shortest-path principle); the local path is a plan within a certain range as the robot walks, mainly aimed at avoiding dynamic obstacles that are not on the map. Since the two-dimensional laser radar can only detect obstacles at one height and cannot guarantee the overall safety of the cleaning robot, vision is added for spatial obstacle avoidance: moving or static spatial obstacles in the space ahead are detected in real time while walking and added to the local obstacle avoidance map, from which the local planner then plans a safe path.
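To make step 3.2 concrete, the sketch below shows one way visually detected 3D obstacle points (already transformed into the map frame) could be written into the 2D local obstacle avoidance map; the height band and all names are assumptions of this illustration:

```python
import numpy as np

def mark_visual_obstacles(costmap, points_xyz, resolution, origin,
                          min_z=0.05, max_z=1.5):
    """Project depth-camera obstacle points into the 2D local costmap so the
    local planner also avoids obstacles the 2D lidar cannot see (e.g. at arm
    height). points_xyz is an (N, 3) array in the map frame."""
    z = points_xyz[:, 2]
    pts = points_xyz[(z > min_z) & (z < max_z)]          # keep robot-height band
    ix = ((pts[:, 0] - origin[0]) / resolution).astype(int)
    iy = ((pts[:, 1] - origin[1]) / resolution).astype(int)
    ok = (ix >= 0) & (ix < costmap.shape[1]) & (iy >= 0) & (iy < costmap.shape[0])
    costmap[iy[ok], ix[ok]] = 100                        # mark as lethal obstacles
    return costmap
```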
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system, apparatus and modules provided by the present invention can be implemented entirely by logically programming the method steps in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, apparatus and modules provided by the present invention may be considered a hardware component, and the modules included therein may also be considered structures within that hardware component; modules for performing various functions may even be considered both software programs implementing the method and structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (10)
1. A visual and laser fusion SLAM method based on a hotel cleaning robot, characterized by comprising the following steps:
step S1: establishing a grid map and marking the position of a preset place on the map;
step S2: generating an obstacle avoidance map in real time according to a preset place;
step S3: when the robot travels to a preset place, identifying a target object in the preset place according to the obstacle avoidance map and obtaining a working point;
step S4: feeding back the information that the specified working point has been reached.
2. The visual and laser fusion SLAM method based on a hotel cleaning robot of claim 1, wherein in step S1:
a 2D grid map used for navigation is established by using the two-dimensional laser radar.
3. The visual and laser fusion SLAM method based on a hotel cleaning robot of claim 1, wherein in step S2:
the robot travels toward the preset place; during traveling, a planning layer generates an obstacle avoidance map within a preset range in real time; the obstacle avoidance map is used for the robot's real-time dynamic obstacle avoidance, fuses visual and laser data, and the robot avoids the obstacles in the space according to this map.
4. The visual and laser fusion SLAM method based on a hotel cleaning robot of claim 3, wherein:
the operation process comprises a task layer, an execution layer and a feedback layer; the task layer is responsible for task planning, task decomposition and the issuing of each branch task; the execution layer is responsible for decomposing the issued tasks again, with each branch module executing its task; the feedback layer reports to the upper layer after each module completes its task, or sends an alarm to the upper layer when a problem is encountered;
the execution layer comprises a navigation module responsible for bringing the robot to a designated operation point or operation area; a planning layer in the navigation module executes the robot's walking and plans a global path and a local path, wherein the global path is the complete planned route from the initial position to the target position, planned according to the shortest-route principle; the local path is planned within a preset range as the robot walks, to avoid dynamic obstacles that are not on the map; vision is added for spatial obstacle avoidance, so that moving or static spatial obstacles ahead are detected in real time while walking and added to the local obstacle avoidance map.
5. The visual and laser fusion SLAM method based on a hotel cleaning robot of claim 1, wherein in step S3:
after reaching the preset place, the robot enters the preset area and performs a preliminary visual recognition of the area, searching the captured environment for the required target object with a deep learning method; when the target object is found, the robot drives to the target position through local path planning, a point cloud matching algorithm pairs a pre-built object model with the target object, and the navigation point is corrected to obtain the working point.
6. A visual and laser fusion SLAM system based on a hotel cleaning robot, comprising:
module M1: establishing a grid map and marking the position of a preset place on the map;
module M2: generating an obstacle avoidance map in real time according to a preset place;
module M3: when the robot travels to a preset place, identifying a target object in the preset place according to the obstacle avoidance map and obtaining a working point;
module M4: feeding back the information that the specified working point has been reached.
7. The visual and laser fusion SLAM system based on a hotel cleaning robot of claim 6, wherein in module M1:
a 2D grid map used for navigation is established by using the two-dimensional laser radar.
8. The visual and laser fusion SLAM system based on a hotel cleaning robot of claim 6, wherein in module M2:
the robot travels toward the preset place; during traveling, a planning layer generates an obstacle avoidance map within a preset range in real time; the obstacle avoidance map is used for the robot's real-time dynamic obstacle avoidance, fuses visual and laser data, and the robot avoids the obstacles in the space according to this map.
9. The visual and laser fusion SLAM system based on a hotel cleaning robot of claim 8, wherein:
the operation process comprises a task layer, an execution layer and a feedback layer; the task layer is responsible for task planning, task decomposition and the issuing of each branch task; the execution layer is responsible for decomposing the issued tasks again, with each branch module executing its task; the feedback layer reports to the upper layer after each module completes its task, or sends an alarm to the upper layer when a problem is encountered;
the execution layer comprises a navigation module responsible for bringing the robot to a designated operation point or operation area; a planning layer in the navigation module executes the robot's walking and plans a global path and a local path, wherein the global path is the complete planned route from the initial position to the target position, planned according to the shortest-route principle; the local path is planned within a preset range as the robot walks, to avoid dynamic obstacles that are not on the map; vision is added for spatial obstacle avoidance, so that moving or static spatial obstacles ahead are detected in real time while walking and added to the local obstacle avoidance map.
10. The visual and laser fusion SLAM system based on a hotel cleaning robot of claim 6, wherein in module M3:
after reaching the preset place, the robot enters the preset area and performs a preliminary visual recognition of the area, searching the captured environment for the required target object with a deep learning method; when the target object is found, the robot drives to the target position through local path planning, a point cloud matching algorithm pairs a pre-built object model with the target object, and the navigation point is corrected to obtain the working point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210423475.6A | 2022-04-21 | 2022-04-21 | Visual and laser fusion SLAM method and system based on hotel cleaning robot
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210423475.6A | 2022-04-21 | 2022-04-21 | Visual and laser fusion SLAM method and system based on hotel cleaning robot
Publications (1)
Publication Number | Publication Date |
---|---|
CN114947653A | 2022-08-30
Family
ID=82980160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210423475.6A | Visual and laser fusion SLAM method and system based on hotel cleaning robot (pending) | 2022-04-21 | 2022-04-21
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114947653A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109998416A (en) * | 2019-04-22 | 2019-07-12 | 深兰科技(上海)有限公司 | A kind of dust-collecting robot |
CN110147106A (en) * | 2019-05-29 | 2019-08-20 | 福建(泉州)哈工大工程技术研究院 | Has the intelligent Mobile Service robot of laser and vision fusion obstacle avoidance system |
US20210007572A1 (en) * | 2019-07-11 | 2021-01-14 | Lg Electronics Inc. | Mobile robot using artificial intelligence and controlling method thereof |
CN111664843A (en) * | 2020-05-22 | 2020-09-15 | 杭州电子科技大学 | SLAM-based intelligent storage checking method |
CN114158984A (en) * | 2021-12-22 | 2022-03-11 | 上海景吾酷租科技发展有限公司 | Cleaning robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |