CN108297115B - Autonomous repositioning method for robot - Google Patents
- Publication number
- CN108297115B (application CN201810107434.XA)
- Authority
- CN
- China
- Prior art keywords
- robot
- environment
- image
- preset
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/088—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
- B25J13/089—Determining the position of the robot with reference to its environment
Abstract
The invention discloses an autonomous repositioning method for a robot, belonging to the field of robotics, comprising the following steps: step S1, selecting a preset number of possible positions of the robot in the environment map to form a position set; step S2, scanning with the laser radar to obtain environmental data at the robot's current position; step S3, performing feature matching between the environmental data and each possible position, and completing positioning if exactly one possible position matches the current position; otherwise, proceeding to step S4; step S4, acquiring an environment image of the current position with the image acquisition module; and step S5, performing feature matching between the environment image and each possible position to complete positioning. The beneficial effects of this technical scheme are improved autonomy of the whole positioning process, high positioning speed, and high robustness.
Description
Technical Field
The invention relates to robotics, and in particular to an autonomous repositioning method for a robot.
Background
With the development of science and technology and the improvement of living standards, more and more tasks are taken over by robots, and in some applications a robot must be able to position itself quickly and accurately even after being moved and restarted.
In the prior art, to let a robot position itself after being moved to a new position or at start-up, SLAM (simultaneous localization and mapping) technology is used: the mobile device is localized in an unknown environment, and the robot positions itself during movement from a pose estimate and a map. At the same time, an incremental map is built on the basis of this self-positioning, realizing autonomous positioning, navigation, and map building by the robot. However, SLAM assumes that the robot's initial pose is known at least roughly; otherwise the robot's movement must be controlled manually so that enough data can be collected for positioning, which is time-consuming and interferes with the robot's operation.
Disclosure of Invention
To address these shortcomings of the prior art, the invention provides an autonomous repositioning method for a robot. The method first performs feature matching between the environmental data and the environment map, and then completes positioning by matching an environment image of the robot's current position against reference images, which improves the autonomy of the whole positioning process while keeping positioning fast and robust.
The invention is realized by the following technical scheme:
the invention relates to an autonomous repositioning method for a robot, wherein the robot is provided with a laser radar and an image acquisition module, and an environment map of the current environment is prestored in the robot; the method comprises the following steps:
step S1, selecting a preset number of possible positions of the robot in the environment map to form a position set;
step S2, scanning by the laser radar to obtain environmental data of the robot at the current position;
step S3, performing feature matching with each of the possible locations according to the environment data:
if exactly one possible position matches the current position, positioning is complete;
otherwise, go to step S4;
step S4, the image acquisition module acquires the environmental image of the current position;
step S5, performing feature matching according to the environment image and each possible position in the position set, obtaining a possible position matched with the current position, and completing positioning.
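The five steps above can be sketched as a compact decision loop. The following Python sketch is illustrative only: the helper names (`overlap`, `relocate`), the Jaccard-style coincidence metric, and the toy room data are assumptions, not the patent's implementation.

```python
def overlap(a, b):
    """Coincidence degree of two feature sets (here: Jaccard overlap, an assumption)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def relocate(candidates, map_features, scan_features, image_score, tau=0.9):
    """Sketch of steps S1-S5: lidar matching first, image matching as fallback."""
    # S3: keep the candidates whose map features match the current lidar scan
    matched = [p for p in candidates
               if overlap(scan_features, map_features[p]) > tau]
    if len(matched) == 1:            # exactly one match: positioning complete
        return matched[0]
    # S4/S5: disambiguate the remaining candidates with camera images;
    # a lower image_score means the environment image is closer to the reference
    pool = matched or candidates
    return min(pool, key=lambda p: image_score[p])

# Toy example: two symmetric rooms look identical to the lidar,
# so the camera-based fallback (steps S4-S5) has to break the tie.
feats = {"room_a": {"wall", "door", "pillar"}, "room_b": {"wall", "door", "pillar"}}
scan = {"wall", "door", "pillar"}
scores = {"room_a": 120.0, "room_b": 8.0}
print(relocate(["room_a", "room_b"], feats, scan, scores))  # prints: room_b
```

The point of the sketch is the control flow: lidar matching is cheap and runs first; the image stage only runs when the lidar result is ambiguous.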
Preferably, in the method for autonomously repositioning a robot, in step S3, the feature matching specifically includes the following steps:
step S31, extracting the current environment feature at the current position according to the environment data;
step S32, extracting possible environmental features at the possible positions from the environmental map;
step S33, computing the coincidence degree of the current environmental feature and the possible environmental feature; if the coincidence degree is greater than a preset first threshold, the possible position matches the current position.
Preferably, in the autonomous repositioning method for the robot, the following steps are performed before step S4:
screening out the possible positions whose coincidence degree is greater than a preset second threshold, acquiring a preset number of map positions around each screened position in the environment map according to a Gaussian distribution, taking those map positions as new possible positions, and merging them into the position set.
Preferably, in the autonomous repositioning method for the robot, the following step is performed before step S4:
and screening out the possible positions with the contact ratio smaller than a preset third threshold value and deleting the possible positions from the position set.
Preferably, in the autonomous repositioning method for the robot, step S4 specifically includes the following steps:
step S41, extracting a reference image associated with each possible position from a reference video image sequence prestored in the robot, where the reference image includes pose information of the robot at the possible position;
in step S42, the image acquisition module acquires the environment image corresponding to each reference image according to the pose information in each reference image.
Preferably, in the autonomous repositioning method for the robot, in step S5, the feature matching specifically includes the following steps:
step S51, subtracting the reference image of the possible position from the corresponding environment image;
in step S52, if the difference is smaller than a preset fourth threshold, the current position is matched with the possible position.
Preferably, in the autonomous repositioning method for the robot, the environmental data includes point cloud data acquired by the laser radar.
Preferably, in the autonomous repositioning method for the robot, the pose information includes a shooting angle of the image acquisition module.
Preferably, in the autonomous repositioning method for the robot, the laser radar collects the environmental data along the circumferential direction of the robot.
The beneficial effects of the above technical scheme are:
The invention first performs feature matching between the environmental data and the environment map, and then completes positioning by matching the environment image of the robot's current position against the reference images, improving the autonomy of the whole positioning process with high positioning speed and high robustness.
Drawings
FIG. 1 is a flow chart illustrating a method for autonomous repositioning of a robot according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart illustrating feature matching according to environment data according to a preferred embodiment of the present invention;
FIG. 3 is a schematic view of an environmental image acquisition process according to a preferred embodiment of the present invention;
FIG. 4 is a flowchart illustrating a process of performing feature matching according to an environment image according to a preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
The embodiment relates to an autonomous repositioning method of a robot.
A laser radar is preset in the robot, and the laser radar can scan the surrounding environment of the robot to obtain environment data. The environmental data obtained by the laser radar scanning comprises point cloud data.
The laser radar scans the surrounding environment over a full 360 degrees to obtain the environmental data. Alternatively, the robot may collect the surrounding environmental data by moving within its vicinity.
Point cloud data is a collection of data points in three-dimensional space. The laser radar scans the surface of an object to obtain a plurality of data points to form point cloud data.
The robot prestores an environment map of the current environment, which the robot previously built in that environment using SLAM technology. Repositioning is required when the robot has been moved within the current environment, i.e., its position in the environment map must be re-determined.
The coordinates of the data points obtained from the laser radar scan are initially expressed in the robot coordinate system, and must therefore be converted into the coordinate system of the environment map.
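A minimal sketch of this conversion, assuming a standard 2-D rigid-body transform and a hypothesised robot pose in the map frame (the patent does not spell out the math):

```python
import math

def robot_to_map(point, robot_pose):
    """Convert a lidar point from the robot frame to the map frame.

    robot_pose = (x, y, theta): a hypothesised robot pose in the map,
    with theta the heading in radians. This is the conventional 2-D
    rigid-body transform, used here as an illustrative assumption.
    """
    px, py = point
    x, y, theta = robot_pose
    mx = x + px * math.cos(theta) - py * math.sin(theta)
    my = y + px * math.sin(theta) + py * math.cos(theta)
    return (mx, my)

# A point 1 m ahead of a robot at (2, 3) facing +y (theta = 90 degrees)
# lands at roughly (2.0, 4.0) in map coordinates.
print(robot_to_map((1.0, 0.0), (2.0, 3.0, math.pi / 2)))
```

Each candidate position supplies its own `robot_pose`, so the same scan projects differently into the map for every candidate being tested.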
As shown in fig. 1, the autonomous relocation method specifically includes the following steps:
in step S1, a set of positions is formed by selecting a preset number of possible positions of the robot in the environment map.
In the position set, the number of possible positions is proportional to the size of the passable area in the environment map.
And step S2, collecting the environmental data of the robot at the current position by the laser radar.
The environmental data of the current position includes the point cloud data of the current position, depth data between the robot and surrounding objects, and shape data of the objects in the surrounding environment.
Step S3, performing feature matching with each possible position according to the environmental data of the current position, and completing positioning if only one possible position is matched with the current position; otherwise, go to step S4;
as shown in fig. 2, the specific steps of performing feature matching according to the environment data include:
in step S31, the current environmental features at the current location are extracted from the environmental data.
The point cloud data, depth data, and shape data in the environmental data are descriptions of the environment at the current position, so features present in the current environment are reflected both in the environmental data and in the environment map. Feature data is extracted from the point cloud data as the current environmental features.
In step S32, possible environmental features at possible locations are extracted from the environmental map.
The environment map accurately reflects the characteristics of each position in the current environment. The environment map stores point cloud data during construction, and feature data can be extracted from the point cloud data to serve as possible environment features at possible positions.
In step S33, the coincidence degree of the current environmental feature and the possible environmental feature is computed; if the coincidence degree is greater than the preset first threshold, the possible position matches the current position. The preset first threshold may be, for example, 90% or 95%.
For each possible position in the position set, if the coincidence degree between the current environmental features and the possible environmental features at that position is greater than 90%, the possible position matches the current position. When exactly one possible position in the position set matches the current position, positioning is complete and the exact position of the robot in the environment map has been obtained.
If two or more possible positions match the current position, positioning continues with step S4. Likewise, if no possible position in the position set matches the current position, further positioning is performed via step S4.
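The patent leaves the coincidence metric itself unspecified. One simple, illustrative choice is to discretise the scanned point cloud and the map's point cloud at a candidate position onto a grid and measure their cell overlap:

```python
def coincidence(scan_points, map_points, cell=0.1):
    """Coincidence degree of two 2-D point clouds.

    Both clouds are discretised onto a grid of `cell`-metre cells, and
    the fraction of scan cells that also appear in the map is returned.
    The grid size and the metric itself are assumptions; the patent only
    requires some coincidence degree comparable against a threshold.
    """
    def cells(points):
        return {(round(x / cell), round(y / cell)) for x, y in points}
    s, m = cells(scan_points), cells(map_points)
    return len(s & m) / len(s) if s else 0.0

# A short scanned wall segment lying entirely on a longer mapped wall
scan = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0)]
wall = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0), (0.4, 0.0)]
print(coincidence(scan, wall))  # 1.0 -> above a 90% first threshold, a match
```

With this metric the first threshold (90% or 95%) is simply a fraction of scan cells that must find a counterpart in the map.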
In a preferred embodiment, before proceeding to step S4, the possible positions whose coincidence degree is greater than a preset second threshold (e.g., 70% or 75%) are screened out, and a preset number of map positions around each screened position are selected according to a Gaussian distribution and merged into the position set as new possible positions. Since new possible positions have been added to the position set, steps S2 and S3 can then be performed again.
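This Gaussian screening-and-resampling step can be sketched as follows; the values of `sigma` and `n_new`, and the 2-D tuple representation of positions, are illustrative assumptions, since the patent only specifies that a preset number of new positions follow a Gaussian distribution around the screened candidates:

```python
import random

def resample_around(candidates, scores, tau=0.7, n_new=10, sigma=0.3):
    """Grow the position set around promising candidates.

    Candidates whose coincidence degree exceeds `tau` (the second
    threshold) each seed `n_new` new positions drawn from a normal
    distribution centred on the seed; the grown set is returned.
    """
    new_positions = []
    for pos in candidates:
        if scores[pos] > tau:
            x, y = pos
            new_positions += [(random.gauss(x, sigma), random.gauss(y, sigma))
                              for _ in range(n_new)]
    return candidates + new_positions

random.seed(0)
cands = [(1.0, 1.0), (5.0, 5.0)]
scores = {(1.0, 1.0): 0.8, (5.0, 5.0): 0.2}   # only the first seed qualifies
grown = resample_around(cands, scores)
print(len(grown))  # 12: the 2 originals plus 10 new samples around (1.0, 1.0)
```

Concentrating new samples near high-scoring candidates lets the repeated steps S2-S3 refine a coarse match into a unique one.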
In a preferred embodiment, before step S4, the possible positions whose coincidence degree is smaller than a preset third threshold (e.g., 30% or 35%) are screened out and deleted from the position set. Deleting these possible positions reduces the workload in step S4 and thus improves efficiency.
In step S4, the image capturing module captures an environmental image of the current location.
As shown in fig. 3, step S4 specifically includes the following steps:
in step S41, a reference image associated with each possible position is extracted from a reference video image sequence pre-stored in the robot, and the reference image includes pose information of the robot at the possible position.
The reference video image sequence is recorded by the image acquisition module while the robot builds the environment map. Each frame of the reference video includes the pose information of the robot at the moment the frame was captured, and the pose information includes the shooting angle of the image acquisition module.
The image captured at the possible position, or the image captured closest to it, is selected as the reference image for that position. Each possible position in the position set is thus associated with one reference image, i.e., the number of reference images equals the number of possible positions in the position set.
In step S42, the image acquisition module acquires an environment image corresponding to each reference image according to the pose information in each reference image.
Each reference image carries different pose information, and the image acquisition module captures an environment image at the current position according to each set of pose information. That is, each possible position has a corresponding environment image, and each environment image corresponds to one reference image.
And step S5, performing feature matching according to the environment image and each possible position to obtain a possible position matched with the current position, and completing positioning.
As shown in fig. 4, the specific process of performing feature matching according to the environment image includes:
step S51, subtracting the reference image of the possible position from the corresponding environment image;
in step S52, if the difference is smaller than a preset fourth threshold, the current position is matched with the possible position.
After every possible position in the position set has been compared with its corresponding environment image of the current position, the possible position that matches the current position is the robot's current position in the environment map, and positioning is complete.
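The image subtraction of steps S51-S52 can be sketched with plain nested lists standing in for grayscale images. The sum-of-absolute-differences metric and the threshold value below are assumptions, since the patent only states that the reference image is subtracted from the environment image and the difference compared with a fourth threshold:

```python
def image_difference(reference, environment):
    """Sum of absolute pixel differences between two same-size grayscale images."""
    return sum(abs(r - e)
               for row_r, row_e in zip(reference, environment)
               for r, e in zip(row_r, row_e))

ref   = [[10, 10], [10, 10]]   # reference image stored for a possible position
env   = [[12, 9], [10, 11]]    # captured at the same pose: small difference
other = [[90, 90], [90, 90]]   # captured elsewhere: large difference

tau4 = 10                       # preset fourth threshold (illustrative value)
print(image_difference(ref, env) < tau4)    # True  -> positions match
print(image_difference(ref, other) < tau4)  # False -> no match
```

Because each environment image is captured at the pose recorded in its reference image, a small difference is expected only when the candidate position is in fact the robot's true position.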
Compared with the prior art, the autonomous repositioning method of the invention has the following beneficial effects:
The invention first performs feature matching between the environmental data and the environment map, and then completes positioning by matching the environment image of the robot's current position against the reference images, improving the autonomy of the whole positioning process with high positioning speed and high robustness.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
Claims (7)
1. An autonomous repositioning method for a robot, wherein the robot is provided with a laser radar and an image acquisition module, and an environment map of the current environment is prestored in the robot; the method comprising the steps of:
step S1, selecting a preset number of possible positions of the robot in the environment map to form a position set;
step S2, scanning by the laser radar to obtain environmental data of the robot at the current position;
step S3, performing feature matching with each of the possible locations according to the environment data:
if only one possible position is matched with the current position, positioning is finished;
otherwise, go to step S4;
step S4, the image acquisition module acquires the environmental image of the current position;
step S5, performing feature matching according to the environment image and each possible position in the position set to obtain a possible position matched with the current position, and completing positioning;
in the step S3, the feature matching specifically includes the following steps:
step S31, extracting the current environment feature at the current position according to the environment data;
step S32, extracting possible environmental features at the possible positions from the environmental map;
step S33, processing to obtain a coincidence degree of the current environmental characteristic and the possible environmental characteristic, and if the coincidence degree is greater than a preset first threshold, matching the possible location with the current location;
after the step S3, before the step S4, the following steps are first performed:
screening out the possible positions whose coincidence degree is greater than a preset second threshold, acquiring a preset number of map positions around the screened positions in the environment map according to a Gaussian distribution, taking the map positions as new possible positions and merging them into the position set; if new possible positions are added to the position set, steps S2-S3 are performed again after the addition, wherein the preset second threshold is smaller than the preset first threshold.
2. The autonomous repositioning method of robot according to claim 1, wherein after the step S3 and before the step S4, the following steps are further performed:
screening out the possible positions whose coincidence degree is smaller than a preset third threshold and deleting them from the position set, wherein the preset third threshold is smaller than the preset second threshold.
3. The method for autonomous relocation of a robot according to claim 1, wherein said step S4 specifically comprises the steps of:
step S41, extracting a reference image associated with each possible position from a reference video image sequence prestored in the robot, where the reference image includes pose information of the robot at the possible position;
in step S42, the image acquisition module acquires the environment image corresponding to each reference image according to the pose information in each reference image.
4. The autonomous repositioning method of a robot according to claim 3, wherein in step S5, the feature matching comprises in particular the steps of:
step S51, subtracting the reference image of the possible position from the corresponding environment image;
in step S52, if the difference is smaller than a preset fourth threshold, the current position is matched with the possible position.
5. The method of autonomous repositioning of a robot according to claim 1, wherein the environmental data comprises point cloud data acquired by the lidar.
6. The autonomous repositioning method of a robot according to claim 5, wherein the lidar collects the environmental data along a circumferential direction of the robot.
7. The autonomous repositioning method of a robot according to claim 3, wherein the pose information includes a shooting angle of the image acquisition module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810107434.XA CN108297115B (en) | 2018-02-02 | 2018-02-02 | Autonomous repositioning method for robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108297115A CN108297115A (en) | 2018-07-20 |
CN108297115B true CN108297115B (en) | 2021-09-28 |
Family
ID=62850984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810107434.XA Active CN108297115B (en) | 2018-02-02 | 2018-02-02 | Autonomous repositioning method for robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108297115B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110763232B (en) * | 2018-07-25 | 2021-06-29 | 深圳市优必选科技有限公司 | Robot and navigation positioning method and device thereof |
CN109255817A (en) * | 2018-09-14 | 2019-01-22 | 北京猎户星空科技有限公司 | A kind of the vision method for relocating and device of smart machine |
CN109195106B (en) * | 2018-09-17 | 2020-01-03 | 北京三快在线科技有限公司 | Train positioning method and device |
CN109993794A (en) * | 2019-03-29 | 2019-07-09 | 北京猎户星空科技有限公司 | A kind of robot method for relocating, device, control equipment and storage medium |
CN110307838B (en) * | 2019-08-26 | 2019-12-10 | 深圳市优必选科技股份有限公司 | Robot repositioning method and device, computer-readable storage medium and robot |
CN110988795A (en) * | 2020-03-03 | 2020-04-10 | 杭州蓝芯科技有限公司 | Mark-free navigation AGV global initial positioning method integrating WIFI positioning |
CN111267102A (en) * | 2020-03-09 | 2020-06-12 | 深圳拓邦股份有限公司 | Method and device for acquiring initial position of robot, robot and storage medium |
CN111267103A (en) * | 2020-03-09 | 2020-06-12 | 深圳拓邦股份有限公司 | Method and device for acquiring initial position of robot, robot and storage medium |
CN112269386B (en) * | 2020-10-28 | 2024-04-02 | 深圳拓邦股份有限公司 | Symmetrical environment repositioning method, symmetrical environment repositioning device and robot |
CN112729302B (en) * | 2020-12-15 | 2024-03-29 | 深圳供电局有限公司 | Navigation method and device for inspection robot, inspection robot and storage medium |
CN113204030A (en) * | 2021-04-13 | 2021-08-03 | 珠海市一微半导体有限公司 | Multipoint zone constraint repositioning method, chip and robot |
CN116442286B (en) * | 2023-06-15 | 2023-10-20 | 国网瑞嘉(天津)智能机器人有限公司 | Robot work object positioning system, method, device, robot and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120132888A (en) * | 2011-05-30 | 2012-12-10 | 대우조선해양 주식회사 | Method for image processing of laser vision system |
CN105547305A (en) * | 2015-12-04 | 2016-05-04 | 北京布科思科技有限公司 | Pose solving method based on wireless positioning and laser map matching |
CN105652871A (en) * | 2016-02-19 | 2016-06-08 | 深圳杉川科技有限公司 | Repositioning method for mobile robot |
CN105716611A (en) * | 2016-01-29 | 2016-06-29 | 西安电子科技大学 | Environmental information-based indoor mobile robot and positioning method thereof |
CN106679647A (en) * | 2016-12-02 | 2017-05-17 | 北京贝虎机器人技术有限公司 | Method and device for initializing pose of autonomous mobile equipment |
Non-Patent Citations (1)
Title |
---|
Zhao Hong et al., "Research on landmark matching and positioning technology for unmanned aerial vehicles," Systems Engineering and Electronics, vol. 25, no. 7, pp. 870-872, July 2003 |
Also Published As
Publication number | Publication date |
---|---|
CN108297115A (en) | 2018-07-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108297115B (en) | Autonomous repositioning method for robot | |
US10824853B2 (en) | Human detection system for construction machine | |
US20180012371A1 (en) | Image Registration with Device Data | |
CN109934847B (en) | Method and device for estimating posture of weak texture three-dimensional object | |
JP2008275391A (en) | Position attitude measurement device and method | |
CN108362223B (en) | Portable 3D scanner, scanning system and scanning method | |
WO2021134285A1 (en) | Image tracking processing method and apparatus, and computer device and storage medium | |
CN110428490B (en) | Method and device for constructing model | |
CN112414403B (en) | Robot positioning and attitude determining method, equipment and storage medium | |
JP6524529B2 (en) | Building limit judging device | |
CN110134117B (en) | Mobile robot repositioning method, mobile robot and electronic equipment | |
CN111815707A (en) | Point cloud determining method, point cloud screening device and computer equipment | |
CN110634138A (en) | Bridge deformation monitoring method, device and equipment based on visual perception | |
CN111161334B (en) | Semantic map construction method based on deep learning | |
CN111239768A (en) | Method for automatically constructing map and searching inspection target by electric power inspection robot | |
CN112904369A (en) | Robot repositioning method, device, robot and computer-readable storage medium | |
CN114937130B (en) | Topographic map mapping method, device, equipment and storage medium | |
CN114612786A (en) | Obstacle detection method, mobile robot and machine-readable storage medium | |
WO2020015501A1 (en) | Map construction method, apparatus, storage medium and electronic device | |
CN113313765B (en) | Positioning method, positioning device, electronic equipment and storage medium | |
JP2016148956A (en) | Positioning device, positioning method and positioning computer program | |
CN111724432B (en) | Object three-dimensional detection method and device | |
CN113987246A (en) | Automatic picture naming method, device, medium and electronic equipment for unmanned aerial vehicle inspection | |
CN116160458B (en) | Multi-sensor fusion rapid positioning method, equipment and system for mobile robot | |
WO2020153264A1 (en) | Calibration method and calibration device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract | ||
Application publication date: 20180720 Assignee: Huizhi robot technology (Shenzhen) Co.,Ltd. Assignor: FLYINGWINGS INTELLIGENT ROBOT TECHNOLOGY (SHANGHAI) CO.,LTD. Contract record no.: X2022980014977 Denomination of invention: An autonomous relocation method for robots Granted publication date: 20210928 License type: Common License Record date: 20220914 |