CN114216452B - Positioning method and device for robot


Info

Publication number
CN114216452B
Authority
CN
China
Prior art keywords
matching
map
pose
layer
point
Prior art date
Legal status
Active
Application number
CN202111481096.4A
Other languages
Chinese (zh)
Other versions
CN114216452A (en)
Inventor
陈波
支涛
Current Assignee
Beijing Yunji Technology Co Ltd
Original Assignee
Beijing Yunji Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yunji Technology Co Ltd
Priority to CN202111481096.4A
Publication of CN114216452A
Application granted
Publication of CN114216452B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00: Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C15/002: Active optical surveying means

Abstract

The invention relates to the technical field of robots, and in particular to a robot positioning method comprising: acquiring a laser point map collected by a robot, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points; matching the laser point map with the current grid map of the robot to obtain a matching result; and, if the matching result satisfies a matching condition, updating the current grid map according to the laser point map and determining the current position of the robot. The method quickly corrects the robot's positioning, gives the robot strong adaptability to the on-site environment, improves the robot's indoor positioning accuracy, and is low in cost.

Description

Positioning method and device for robot
Technical Field
The present invention relates to the field of robots, and in particular, to a method and an apparatus for positioning a robot.
Background
Robot positioning, particularly for indoor service robots, generally relies on lidar data. However, because lidar data consist mostly of distance information, the positioning result is prone to drift, or the positioning may even be lost, when different parts of the environment look similar or the environment has changed, resulting in low positioning accuracy.
Disclosure of Invention
The positioning method and device for a robot provided by the present application solve the technical problem of low robot positioning accuracy in the prior art: they quickly correct the robot's positioning, give the robot strong adaptability to the on-site environment, improve the robot's indoor positioning accuracy, and are low in cost.
In a first aspect, an embodiment of the present invention provides a positioning method for a robot, comprising:
acquiring a laser point map collected by the robot, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points;
matching the laser point map with a current grid map of the robot to obtain a matching result;
and if the matching result satisfies a matching condition, updating the current grid map according to the laser point map and determining the current position of the robot.
Preferably, matching the laser point map with the current grid map of the robot to obtain a matching result comprises:
downsampling the current grid map into N layers of grid sub-maps whose resolutions decrease layer by layer, wherein N ≥ 2, and matching the laser point map with the Nth-layer grid sub-map to obtain an Nth-layer pose map set;
up-sampling each frame of pose map in the Nth-layer pose map set to obtain an Nth-layer sampled pose map set;
and matching the Nth-layer sampled pose map set with the (N-1)th-layer grid sub-map to obtain an (N-1)th-layer pose map set, up-sampling each frame of pose map in the (N-1)th-layer pose map set to obtain an (N-1)th-layer sampled pose map set, and repeating these operations until a sampled pose map set is matched with the first-layer grid sub-map of the N layers of grid sub-maps, obtaining a final pose map, wherein the matching result is the final pose map.
Preferably, matching the laser point map with the Nth-layer grid sub-map to obtain the Nth-layer pose map set comprises:
taking each pixel point in the Nth-layer grid sub-map as a rotation point, rotating the Nth-layer grid sub-map once by a first preset rotation angle and then matching it with the laser point map to obtain a matching pose map and a set of projection points on the matching pose map, and obtaining the matching probability of the matching pose map according to the set of projection points on the matching pose map, wherein the set of projection points on the matching pose map is the set of points at which the plurality of laser points are projected in the Nth-layer grid sub-map;
after the Nth-layer grid sub-map has rotated through one full circle, obtaining a matching pose map set for each pixel point in the Nth-layer grid sub-map, and after all the matching pose map sets are obtained, obtaining a total matching pose map set of the Nth-layer grid sub-map, wherein the frames of matching pose maps in the total set are ordered by decreasing matching probability;
and starting from the matching pose map with the largest matching probability in the total matching pose map set, taking out a first preset number of matching pose maps from the total set to form the Nth-layer pose map set.
Preferably, obtaining the matching probability of the matching pose map according to the set of projection points on the matching pose map comprises:
obtaining the matching probability according to the probability value of each projection point in the set of projection points on the matching pose map, wherein the probability value is a relation probability value between the semantic value of the laser point corresponding to the projection point and the semantic value of the map point in the Nth-layer grid sub-map.
Preferably, matching the Nth-layer sampled pose map set with the (N-1)th-layer grid sub-map to obtain the (N-1)th-layer pose map set comprises:
for each frame of Nth-layer sampled pose map in the Nth-layer sampled pose map set, taking each pixel point of the reference point of the Nth-layer sampled pose map as a rotation point, rotating the Nth-layer sampled pose map once by a second preset rotation angle within a set rotation range to obtain a rotation pose map and a set of projection points on the rotation pose map, and obtaining the matching probability of the rotation pose map according to the set of projection points on the rotation pose map, wherein the set of projection points on the rotation pose map is the set of points at which the Nth-layer sampled pose map is projected in the (N-1)th-layer grid sub-map;
after the Nth-layer sampled pose map has rotated through the set rotation range, obtaining a rotation pose map set for each pixel point of the reference point of the Nth-layer sampled pose map, and after all the rotation pose map sets are obtained, obtaining a total rotation pose map set of the Nth-layer sampled pose map set, wherein the frames of rotation pose maps in the total set are ordered by decreasing matching probability;
and starting from the rotation pose map with the largest matching probability in the total rotation pose map set, taking out a second preset number of rotation pose maps from the total set to form the (N-1)th-layer pose map set.
Preferably, if the matching result satisfies a matching condition, updating the current grid map according to the laser point map and determining the current position of the robot comprises:
if the matching probability of the matching result is not smaller than a first matching threshold, updating the current grid map according to the laser point map and determining the current position of the robot.
Preferably, if the matching result satisfies a matching condition, updating the current grid map according to the laser point map and determining the current position of the robot comprises:
if the matching probability of the matching result is smaller than the first matching threshold but not smaller than a second matching threshold, and the matching probability of the designated projection points in the matching result falls within a laser point matching range, updating the current grid map according to the laser point map and determining the current position of the robot, wherein the second matching threshold is smaller than the first matching threshold.
Based on the same inventive concept, in a second aspect, the present invention also provides a positioning device for a robot, comprising:
an acquisition module, configured to acquire a laser point map collected by the robot, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points;
a matching module, configured to match the laser point map with a current grid map of the robot to obtain a matching result;
and an execution module, configured to update the current grid map according to the laser point map and determine the current position of the robot if the matching result satisfies the matching condition.
Based on the same inventive concept, in a third aspect, the present invention provides a robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the above positioning method for a robot.
Based on the same inventive concept, in a fourth aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the above positioning method for a robot.
The one or more technical solutions in the embodiments of the present invention have at least the following technical effects or advantages:
in the embodiments of the present invention, a laser point map collected by the robot is acquired first, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points; the laser point map is then matched with the current grid map of the robot to obtain a matching result; finally, if the matching result satisfies the matching condition, the current grid map is updated according to the laser point map and the current position of the robot is determined. The positioning method provided by the embodiments of the present invention quickly corrects the robot's positioning, gives the robot strong adaptability to the on-site environment, improves the robot's indoor positioning accuracy, and is low in cost.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also throughout the drawings, like reference numerals are used to designate like parts. In the drawings:
Fig. 1 is a schematic flow chart of the steps of a positioning method of a robot in an embodiment of the invention;
Fig. 2 is a schematic block diagram of a positioning device of a robot in an embodiment of the invention;
Fig. 3 is a schematic structural view of a robot in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example 1
A first embodiment of the present invention provides a positioning method for a robot, which is applied to a robot, as shown in Fig. 1. The specific implementation steps of the positioning method provided in this embodiment are described in detail below with reference to Fig. 1:
First, step S101 is executed: acquiring a laser point map collected by the robot, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points.
Specifically, when the robot is in a certain space, a laser point map of that space collected by the robot is acquired; the laser point map comprises a plurality of laser points. When each laser point is acquired, not only its position information but also its semantic value is obtained. The semantic value of a laser point indicates the physical object the laser return corresponds to. For example, a semantic value of 1 means the object represented by the laser point is unknown; a semantic value of 2 means the object is a wall surface; a semantic value of 3 means the object is a door; and a semantic value of 4 means the object is a cabinet.
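To make the data layout concrete, the sketch below shows one possible in-memory representation of such a semantically labelled laser point map. The numeric codes follow the example above; the LaserPoint structure, its field names, and the 2-D robot-frame coordinates are illustrative assumptions, not definitions from the patent.

```python
from dataclasses import dataclass

# Semantic codes taken from the example above; the names are illustrative.
SEMANTIC_UNKNOWN = 1   # real object unknown
SEMANTIC_WALL = 2      # wall surface
SEMANTIC_DOOR = 3      # door
SEMANTIC_CABINET = 4   # cabinet

@dataclass
class LaserPoint:
    x: float       # position of the laser return (assumed here to be in the robot frame)
    y: float
    semantic: int  # one of the semantic codes above

# A laser point map is simply the collection of points acquired in one scan.
laser_point_map = [
    LaserPoint(1.2, 0.4, SEMANTIC_WALL),
    LaserPoint(0.8, -0.3, SEMANTIC_DOOR),
    LaserPoint(2.1, 1.7, SEMANTIC_UNKNOWN),
]
```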
Next, step S102 is executed: matching the laser point map with the current grid map of the robot to obtain a matching result.
Specifically, the current grid map is first downsampled into N layers of grid sub-maps, and the laser point map is matched with the Nth-layer grid sub-map to obtain an Nth-layer pose map set, wherein the resolutions of the N layers of grid sub-maps decrease layer by layer and N ≥ 2.
Downsampling the current grid map into N layers of grid sub-maps means dividing the current grid map into N grid sub-maps of successively lower resolution. For example, the current grid map may be downsampled into a first-layer grid sub-map with a resolution of 800×800, a second-layer grid sub-map with a resolution of 400×400, and a third-layer grid sub-map with a resolution of 100×100.
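A minimal sketch of building such a multi-resolution pyramid is given below. The patent does not name the downsampling operator, so block max-pooling (a coarse cell is occupied if any finer cell under it is occupied) is an assumption; the layer sizes simply follow the 800×800 / 400×400 / 100×100 example above.

```python
import numpy as np

def downsample_grid(grid: np.ndarray, factor: int) -> np.ndarray:
    """Reduce an occupancy grid by an integer factor using block max-pooling.

    Max-pooling is an assumption (the patent leaves the operator unspecified);
    it keeps a coarse cell occupied if any of the finer cells it covers is
    occupied, the usual conservative choice for coarse-to-fine scan matching.
    """
    h, w = grid.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of factor
    blocks = grid[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.max(axis=(1, 3))

# Following the example above: 800x800 -> 400x400 -> 100x100.
current_grid_map = np.zeros((800, 800), dtype=np.uint8)
layer1 = current_grid_map               # first layer, finest resolution
layer2 = downsample_grid(layer1, 2)     # second layer, 400x400
layer3 = downsample_grid(layer2, 4)     # third layer, 100x100 (coarsest, the Nth layer)
```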
After the current grid map is downsampled into the N layers of grid sub-maps, the laser point map is matched with the Nth-layer grid sub-map to obtain the Nth-layer pose map set. The specific process for obtaining the Nth-layer pose map set is as follows:
first, each pixel point in the Nth-layer grid sub-map is taken as a rotation point; after the Nth-layer grid sub-map is rotated once by a first preset rotation angle, it is matched with the laser point map to obtain a matching pose map and a set of projection points on the matching pose map, and the matching probability of the matching pose map is obtained according to the set of projection points on the matching pose map, wherein the set of projection points on the matching pose map is the set of points at which the plurality of laser points are projected in the Nth-layer grid sub-map, and the first preset rotation angle is set according to actual requirements;
second, after the Nth-layer grid sub-map has rotated through one full circle, a matching pose map set is obtained for each pixel point in the Nth-layer grid sub-map, and after all these sets are obtained, the total matching pose map set of the Nth-layer grid sub-map is obtained, wherein the frames of matching pose maps in the total set are ordered by decreasing matching probability;
third, starting from the matching pose map with the largest matching probability in the total matching pose map set, a first preset number of matching pose maps are taken out of the total set to form the Nth-layer pose map set. The first preset number is set according to actual requirements.
For example, assume the Nth-layer grid sub-map is the third-layer grid sub-map with a resolution of 100×100, i.e. 100×100 = 10000 pixel points. Each pixel point of the third-layer grid sub-map is matched as a rotation point. For each rotation point, after the third-layer grid sub-map is rotated once clockwise from the forward direction by a first preset rotation angle of 30°, it is matched with the laser point map to obtain one frame of matching pose map and the set of projection points of that matching pose map, and the matching probability of the matching pose map is obtained from that projection point set.
The third-layer grid sub-map is then rotated by another 30° (60° in total) and matched with the laser point map again to obtain another frame of matching pose map, its projection point set, and its matching probability; then by another 30° (90° in total), and so on, continuing to rotate the third-layer grid sub-map and match it with the laser point map.
After the third-layer grid sub-map has rotated through 360° (one full circle), the matching pose map set of that rotation point is obtained. Since one frame of matching pose map is produced for each 30° step, the rotation point yields 360°/30° = 12 frames of matching pose maps in total.
Following the same process for every rotation point, a matching pose map set is obtained for each rotation point. Once the matching pose map set of every rotation point has been obtained, the total matching pose map set of the third-layer grid sub-map is obtained, i.e. 12×10000 = 120000 frames of matching pose maps.
Each frame of matching pose map is obtained together with its matching probability. The frames in the total matching pose map set are sorted by decreasing matching probability, and the matching pose map with the largest matching probability is taken as the starting point. For example, the matching probability of the 1st frame is 0.99, that of the 2nd frame 0.98, that of the 3rd frame 0.95, and that of the 120000th frame 0.0001.
The first 30 frames of matching pose maps, i.e. the 1st to 30th frames, are taken out of the total set to form the third-layer pose map set.
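The coarse search just described can be summarised in a few lines. This is a sketch under stated assumptions: a candidate pose is represented as a (rotation-point pixel, angle) pair, and score_fn is a placeholder for the projection-and-scoring step defined in the next paragraphs; neither name comes from the patent.

```python
import numpy as np

def coarse_layer_search(laser_points, layer_map, score_fn,
                        angle_step_deg=30, top_k=30):
    """Exhaustive search over the coarsest (Nth-layer) grid sub-map.

    Every pixel is used as a rotation point and each multiple of the first
    preset rotation angle is scored.  score_fn stands in for the projection
    and matching-probability computation described below; passing it as a
    callable is an assumption of this sketch.
    """
    h, w = layer_map.shape
    candidates = []                                  # (pixel, angle, probability)
    for row in range(h):
        for col in range(w):                         # each pixel as a rotation point
            for k in range(360 // angle_step_deg):   # e.g. 12 poses for 30-degree steps
                theta = np.deg2rad(k * angle_step_deg)
                prob = score_fn(laser_points, layer_map, (row, col), theta)
                candidates.append(((row, col), theta, prob))
    # Sort by decreasing matching probability and keep the first top_k frames,
    # i.e. the Nth-layer pose map set (30 frames in the worked example above).
    candidates.sort(key=lambda c: c[2], reverse=True)
    return candidates[:top_k]
```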
The matching probability of each frame of matching pose map is obtained according to the probability value of each projection point in the set of projection points on the matching pose map, where the probability value is a relation probability value between the semantic value of the laser point corresponding to the projection point and the semantic value of the map point in the Nth-layer grid sub-map.
Specifically, taking the matching of the laser point map with the Nth-layer grid sub-map as an example, when a frame of matching pose map is obtained there is a set of projection points on it, namely the set of points at which the laser points of the laser point map are projected in the Nth-layer grid sub-map. Each projection point represents the point at which one laser point lands in the Nth-layer grid sub-map. The probability value of each projection point is determined by the relationship between the semantic value of the corresponding laser point and the semantic value of the map point it falls on in the Nth-layer grid sub-map.
The probability values of the projection points are set as follows:
if the semantic value of the laser point corresponding to the projection point and the semantic value of the map point in the Nth-layer grid sub-map are both unknown, the probability value of the projection point is 1;
if the semantic value of the laser point is unknown and the semantic value of the map point is not unknown, the probability value of the projection point is 0.5;
if the semantic value of the laser point is not unknown and the semantic value of the map point is unknown, the probability value of the projection point is 0.9;
if both semantic values are not unknown and they are the same, the probability value of the projection point is 5;
if both semantic values are not unknown and they are different, the probability value of the projection point is 0.2.
After the probability value of every projection point in the projection point set of the matching pose map has been determined, the probability values are added to obtain a sum S, and S is divided by the number n of projection points in the set to obtain the matching probability of the matching pose map.
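The scoring rules above translate directly into a lookup plus an average. The sketch below assumes that the semantic code 1 means "unknown", as in the earlier example; the function names are illustrative, not from the patent.

```python
UNKNOWN = 1   # semantic code for "unknown", per the example earlier in this embodiment

def projection_point_probability(laser_semantic: int, map_semantic: int) -> float:
    """Relation probability value between a projected laser point and the map
    point it lands on, following the five rules listed above."""
    if laser_semantic == UNKNOWN and map_semantic == UNKNOWN:
        return 1.0
    if laser_semantic == UNKNOWN:
        return 0.5        # laser point unknown, map point known
    if map_semantic == UNKNOWN:
        return 0.9        # laser point known, map point unknown
    if laser_semantic == map_semantic:
        return 5.0        # both known and identical
    return 0.2            # both known but different

def matching_probability(projection_pairs) -> float:
    """Matching probability of one pose map: the sum S of the per-projection-point
    probability values divided by the number n of projection points."""
    values = [projection_point_probability(ls, ms) for ls, ms in projection_pairs]
    return sum(values) / len(values) if values else 0.0
```

Since a semantic match contributes 5, the resulting "matching probability" is really a weighted score that can exceed 1, which is consistent with the rules as stated.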
After the Nth-layer pose map set is obtained, each frame of pose map in it is up-sampled to obtain the Nth-layer sampled pose map set.
Specifically, each frame of Nth-layer pose map in the Nth-layer pose map set represents the orientation of one reference point. For example, if the Nth-layer grid sub-map is the third-layer grid sub-map with a resolution of 100×100, one third-layer pose map may represent the pose map of the 3rd orientation of the 1000th pixel point of the third-layer grid sub-map, where the 1000th pixel point is the reference point and the 3rd orientation is the orientation obtained by taking that reference point as the rotation point and rotating the third-layer grid sub-map clockwise from the forward direction by the first preset rotation angle of 30° three times, i.e. an orientation of 90° from the forward direction. When the third-layer pose map is up-sampled to the second-layer grid sub-map (whose resolution is 400×400), one pixel point of the third-layer pose map becomes four pixel points, i.e. the reference point of the third-layer pose map becomes four pixel points, and after up-sampling the third-layer pose map becomes a third-layer sampled pose map. Up-sampling all 30 frames of third-layer pose maps yields the third-layer sampled pose map set.
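One possible reading of this up-sampling step, expressed as code: the reference point of a coarse-layer pose map is expanded to the finer-layer pixels it covers. The factor-of-two expansion (one pixel becoming a 2×2 block of four) is an assumption made to match the "four pixel points" in the example above; the patent does not fix the expansion rule.

```python
def upsample_reference_point(pixel, factor=2):
    """Map one coarse-layer pixel to the block of finer-layer pixels it covers.

    With factor == 2 this yields four pixels, matching the example above; the
    general integer factor is an assumption of this sketch.
    """
    row, col = pixel
    return [(row * factor + dr, col * factor + dc)
            for dr in range(factor) for dc in range(factor)]
```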
Then, the Nth-layer sampled pose map set is matched with the (N-1)th-layer grid sub-map to obtain the (N-1)th-layer pose map set; each frame of pose map in the (N-1)th-layer pose map set is up-sampled to obtain the (N-1)th-layer sampled pose map set; and these operations are repeated until a sampled pose map set is matched with the first-layer grid sub-map of the N layers of grid sub-maps, obtaining a final pose map. The matching result is the final pose map.
Specifically, the Nth-layer sampled pose map set is matched with the (N-1)th-layer grid sub-map as follows, for each frame of Nth-layer sampled pose map in the set:
first, each pixel point of the reference point of the Nth-layer sampled pose map is taken as a rotation point; the Nth-layer sampled pose map is rotated once by a second preset rotation angle within a set rotation range to obtain a rotation pose map and a set of projection points on the rotation pose map, and the matching probability of the rotation pose map is obtained according to that projection point set, wherein the set of projection points on the rotation pose map is the set of points at which the Nth-layer sampled pose map is projected in the (N-1)th-layer grid sub-map;
second, after the Nth-layer sampled pose map has rotated through the set rotation range, a rotation pose map set is obtained for each pixel point of the reference point of the Nth-layer sampled pose map, and after all these sets are obtained, the total rotation pose map set of the Nth-layer sampled pose map set is obtained, wherein the frames of rotation pose maps in the total set are ordered by decreasing matching probability;
third, starting from the rotation pose map with the largest matching probability in the total rotation pose map set, a second preset number of rotation pose maps are taken out of the total set to form the (N-1)th-layer pose map set. The second preset number is set according to actual requirements.
Take as an example the case where the Nth-layer grid sub-map is the third-layer grid sub-map with a resolution of 100×100 and the (N-1)th-layer grid sub-map is the second-layer grid sub-map with a resolution of 400×400.
For each frame of third-layer sampled pose map in the third-layer sampled pose map set, the reference point of the third-layer sampled pose map comprises four pixel points, as described in the up-sampling process above, and each of these pixel points is taken as a rotation point. For each rotation point of a given frame, suppose the orientation of the third-layer sampled pose map is 90°, due south, so that the orientation range around its reference point is 60° to 120°. After the third-layer sampled pose map is rotated once by a second preset rotation angle of 10° within ±30° of the orientation of its reference point, it is matched with the second-layer grid sub-map to obtain one frame of rotation pose map (both the rotation pose map and the matching pose map represent matched pose maps) and its projection point set, and the matching probability of the rotation pose map is obtained from that projection point set. The matching probability of a rotation pose map is obtained in the same way as that of a matching pose map, which is not repeated here.
Rotating the third-layer sampled pose map through the 60° to 120° range yields the rotation pose map set of that rotation point, i.e. 6 frames of rotation pose maps. After the above operation has been performed for each pixel point of the reference point of the third-layer sampled pose map, the rotation pose map set of that third-layer sampled pose map is obtained, i.e. 6×4 = 24 frames of rotation pose maps. After every frame of third-layer sampled pose map has undergone the above operations, the total rotation pose map set of the third-layer sampled pose map set is obtained.
Each frame of rotation pose map is obtained together with its matching probability. The frames in the total rotation pose map set are sorted by decreasing matching probability, and the rotation pose map with the largest matching probability is taken as the starting point. For example, the matching probability of the 1st frame is 0.99, that of the 2nd frame 0.98, that of the 3rd frame 0.95, and so on.
The first 15 frames of rotation pose maps, i.e. the 1st to 15th frames, are taken out of the total set to form the second-layer pose map set.
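Putting the pieces together, the layer-to-layer refinement might look like the sketch below. It reuses upsample_reference_point from the earlier sketch and again takes the scoring step as a callable; those choices, and the default window of plus or minus 30 degrees searched in 10 degree steps, simply mirror the worked example rather than a prescribed implementation.

```python
import numpy as np

def refine_candidates(candidates, laser_points, finer_map, score_fn,
                      upsample_factor=2, angle_step_deg=10,
                      angle_range_deg=30, top_k=15):
    """Refine Nth-layer candidates against the (N-1)th-layer grid sub-map.

    Each candidate's reference point is expanded to the finer-layer pixels it
    covers (see the up-sampling sketch above) and re-scored over a small
    angular window around its current orientation.  The helper names, the
    default factor of two, and the plug-in score_fn are assumptions of this
    sketch, not definitions from the patent.
    """
    refined = []
    for pixel, theta, _ in candidates:
        base_deg = np.rad2deg(theta)
        for sub_pixel in upsample_reference_point(pixel, upsample_factor):
            # e.g. 6 rotation pose maps per rotation point for a +/-30 degree
            # window searched in 10 degree steps, as in the example above
            for deg in np.arange(base_deg - angle_range_deg,
                                 base_deg + angle_range_deg, angle_step_deg):
                prob = score_fn(laser_points, finer_map, sub_pixel, np.deg2rad(deg))
                refined.append((sub_pixel, np.deg2rad(deg), prob))
    # Keep the best top_k frames, i.e. the (N-1)th-layer pose map set
    # (15 frames in the worked example above).
    refined.sort(key=lambda c: c[2], reverse=True)
    return refined[:top_k]
```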
After the second-layer pose map set is obtained, each frame of pose map in it is up-sampled to obtain the second-layer sampled pose map set. The second-layer sampled pose map set is then matched with the first-layer grid sub-map to obtain the final pose map, i.e. the matching result. The process of matching the second-layer sampled pose map set with the first-layer grid sub-map follows the same principle as matching the third-layer sampled pose map set with the second-layer grid sub-map.
Then, step S103 is executed: if the matching result satisfies the matching condition, updating the current grid map according to the laser point map and determining the current position of the robot.
Specifically, if the matching probability of the matching result is not smaller than a first matching threshold, i.e. the matching probability of the final pose map is not smaller than the first matching threshold, the current grid map is updated according to the laser point map and the current position of the robot is determined. The first matching threshold is usually 0.8 and may be set according to actual requirements.
If the matching probability of the matching result is smaller than the first matching threshold but not smaller than a second matching threshold, and the matching probability of the designated projection points in the matching result falls within the laser point matching range, the current grid map is updated according to the laser point map and the current position of the robot is determined, wherein the second matching threshold is smaller than the first matching threshold. The second matching threshold is usually 0.5 and may be set according to actual requirements. The laser point matching range may also be set according to actual requirements.
The matching probability of the designated projection points is obtained as follows: the projection point set of the matching result (the final pose map) is obtained; the projection points whose corresponding laser points have a semantic value not smaller than a semantic threshold are taken as the designated projection points; the probability values of the designated projection points are added to obtain their sum, which is divided by the number of designated projection points to obtain the matching probability of the designated projection points. The semantic threshold is set according to actual requirements, for example 2.
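The two-level acceptance test can be condensed as follows. This is a sketch: the 0.8 and 0.5 defaults are the typical values mentioned above, while the numeric bounds chosen for the laser point matching range are placeholders, since the patent leaves that range to be set according to actual requirements.

```python
def accept_match(final_prob, designated_prob,
                 first_threshold=0.8, second_threshold=0.5,
                 laser_match_range=(0.6, 1.0)):
    """Decide whether the final pose map may be used to update the grid map.

    The threshold defaults (0.8 and 0.5) are the typical values given above.
    Modelling the laser point matching range as a numeric interval, and its
    default bounds, are assumptions of this sketch.
    """
    if final_prob >= first_threshold:
        return True
    low, high = laser_match_range
    return (second_threshold <= final_prob < first_threshold
            and low <= designated_prob <= high)
```

A caller would compute designated_prob with the same sum-over-count rule as matching_probability above, restricted to the projection points whose laser semantic value is at least the semantic threshold.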
The one or more technical solutions in this embodiment have at least the following technical effects or advantages:
in this embodiment, a laser point map collected by the robot is acquired first, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points; the laser point map is then matched with the current grid map of the robot to obtain a matching result; finally, if the matching result satisfies the matching condition, the current grid map is updated according to the laser point map and the current position of the robot is determined. The positioning method of this embodiment quickly corrects the robot's positioning, gives the robot strong adaptability to the on-site environment, improves the robot's indoor positioning accuracy, and is low in cost.
Example two
Based on the same inventive concept, a second embodiment of the present invention further provides a positioning device for a robot, as shown in Fig. 2, comprising:
an acquisition module 201, configured to acquire a laser point map collected by the robot, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points;
a matching module 202, configured to match the laser point map with the current grid map of the robot to obtain a matching result;
and an execution module 203, configured to update the current grid map according to the laser point map and determine the current position of the robot if the matching result satisfies the matching condition.
As an optional embodiment, when matching the laser point map with the current grid map of the robot to obtain a matching result, the matching module 202 is specifically configured to:
downsample the current grid map into N layers of grid sub-maps whose resolutions decrease layer by layer, wherein N ≥ 2, and match the laser point map with the Nth-layer grid sub-map to obtain an Nth-layer pose map set;
up-sample each frame of pose map in the Nth-layer pose map set to obtain an Nth-layer sampled pose map set;
and match the Nth-layer sampled pose map set with the (N-1)th-layer grid sub-map to obtain an (N-1)th-layer pose map set, up-sample each frame of pose map in the (N-1)th-layer pose map set to obtain an (N-1)th-layer sampled pose map set, and repeat these operations until a sampled pose map set is matched with the first-layer grid sub-map of the N layers of grid sub-maps, obtaining a final pose map, wherein the matching result is the final pose map.
As an optional embodiment, when matching the laser point map with the Nth-layer grid sub-map to obtain the Nth-layer pose map set, the matching module 202 is specifically configured to:
take each pixel point in the Nth-layer grid sub-map as a rotation point, rotate the Nth-layer grid sub-map once by a first preset rotation angle and then match it with the laser point map to obtain a matching pose map and a set of projection points on the matching pose map, and obtain the matching probability of the matching pose map according to the set of projection points on the matching pose map, wherein the set of projection points on the matching pose map is the set of points at which the plurality of laser points are projected in the Nth-layer grid sub-map;
after the Nth-layer grid sub-map has rotated through one full circle, obtain a matching pose map set for each pixel point in the Nth-layer grid sub-map, and after all the matching pose map sets are obtained, obtain the total matching pose map set of the Nth-layer grid sub-map, wherein the frames of matching pose maps in the total set are ordered by decreasing matching probability;
and starting from the matching pose map with the largest matching probability in the total matching pose map set, take out a first preset number of matching pose maps from the total set to form the Nth-layer pose map set.
As an optional embodiment, when obtaining the matching probability of the matching pose map according to the set of projection points on the matching pose map, the matching module 202 is specifically configured to:
obtain the matching probability according to the probability value of each projection point in the set of projection points on the matching pose map, wherein the probability value is a relation probability value between the semantic value of the laser point corresponding to the projection point and the semantic value of the map point in the Nth-layer grid sub-map.
As an optional embodiment, when matching the Nth-layer sampled pose map set with the (N-1)th-layer grid sub-map to obtain the (N-1)th-layer pose map set, the matching module 202 is specifically configured to:
for each frame of Nth-layer sampled pose map in the Nth-layer sampled pose map set, take each pixel point of the reference point of the Nth-layer sampled pose map as a rotation point, rotate the Nth-layer sampled pose map once by a second preset rotation angle within a set rotation range to obtain a rotation pose map and a set of projection points on the rotation pose map, and obtain the matching probability of the rotation pose map according to that projection point set, wherein the set of projection points on the rotation pose map is the set of points at which the Nth-layer sampled pose map is projected in the (N-1)th-layer grid sub-map;
after the Nth-layer sampled pose map has rotated through the set rotation range, obtain a rotation pose map set for each pixel point of the reference point of the Nth-layer sampled pose map, and after all the rotation pose map sets are obtained, obtain the total rotation pose map set of the Nth-layer sampled pose map set, wherein the frames of rotation pose maps in the total set are ordered by decreasing matching probability;
and starting from the rotation pose map with the largest matching probability in the total rotation pose map set, take out a second preset number of rotation pose maps from the total set to form the (N-1)th-layer pose map set.
As an optional embodiment, when updating the current grid map according to the laser point map and determining the current position of the robot if the matching result satisfies a matching condition, the execution module 203 is specifically configured to:
update the current grid map according to the laser point map and determine the current position of the robot if the matching probability of the matching result is not smaller than a first matching threshold.
As an optional embodiment, the execution module 203 is further configured to:
update the current grid map according to the laser point map and determine the current position of the robot if the matching probability of the matching result is smaller than the first matching threshold but not smaller than a second matching threshold and the matching probability of the designated projection points in the matching result falls within the laser point matching range, wherein the second matching threshold is smaller than the first matching threshold.
Since the positioning device of the robot described in this embodiment is the device used to implement the positioning method of the robot described in the first embodiment of the present application, a person skilled in the art can, based on the positioning method described in the first embodiment, understand the specific implementation of the positioning device of this embodiment and its various modifications; how the positioning device implements the method of the first embodiment is therefore not described in detail here. Any device used by a person skilled in the art to implement the positioning method of the robot in the first embodiment of the present application falls within the intended scope of protection of the present application.
Example III
Based on the same inventive concept, a third embodiment of the present invention further provides a robot, as shown in Fig. 3, comprising a memory 304, a processor 302, and a computer program stored in the memory 304 and executable on the processor 302, wherein the processor 302, when executing the program, implements the steps of any of the above positioning methods of the robot.
In Fig. 3, a bus architecture is represented by bus 300. Bus 300 may comprise any number of interconnected buses and bridges, and links together various circuits, including one or more processors represented by processor 302 and memory represented by memory 304. Bus 300 may also link together various other circuits, such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. Bus interface 306 provides an interface between bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e. a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 302 is responsible for managing the bus 300 and general processing, while the memory 304 may be used to store data used by the processor 302 in performing operations.
Example IV
Based on the same inventive concept, a fourth embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of any of the positioning methods of the robot described in the foregoing embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. A method of positioning a robot, comprising:
acquiring a laser point map collected by the robot, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points;
matching the laser point map with a current grid map of the robot to obtain a matching result, which comprises:
downsampling the current grid map into N layers of grid sub-maps whose resolutions decrease layer by layer, wherein N ≥ 2, and matching the laser point map with the Nth-layer grid sub-map to obtain an Nth-layer pose map set;
up-sampling each frame of pose map in the Nth-layer pose map set to obtain an Nth-layer sampled pose map set;
matching the Nth-layer sampled pose map set with the (N-1)th-layer grid sub-map to obtain an (N-1)th-layer pose map set, up-sampling each frame of pose map in the (N-1)th-layer pose map set to obtain an (N-1)th-layer sampled pose map set, and repeating these operations until a sampled pose map set is matched with the first-layer grid sub-map of the N layers of grid sub-maps, obtaining a final pose map, wherein the matching result is the final pose map;
wherein matching the laser point map with the Nth-layer grid sub-map to obtain the Nth-layer pose map set comprises:
taking each pixel point in the Nth-layer grid sub-map as a rotation point, rotating the Nth-layer grid sub-map once by a first preset rotation angle and then matching it with the laser point map to obtain a matching pose map and a set of projection points on the matching pose map, and obtaining the matching probability of the matching pose map according to the set of projection points on the matching pose map, wherein the set of projection points on the matching pose map is the set of points at which the plurality of laser points are projected in the Nth-layer grid sub-map;
after the Nth-layer grid sub-map has rotated through one full circle, obtaining a matching pose map set for each pixel point in the Nth-layer grid sub-map, and after all the matching pose map sets are obtained, obtaining a total matching pose map set of the Nth-layer grid sub-map, wherein the frames of matching pose maps in the total set are ordered by decreasing matching probability;
starting from the matching pose map with the largest matching probability in the total matching pose map set, taking out a first preset number of matching pose maps from the total set to form the Nth-layer pose map set;
and if the matching result satisfies a matching condition, updating the current grid map according to the laser point map and determining the current position of the robot.
2. The method of claim 1, wherein obtaining the matching probability of the matching pose map according to the set of projection points on the matching pose map comprises:
obtaining the matching probability according to the probability value of each projection point in the set of projection points on the matching pose map, wherein the probability value is a relation probability value between the semantic value of the laser point corresponding to the projection point and the semantic value of the map point in the Nth-layer grid sub-map.
3. The method of claim 1, wherein matching the Nth-layer sampled pose map set with the (N-1)th-layer grid sub-map to obtain the (N-1)th-layer pose map set comprises:
for each frame of Nth-layer sampled pose map in the Nth-layer sampled pose map set, taking each pixel point of the reference point of the Nth-layer sampled pose map as a rotation point, rotating the Nth-layer sampled pose map once by a second preset rotation angle within a set rotation range to obtain a rotation pose map and a set of projection points on the rotation pose map, and obtaining the matching probability of the rotation pose map according to the set of projection points on the rotation pose map, wherein the set of projection points on the rotation pose map is the set of points at which the Nth-layer sampled pose map is projected in the (N-1)th-layer grid sub-map;
after the Nth-layer sampled pose map has rotated through the set rotation range, obtaining a rotation pose map set for each pixel point of the reference point of the Nth-layer sampled pose map, and after all the rotation pose map sets are obtained, obtaining a total rotation pose map set of the Nth-layer sampled pose map set, wherein the frames of rotation pose maps in the total set are ordered by decreasing matching probability;
and starting from the rotation pose map with the largest matching probability in the total rotation pose map set, taking out a second preset number of rotation pose maps from the total set to form the (N-1)th-layer pose map set.
4. The method of claim 1, wherein if the matching result satisfies a matching condition, updating the current grid map according to the laser point map and determining the current position of the robot comprises:
if the matching probability of the matching result is not smaller than a first matching threshold, updating the current grid map according to the laser point map and determining the current position of the robot.
5. The method of claim 4, wherein if the matching result satisfies a matching condition, updating the current grid map according to the laser point map and determining the current position of the robot comprises:
if the matching probability of the matching result is smaller than the first matching threshold but not smaller than a second matching threshold, and the matching probability of the designated projection points in the matching result falls within a laser point matching range, updating the current grid map according to the laser point map and determining the current position of the robot, wherein the second matching threshold is smaller than the first matching threshold.
6. A positioning device for a robot, comprising:
an acquisition module, configured to acquire a laser point map collected by the robot, wherein the laser point map comprises a plurality of laser points and a semantic value of each of the plurality of laser points;
a matching module, configured to match the laser point map with a current grid map of the robot to obtain a matching result, which comprises:
downsampling the current grid map into N layers of grid sub-maps whose resolutions decrease layer by layer, wherein N ≥ 2, and matching the laser point map with the Nth-layer grid sub-map to obtain an Nth-layer pose map set;
up-sampling each frame of pose map in the Nth-layer pose map set to obtain an Nth-layer sampled pose map set;
matching the Nth-layer sampled pose map set with the (N-1)th-layer grid sub-map to obtain an (N-1)th-layer pose map set, up-sampling each frame of pose map in the (N-1)th-layer pose map set to obtain an (N-1)th-layer sampled pose map set, and repeating these operations until a sampled pose map set is matched with the first-layer grid sub-map of the N layers of grid sub-maps, obtaining a final pose map, wherein the matching result is the final pose map;
wherein matching the laser point map with the Nth-layer grid sub-map to obtain the Nth-layer pose map set comprises:
taking each pixel point in the Nth-layer grid sub-map as a rotation point, rotating the Nth-layer grid sub-map once by a first preset rotation angle and then matching it with the laser point map to obtain a matching pose map and a set of projection points on the matching pose map, and obtaining the matching probability of the matching pose map according to the set of projection points on the matching pose map, wherein the set of projection points on the matching pose map is the set of points at which the plurality of laser points are projected in the Nth-layer grid sub-map;
after the Nth-layer grid sub-map has rotated through one full circle, obtaining a matching pose map set for each pixel point in the Nth-layer grid sub-map, and after all the matching pose map sets are obtained, obtaining a total matching pose map set of the Nth-layer grid sub-map, wherein the frames of matching pose maps in the total set are ordered by decreasing matching probability;
starting from the matching pose map with the largest matching probability in the total matching pose map set, taking out a first preset number of matching pose maps from the total set to form the Nth-layer pose map set;
and an execution module, configured to update the current grid map according to the laser point map and determine the current position of the robot if the matching result satisfies the matching condition.
7. A robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method steps of any one of claims 1-5.
8. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method steps of any one of claims 1-5.
CN202111481096.4A 2021-12-06 2021-12-06 Positioning method and device for robot Active CN114216452B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111481096.4A (CN114216452B) | 2021-12-06 | 2021-12-06 | Positioning method and device for robot

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111481096.4A (CN114216452B) | 2021-12-06 | 2021-12-06 | Positioning method and device for robot

Publications (2)

Publication Number | Publication Date
CN114216452A | 2022-03-22
CN114216452B | 2024-03-19

Family

ID=80700023

Family Applications (1)

Application Number | Priority Date | Filing Date | Title | Status
CN202111481096.4A (CN114216452B) | 2021-12-06 | 2021-12-06 | Positioning method and device for robot | Active

Country Status (1)

Country Link
CN (1) CN114216452B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112179330A * | 2020-09-14 | 2021-01-05 | 浙江大华技术股份有限公司 | Pose determination method and device of mobile equipment
CN112258618A * | 2020-11-04 | 2021-01-22 | 中国科学院空天信息创新研究院 | Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map
CN113375683A * | 2021-06-10 | 2021-09-10 | 亿嘉和科技股份有限公司 | Real-time updating method for robot environment map
CN113483747A * | 2021-06-25 | 2021-10-08 | 武汉科技大学 | Improved AMCL (adaptive Monte Carlo localization) positioning method based on semantic map with corner information and robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108717710B * | 2018-05-18 | 2022-04-22 | 京东方科技集团股份有限公司 | Positioning method, device and system in indoor environment


Also Published As

Publication number | Publication date
CN114216452A | 2022-03-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant