CN113739785A - Robot positioning method and device and storage medium

Info

Publication number: CN113739785A
Application number: CN202010473324.2A
Authority: CN (China)
Prior art keywords: laser, frame, laser frame, current, robot
Other languages: Chinese (zh)
Inventor: 秦野
Current assignee: Hangzhou Hikrobot Technology Co Ltd
Original assignee: Hangzhou Hikrobot Technology Co Ltd
Application filed by: Hangzhou Hikrobot Technology Co Ltd
Priority: CN202010473324.2A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/005 - Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/10 - Navigation by using measurements of speed or acceleration
    • G01C 21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; dead reckoning
    • G01C 21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G01C 21/20 - Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a robot positioning method, apparatus and storage medium, where a plurality of reflective objects are deployed in the environment in which a robot is located. The method comprises: acquiring laser frames with the robot's laser sensor and, with its inertial sensor, the mileage information corresponding to each laser frame; judging whether a reflective object is observed in the current laser frame; and, if a reflective object is observed in the current laser frame, performing the following operations: determining the reflective object observed in the current laser frame and the position information of that reflective object in the robot coordinate system of the current laser frame; determining a laser frame sequence consisting of the current laser frame and the N-1 laser frames before it in which a reflective object was observed; and determining the robot pose corresponding to the current laser frame according to the position information, in the robot coordinate system of each laser frame in the sequence, of the reflective object observed in that frame and the mileage information corresponding to that frame.

Description

Robot positioning method and device and storage medium
Technical Field
The invention relates to the technical field of autonomous robot positioning, in particular to a robot positioning method, a robot positioning device and a storage medium.
Background
Autonomous positioning of a mobile robot means that the robot computes its own pose in the scene using its on-board sensors, such as a laser sensor or a code disc (encoder).
To complete autonomous positioning of the robot, a map of the robot's environment usually has to be built first, and laser positioning is then performed by matching the laser data against that map. However, in environments where goods are frequently moved, the map built earlier can differ substantially from the scene the robot actually scans (the goods occupy different positions at mapping time and when the robot later works), which causes autonomous positioning errors.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus and a storage medium for positioning a mobile robot, which can realize accurate positioning of the robot and have high versatility and robustness.
In order to achieve the purpose, the invention provides the following technical scheme:
a method of positioning a robot having a plurality of light reflecting objects disposed in an environment of the robot, the method comprising:
respectively acquiring a laser frame and mileage information corresponding to the laser frame by using a laser sensor and an inertial sensor of the robot;
judging whether a light-reflecting object is observed in the current laser frame;
if the current laser frame observes a reflective object, performing a positioning operation, wherein the positioning operation comprises the following steps:
determining a light-reflecting object observed by a current laser frame and position information of the light-reflecting object observed by the current laser frame in a robot coordinate system of the current laser frame;
determining a laser frame sequence consisting of N-1 laser frames observing a reflective object before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame.
A robot positioning apparatus in which a plurality of light-reflective objects are deployed in an environment of a robot, the apparatus comprising: a processor, and a non-transitory computer readable storage medium connected to the processor by a bus:
the non-transitory computer readable storage medium storing a computer program executable on the processor, the processor implementing the following steps when executing the program:
respectively acquiring a laser frame and mileage information corresponding to the laser frame by using a laser sensor and an inertial sensor of the robot;
judging whether a light-reflecting object is observed in the current laser frame;
if the reflecting object is observed in the current laser frame, executing positioning operation; the positioning operation comprises:
determining a light-reflecting object observed by a current laser frame and position information of the light-reflecting object observed by the current laser frame in a robot coordinate system of the current laser frame;
determining a laser frame sequence consisting of N-1 laser frames observing a reflective object before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame.
A non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps in the robot positioning method as described above.
According to the above technical solution, the laser sensor and the inertial sensor are used to acquire, respectively, the laser frames and the mileage information corresponding to each laser frame. When a reflective object is observed in the current laser frame, the robot pose corresponding to the current laser frame is determined using information about the current laser frame and the N-1 laser frames before it in which a reflective object was observed, namely the reflective object observed in each of those frames and the mileage information corresponding to each frame, thereby realizing robot positioning. In the positioning process, using the mileage information corresponding to the laser frames effectively reduces the dependence on the reflective objects observed in any single laser frame, so the method has high versatility and robustness.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flowchart of a first robot positioning method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a second robot positioning method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a third robot positioning method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a fourth robot positioning method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a fifth robot positioning method according to an embodiment of the present invention;
FIG. 6 is a flowchart of a sixth robot positioning method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a radius filtering method according to an embodiment of the present invention;
FIG. 8 is a flowchart of a seventh robot positioning method according to an embodiment of the present invention;
FIG. 9 is a flowchart of an eighth robot positioning method according to an embodiment of the present invention;
FIG. 10 is a flowchart of a ninth robot positioning method according to an embodiment of the present invention;
FIG. 11 is a flowchart of a tenth robot positioning method according to an embodiment of the invention;
fig. 12 is a schematic structural diagram of a robot positioning device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For autonomous positioning errors caused by position changes of objects in the robot's environment, the related art deploys reflective strips or reflective columns in the environment for auxiliary positioning. For example, in the scheme of patent CN109613550A, reflective strips are separated from the laser data and the robot is positioned using the observed reflective strips; however, that scheme requires a sufficient number of reflective strips (3 or more) to be observed every time, otherwise the robot cannot be positioned from the observed strips. In actual use, because the scanning angle of the laser is small (e.g., only 180 degrees) and the pre-deployed reflective strips may be occluded, it is difficult to guarantee that every laser frame observes 3 or more reflective strips, so the observed reflective strips cannot always be used to position the robot.
In the embodiment of the invention, the positions of the reflective objects in a plurality of laser frames and the mileage information of the plurality of laser frames, which are acquired in the moving process of the robot, are utilized to predict the robot poses of the plurality of laser frames, so that the positioning of the laser frames is realized.
The following description is made for the purpose of illustrating the principles of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a robot positioning method according to an embodiment of the present invention, where a plurality of light-reflecting objects are deployed in an environment where a robot is located; as shown in fig. 1, the method comprises the steps of:
step 101, respectively utilizing a laser sensor and an inertial sensor of the robot to acquire a laser frame and mileage information corresponding to the laser frame in the moving process of the robot.
In this embodiment, the robot itself is configured with a laser sensor and an inertial sensor, wherein:
and the laser sensor is used for acquiring laser data of the environment where the robot is located. The laser sensor can continuously acquire laser data in the moving process of the robot.
And the inertial sensor, such as a speedometer, a coded disc and the like, is used for acquiring mileage information in the moving process of the robot. In this embodiment, the mileage information whose acquisition time is the same as the laser data acquisition time of the laser frame is the mileage information corresponding to the laser frame.
Step 102, judging whether a reflective object is observed in the current laser frame, and if a reflective object is observed in the current laser frame, executing a positioning operation; the positioning operation includes the following steps 103 to 104.
In the present embodiment, the light-reflecting object refers to an object made of a light-reflecting material having a high reflectance, such as a light-reflecting stripe, a light-reflecting column (i.e., a stripe-shaped object, a columnar object made of a light-reflecting material having a high reflectance characteristic), and the like.
And 103, determining the reflecting object observed by the current laser frame and the position information of the reflecting object observed by the current laser frame in the robot coordinate system of the current laser frame.
And 104, determining a laser frame sequence consisting of N-1 laser frames observing the reflective object before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame.
In the embodiment of the invention, the robot pose corresponding to the current laser frame refers to the pose of the robot when the laser sensor acquires the laser frame.
As can be seen from the method shown in fig. 1, in this embodiment the laser sensor and the inertial sensor are used to acquire, respectively, the laser frames and the mileage information corresponding to each laser frame. When a reflective object is observed in the current laser frame, the robot pose corresponding to the current laser frame is determined using information about the current laser frame and the N-1 earlier laser frames in which a reflective object was observed, namely the reflective object observed in each of those frames and the mileage information corresponding to each frame, so as to realize robot positioning.
Referring to fig. 2, fig. 2 is a flowchart of a robot positioning method according to a second embodiment of the present invention, where a plurality of light-reflecting objects are deployed in an environment where a robot is located; as shown in fig. 2, the method comprises the steps of:
step 201, respectively using a laser sensor and an inertial sensor of the robot to acquire a laser frame and mileage information corresponding to the laser frame in the moving process of the robot.
In the embodiment of the invention, the robot is provided with a laser sensor and an inertial sensor, wherein:
and the laser sensor is used for acquiring laser data of the environment where the robot is located. The laser sensor can continuously acquire laser data in the moving process of the robot.
And the inertial sensor, such as a speedometer, a coded disc and the like, is used for acquiring mileage information in the moving process of the robot. In the embodiment of the invention, the mileage information with the same acquisition time as the laser data acquisition time of the laser frame is the mileage information corresponding to the laser frame.
Step 202, judging whether the current laser frame observes a reflective object, if so, executing the positioning operation steps 203 to 204, otherwise, executing the step 205.
In the embodiment of the present invention, the light-reflecting object refers to an object made of a light-reflecting material having a high reflectance, such as a light-reflecting stripe, a light-reflecting column (i.e., a stripe-shaped object, a columnar object made of a light-reflecting material having a high reflectance characteristic), and the like.
Step 203, determining the reflective object observed by the current laser frame and the position information of the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame.
And 204, determining a laser frame sequence consisting of N-1 laser frames observing the reflective object before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame.
In the embodiment of the invention, the robot pose corresponding to the current laser frame refers to the pose of the robot when the laser sensor acquires the laser frame.
And step 205, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame.
In this embodiment, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame specifically includes:
s11, determining mileage change information between the current laser frame and a laser frame before the current laser frame according to the mileage information corresponding to the current laser frame and the mileage information corresponding to a laser frame before the current laser frame;
and S12, determining the robot pose corresponding to the current laser frame according to the determined mileage change information and the robot pose corresponding to the laser frame before the current laser frame.
In step S11, the laser frame before the current laser frame may be any laser frame preceding the current laser frame, for example, the laser frame immediately before the current laser frame.
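The two sub-steps S11 and S12 amount to ordinary dead reckoning. A minimal illustrative sketch follows, assuming poses and odometry readings are planar (x, y, theta) triples in consistent units; the function and variable names are hypothetical and not taken from the patent.

```python
import math

def dead_reckon(prev_pose, prev_odom, curr_odom):
    """Estimate the robot pose for the current laser frame from a previously
    located laser frame and the odometry change between the two frames.

    prev_pose : (x, y, theta) of the robot at the earlier laser frame (world frame).
    prev_odom : (x, y, theta) reported by the odometry at the earlier laser frame.
    curr_odom : (x, y, theta) reported by the odometry at the current laser frame.
    """
    # S11: mileage change between the two frames, expressed in the earlier
    # frame's body coordinates.
    dx = curr_odom[0] - prev_odom[0]
    dy = curr_odom[1] - prev_odom[1]
    dtheta = curr_odom[2] - prev_odom[2]
    c, s = math.cos(-prev_odom[2]), math.sin(-prev_odom[2])
    local_dx = c * dx - s * dy            # relative motion in the earlier body frame
    local_dy = s * dx + c * dy

    # S12: apply that relative motion to the located pose of the earlier frame.
    x, y, theta = prev_pose
    c, s = math.cos(theta), math.sin(theta)
    return (x + c * local_dx - s * local_dy,
            y + s * local_dx + c * local_dy,
            theta + dtheta)
```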
As can be seen from the method shown in fig. 2, in this embodiment, when a reflective object is observed in the current laser frame, the robot pose corresponding to the current laser frame is determined using information about the current laser frame and the N-1 earlier laser frames in which a reflective object was observed, namely the reflective object observed in each of those frames and the mileage information corresponding to each frame, so as to realize robot positioning. When no reflective object is observed in the current laser frame, the robot pose corresponding to the current laser frame is determined directly from the mileage information of the current laser frame together with the robot pose and the mileage information corresponding to a laser frame before the current laser frame, which makes the positioning simpler.
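For illustration, the two branches of fig. 2 can be organized as a single update routine like the sketch below. The window size N = 4, the helper callables (sees_reflector, solve_window, dead_reckon) and the data layout are all assumptions; the helpers stand for the operations detailed in the later embodiments.

```python
from typing import Any, Callable, Deque, Tuple

Pose = Tuple[float, float, float]        # (x, y, theta) in the world frame
Frame = Tuple[Any, Pose]                 # (laser data, mileage info), an assumed layout
N = 4                                    # window size; the patent leaves N unspecified

def update_pose(window: Deque[Frame],
                frame: Frame,
                prev_located: Tuple[Pose, Pose],
                sees_reflector: Callable[[Any], bool],
                solve_window: Callable[[Deque[Frame]], Pose],
                dead_reckon: Callable[[Pose, Pose, Pose], Pose]) -> Pose:
    """One positioning update, mirroring the two branches of Fig. 2.

    window       : frames before the current one in which a reflective object was observed.
    frame        : the current laser frame and its mileage information.
    prev_located : (pose, mileage) of a previously located laser frame.
    """
    laser, odom = frame
    if sees_reflector(laser):                 # step 202: reflective object observed?
        window.append(frame)                  # current frame joins the N-frame sequence
        while len(window) > N:
            window.popleft()
        return solve_window(window)           # steps 203-204, detailed in the Fig. 8 embodiment
    prev_pose, prev_odom = prev_located
    return dead_reckon(prev_pose, prev_odom, odom)   # step 205: dead reckoning
```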
Referring to fig. 3, fig. 3 is a flowchart of a robot positioning method provided by a third embodiment of the present invention, where a plurality of light-reflecting objects are deployed in an environment where a robot is located; as shown in fig. 3, the method comprises the steps of:
step 301, acquiring a laser frame and mileage information corresponding to the laser frame in the moving process of the robot by using a laser sensor and an inertial sensor of the robot respectively.
Step 3021, determining whether the current laser frame meets a condition as a key frame, if yes, executing step 3022, otherwise, executing step 305;
and step 3022, judging whether the current laser frame observes a reflective object, and if the current laser frame observes a reflective object, executing step 3023.
Step 3023, setting the current laser frame as a key frame, and performing a positioning operation; the positioning operation comprises: step 303 to step 304;
step 303, determining the position information of the reflective object observed by the current laser frame and the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame.
Step 304, determining a laser frame sequence composed of the first N-1 key frames before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information, in the robot coordinate system of each laser frame in the sequence, of the reflective object observed in that laser frame and the mileage information corresponding to that laser frame.
The implementation of "determining a key frame sequence composed of the first N-1 key frames of the current key frame and the current key frame, and determining the robot pose corresponding to the current key frame according to the position information of the light-reflecting object observed by each key frame in the key frame sequence in the robot coordinate system of the key frame and the mileage information corresponding to the key frame" refers to steps 8041 to 8044 of the method shown in fig. 8, which are not described in detail herein.
The above step 304 is a specific refinement of the step 104 shown in fig. 1, that is, in this embodiment, the first N-1 key frames of the current laser frame are taken as N-1 laser frames observed to reflect the light object before the current laser frame, and form a laser frame sequence with the current laser frame.
And 305, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame.
In this embodiment, the robot pose corresponding to the current laser frame is determined according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame, which is the same as the implementation method of step 205 shown in fig. 2, that is, steps S11 and S12 are also included.
It should be noted that, in this embodiment, the laser frame before the current laser frame may be any laser frame preceding the current laser frame; preferably, it is the laser frame immediately before the current laser frame or the last key frame before the current laser frame.
The above two branch flows, steps 3021 to 3023 and steps 3021 to 305, are specific refinements of "judging whether a reflective object is observed in the current laser frame, and if a reflective object is observed in the current laser frame, performing a positioning operation" in step 102 shown in fig. 1.
As can be seen from the method shown in fig. 3, in this embodiment, when the current laser frame satisfies the key-frame condition and a reflective object is observed in it, the current laser frame is set as a key frame, and the robot pose corresponding to the current laser frame is determined using information about the current laser frame and its first N-1 preceding key frames, namely the reflective object observed in each key frame and the mileage information corresponding to each key frame, so that the robot is positioned. Using the mileage information corresponding to the key frames minimizes the dependence of the positioning process on reflective objects, so the method has high versatility and robustness.
Referring to fig. 4, fig. 4 is a flowchart of a robot positioning method according to a fourth embodiment of the present invention, where a plurality of reflective objects are deployed in the environment where the robot is located; as shown in fig. 4, the method comprises the following steps:
step 401, respectively using a laser sensor and an inertial sensor of the robot to acquire a laser frame and mileage information corresponding to the laser frame in the moving process of the robot.
Step 4021a, determining the number of interval frames between the current laser frame and the latest key frame before the current laser frame.
Step 4021b, judging whether the interval frame number is greater than a preset interval frame number threshold, if so, determining that the current laser frame meets the condition of being a key frame, and executing step 4022; otherwise, it is determined that the current laser frame does not satisfy the condition as a key frame, and step 405 is performed.
In this embodiment, the preset interval frame number threshold is a positive integer, for example, the preset interval frame number threshold is 5.
The above steps 4021a and 4021b are one possible implementation of step 3021 shown in fig. 3.
Step 4022, judging whether the current laser frame observes a reflective object, if so, executing the following step 4023.
Step 4023, setting the current laser frame as a key frame, and executing positioning operation; the positioning operation includes steps 403 to 404.
And step 403, determining the reflective object observed by the current laser frame and the position information of the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame.
Step 404, determining a laser frame sequence composed of the first N-1 key frames before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information, in the robot coordinate system of each laser frame in the sequence, of the reflective object observed in that laser frame and the mileage information corresponding to that laser frame.
The above step 404 is a specific refinement of the step 104 shown in fig. 1, that is, in this embodiment, the first N-1 key frames of the current laser frame are taken as N-1 laser frames observed to reflect the light object before the current laser frame, and form a laser frame sequence with the current laser frame.
And 405, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame.
In this embodiment, the robot pose corresponding to the current laser frame is determined according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame, which is the same as the implementation method of step 205 shown in fig. 2, that is, steps S11 and S12 are also included.
It should be noted that, in this embodiment, the laser frame before the current laser frame may be any laser frame preceding the current laser frame; preferably, it is the laser frame immediately before the current laser frame or the last key frame before the current laser frame.
The above two branch flows of steps 4021a to 4023 and steps 4021a to 405 are detailed refinements of "determining whether the current laser frame observes a reflective object, and if the current laser frame observes a reflective object, performing a positioning operation" in step 102 shown in fig. 1.
As can be seen from the method shown in fig. 4, in addition to the advantages of the embodiment shown in fig. 3, this embodiment uses whether the number of frames between the current laser frame and the nearest key frame before it exceeds the preset interval-frame-number threshold as the condition for judging whether the current laser frame qualifies as a key frame. This avoids the poor positioning effect caused by too small frame intervals between the N laser frames participating in the positioning calculation of the current laser frame: when the frame intervals between key frames are small, the corresponding robot poses are very close to each other, and poses that are too close degrade the final positioning result.
Referring to fig. 5, fig. 5 is a flowchart of a robot positioning method according to a fifth embodiment of the present invention, where a plurality of reflective objects are deployed in the environment where the robot is located; as shown in fig. 5, the method comprises the following steps:
and 501, acquiring a laser frame and mileage information corresponding to the laser frame in the moving process of the robot by using a laser sensor and an inertial sensor of the robot respectively.
Step 5021a, calculating the mileage difference value between the mileage information corresponding to the current laser frame and the mileage information corresponding to the latest key frame before the current laser frame.
Step 5021b, judging whether the mileage difference value is larger than a preset mileage threshold value, if so, determining that the current laser frame meets the condition of being used as a key frame, and executing step 5022; otherwise, it is determined that the current laser frame does not satisfy the condition as a key frame, and step 505 is performed.
In this embodiment, the preset mileage threshold is a positive number, for example, 8.
The above steps 5021a and 5021b are another possible implementation of step 3021 shown in fig. 3.
Step 5022, judging whether the current laser frame observes a reflective object, if so, executing the following operation step 5023.
Step 5023, setting the current laser frame as a key frame and executing positioning operation; the positioning operation includes the following steps 503 to 504;
step 503, determining the reflective object observed by the current laser frame and the position information of the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame.
Step 504, determining a laser frame sequence composed of the first N-1 key frames of the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame.
The above step 504 is a specific refinement of the step 104 shown in fig. 1, that is, in this embodiment, the first N-1 key frames of the current laser frame are taken as N-1 laser frames observed to reflect the light object before the current laser frame, and form a laser frame sequence with the current laser frame.
And 505, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame.
In this embodiment, the robot pose corresponding to the current laser frame is determined according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame, which is the same as the implementation method of step 205 shown in fig. 2, that is, steps S11 and S12 are also included.
It should be noted that, in this embodiment, the laser frame before the current laser frame may be any laser frame preceding the current laser frame; preferably, it is the laser frame immediately before the current laser frame or the last key frame before the current laser frame.
The above two branch flows of steps 5021a to 5023 and steps 5021a to 505 are detailed refinements of "determining whether the current laser frame observes the reflective object, and if the current laser frame observes the reflective object, performing the positioning operation" in step 102 shown in fig. 1.
As can be seen from the method shown in fig. 5, in addition to the advantages of the embodiment shown in fig. 3, this embodiment uses whether the mileage difference between the current laser frame and the nearest key frame before it exceeds the preset mileage threshold as the condition for judging whether the current laser frame qualifies as a key frame. This avoids the poor positioning effect caused by too small distance intervals between the N laser frames participating in the positioning calculation of the current laser frame: when the distance intervals between key frames are small, the corresponding robot poses are very close to each other, and poses that are too close degrade the final positioning result.
The steps 4021a to 4021b shown in fig. 4 and the steps 5021a to 5021b shown in fig. 5 are two possible implementations of step 3021 shown in fig. 3. In practical applications, a third implementation may be adopted, namely a combination of these two implementations; that is, step 3021 shown in fig. 3 may also be realized by the following method:
determining the number of interval frames between a current laser frame and a latest key frame before the current laser frame, calculating the mileage difference value between the mileage information corresponding to the current laser frame and the mileage information corresponding to the latest key frame before the current laser frame, if the number of interval frames is greater than a preset interval frame number threshold value and/or the mileage difference value is greater than a preset mileage threshold value, determining that the current laser frame meets the condition as the key frame, and otherwise, determining that the current laser frame does not meet the condition as the key frame.
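A sketch of this combined key-frame test, assuming the mileage information can be reduced to a cumulative travelled distance and reusing the example threshold values given above (5 frames, 8 distance units); the text allows "and/or", and the sketch uses "or".

```python
def is_keyframe_candidate(curr_index, curr_mileage,
                          last_kf_index, last_kf_mileage,
                          frame_gap_threshold=5, mileage_threshold=8.0):
    """Return True if the current laser frame satisfies the key-frame condition.

    curr_index / last_kf_index     : indices of the current laser frame and of the
                                     most recent key frame before it.
    curr_mileage / last_kf_mileage : cumulative odometer distance at those frames
                                     (an assumed scalar reduction of the mileage info).
    """
    frame_gap = curr_index - last_kf_index               # steps 4021a / 4021b
    mileage_gap = abs(curr_mileage - last_kf_mileage)    # steps 5021a / 5021b
    return frame_gap > frame_gap_threshold or mileage_gap > mileage_threshold
```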
Referring to fig. 6, fig. 6 is a flowchart of a robot positioning method according to a sixth embodiment of the present invention, where a plurality of reflective objects are deployed in the environment where the robot is located; as shown in fig. 6, the method comprises the following steps:
step 601, respectively utilizing a laser sensor and an inertial sensor of the robot to acquire a laser frame and mileage information corresponding to the laser frame in the moving process of the robot.
Step 602, determining whether the current laser frame observes a reflective object, if so, performing the following positioning steps 6031 to 604.
In this embodiment, the laser frame includes laser point information, and each laser point information includes information such as a reflectivity of the laser point, a distance between the laser sensor and the laser point, and an observation angle of the laser sensor to the laser point.
In the present embodiment, the light-reflecting object refers to an object made of a light-reflecting material having a high reflectance, such as a light-reflecting stripe, a light-reflecting column (i.e., a stripe-shaped object, a columnar object made of a light-reflecting material having a high reflectance characteristic), and the like.
In this embodiment, the method for determining whether the reflective object is observed in the current laser frame may specifically be as follows:
judging whether the laser points belonging to the reflective object are contained in the current laser frame or not (namely, the laser points with the reflectivity exceeding a preset reflectivity threshold value) according to the reflectivity of each laser point in the current laser frame, if so, determining that the reflective object is observed in the current laser frame, otherwise, determining that the reflective object is not observed in the current laser frame.
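A minimal sketch of this reflectivity test, assuming each laser point carries a reflectivity value; the per-point representation and the threshold value are illustrative.

```python
def observes_reflector(points, reflectivity_threshold=0.8):
    """Return True if the current laser frame contains at least one laser point
    whose reflectivity exceeds the preset reflectivity threshold, i.e. a point
    taken to belong to a reflective object.

    points: iterable of dicts with keys 'range', 'angle' and 'reflectivity'
            (an assumed per-point representation).
    """
    return any(p['reflectivity'] > reflectivity_threshold for p in points)
```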
Step 6031, determining the position information of the laser point in the robot coordinate system of the current laser frame according to the distance between the laser sensor and each laser point in the current laser frame and the observation angle of the laser point;
in practical applications, for each laser spot observed by the laser sensor, the distance from the laser sensor to the laser spot and the observation angle of the laser sensor to the laser spot determine the positional information of the laser spot in a coordinate system (i.e., a robot coordinate system) constructed with the laser sensor as the origin.
Step 6032, determining laser points belonging to a reflective object according to the reflectivity of each laser point in the current laser frame, and dividing the laser points belonging to the reflective object, of which the distance between the laser points is smaller than a first preset distance threshold value, into the same group according to the position information of the laser points belonging to the reflective object in the robot coordinate system of the current laser frame;
in practical application, the reflectivity of each laser point in a laser frame determines whether the laser point belongs to a reflective object, if the reflectivity is higher than a preset reflectivity threshold value, the laser point belongs to the reflective object, otherwise, the laser point does not belong to the reflective object. Because the reflecting objects are dispersedly deployed in the environment where the robot is located, when one laser frame comprises a plurality of reflecting objects, the distances between all laser points belonging to the same reflecting object are short, and the distances between position points belonging to different reflecting objects are long, so that grouping can be performed according to the position information of each laser point in the laser frame, the laser points with short distances are divided into the same group, and the laser points in the group belong to the same reflecting object.
Step 6033, determining the area covered by each group of laser points as a reflective object observed by the current laser frame, and determining the position information of the reflective object in the robot coordinate system of the current laser frame according to the position information of the group of laser points in the robot coordinate system of the current laser frame.
In practical applications, because a laser frame acquired by the laser sensor may contain stray points, each group of laser points obtained after grouping the current laser frame may contain noise points (also referred to as outliers), which may make the result inaccurate when the position information of the reflective object in the robot coordinate system of the current laser frame is determined from the position information of that group of laser points.
Therefore, in the embodiment of the present invention, before the position information of the reflective object in the robot coordinate system of the current laser frame is determined from the position information of the group of laser points in that coordinate system, the group of laser points may first be subjected to outlier rejection. Outliers in the group can be removed by a Radius Outlier Removal method, which works as follows: for each laser point in the group, if the number of other laser points in the group whose distance to that laser point is smaller than a second preset distance threshold is smaller than a preset number of laser points, that laser point is removed from the group.
As shown in fig. 7, in the schematic diagram of the radius filtering method provided by the embodiment of the present invention, in a group of laser points, each laser point needs to have enough neighbors within a certain distance range (i.e. within a second preset distance threshold, in fig. 7, the second preset distance threshold is the radius of the circle shown in fig. 7), for example, if at least 1 neighbor is specified (the number of preset laser points is 1), the laser point in the center of the circle 1 needs to be deleted; if it is specified that there are at least 2 neighbors (the preset number of laser points is 2), the laser points at the centers of the circle 2 and the circle 3 need to be deleted in addition to the laser point at the center of the circle 1.
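A direct sketch of this radius-based rejection; the second preset distance threshold and the preset number of laser points are the parameters named in the text, and the values shown are placeholders.

```python
import math

def radius_outlier_removal(group, second_distance_threshold=0.05, min_neighbors=2):
    """Remove outliers from one group of laser points belonging to a reflector.

    A point is discarded if fewer than `min_neighbors` other points of the group
    lie within `second_distance_threshold` of it (both values are placeholders).
    """
    kept = []
    for i, p in enumerate(group):
        neighbors = sum(1 for j, q in enumerate(group)
                        if i != j and math.dist(p, q) < second_distance_threshold)
        if neighbors >= min_neighbors:
            kept.append(p)
    return kept
```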
In an embodiment of the present invention, the method for determining the position information of the reflective object in the robot coordinate system of the current laser frame according to the position information of the group of laser points in the robot coordinate system of the current laser frame may specifically be:
and calculating the mean value of the abscissa and the mean value of the ordinate of the group of laser points in the robot coordinate system of the current laser frame, and taking the coordinate position determined by the mean value of the abscissa and the mean value of the ordinate as the position information of the light-reflecting object in the robot coordinate system of the current laser frame.
For example, let (x_k, y_k) denote the position information of the k-th laser point of the group in the robot coordinate system of the current laser frame. Then the coordinate

\[
\left(\frac{1}{m}\sum_{k=1}^{m} x_k,\ \frac{1}{m}\sum_{k=1}^{m} y_k\right)
\]

is the position information of the reflective object in the robot coordinate system of the current laser frame, where m is the number of laser points in the group.
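In code, this is simply the centroid of the (filtered) group; a trivial sketch:

```python
def reflector_position(group):
    """Centroid of a group of laser points: the mean abscissa and mean ordinate,
    taken as the position of the reflective object in the robot coordinate
    system of the current laser frame."""
    m = len(group)
    return (sum(x for x, _ in group) / m, sum(y for _, y in group) / m)
```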
The above steps 6031 to 6033 are a specific refinement of step 103 shown in fig. 1.
Step 604, determining a laser frame sequence consisting of N-1 laser frames observed to reflect objects before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflecting objects observed by each laser frame in the robot coordinate system of the laser frame in the laser frame sequence and the mileage information corresponding to the laser frame.
As can be seen from the method shown in fig. 6, in this embodiment, the high-reflectivity characteristic of reflective objects is used to determine the laser points in the current laser frame that belong to a reflective object, and the reflective objects in the current laser frame are identified accordingly, so the identification accuracy is high. In addition, the invention determines the robot pose corresponding to the current laser frame using information about the current laser frame and its first N-1 key frames, namely the reflective object observed in each frame and the mileage information corresponding to each frame, so that the robot is positioned; using the mileage information corresponding to the key frames minimizes the dependence of the positioning process on reflective objects, so the method has high versatility and robustness.
Referring to fig. 8, fig. 8 is a flowchart of a robot positioning method according to a seventh embodiment of the present invention, where a plurality of reflective objects are deployed in the environment where the robot is located; as shown in fig. 8, the method comprises the following steps:
step 801, respectively utilizing a laser sensor and an inertial sensor of the robot to acquire a laser frame and mileage information corresponding to the laser frame in the moving process of the robot.
Step 802, determining whether the current laser frame observes a reflective object, if so, performing the following operation steps 803 to 8044.
Step 803, determining the reflective object observed by the current laser frame and the position information of the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame.
Step 8041, determining a laser frame sequence consisting of N-1 laser frames observed to reflect the object before the current laser frame and the current laser frame;
step 8042, determining first position information of the reflective object in a world coordinate system according to position information of the reflective object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames, and describing second position information of the reflective object in the world coordinate system by using the position information of the reflective object in the laser frame in the robot coordinate system of the laser frame and the assumed pose variable of the laser frame;
in practical applications, there may be more than one reflective object observed in each laser frame of the sequence of laser frames. In the embodiment of the invention, for a certain laser frame, only the first position information and the second position information of a reflective object in the laser frame in the world coordinate system can be determined; first position information and second position information of a plurality of light-reflecting objects in the laser frame in a world coordinate system can also be determined; first and second position information of all light-reflecting objects in the laser frame in the world coordinate system can also be determined.
8043, calculating first position and posture change information between two adjacent laser frames according to the respective corresponding mileage information of the two laser frames in the laser frame sequence, and describing second position and posture change information between the two laser frames by using the assumed position and posture variables of the two laser frames;
step 8044, determining an assumed pose variable value of each laser frame when the difference between the first position information and the second position information of the reflective object observed by each laser frame in the sequence of laser frames is minimized and the difference between the first pose change information and the second pose change information between two adjacent laser frames is minimized, and determining the assumed pose variable value of the current laser frame as the robot pose corresponding to the current laser frame.
The above steps 8042 to 8044 are specific refinements of "determining the robot pose corresponding to the current laser frame according to the position information of the light-reflecting object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame" in step 104 shown in fig. 1.
The above steps 8041 to 8044 are specific refinements of step 104 shown in fig. 1.
As can be seen from the method shown in fig. 8, in this embodiment, the position information of the reflective object observed in each laser frame of the sequence, expressed in that frame's robot coordinate system, is used to determine the first position information of the reflective object, while the same position information together with the assumed pose variable of the frame is used to describe the second position information of the reflective object. In addition, the mileage information of the laser frames in the sequence is used to calculate the first pose change between adjacent laser frames, while the assumed pose variables of the frames are used to describe the second pose change between adjacent laser frames. By driving the first and second position information of the reflective objects, and the first and second pose change information between adjacent frames, toward agreement, the robot poses of all laser frames in the sequence are determined, thereby realizing the positioning of the current laser frame. In the invention, because the assumed pose variable values are determined by minimizing the difference between the first and second position information of the reflective object observed in each laser frame of the sequence and minimizing the difference between the first and second pose change information between adjacent laser frames, the minimization can be performed even if only one reflective object is observed in each laser frame.
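Steps 8042 to 8044 can be read as a small nonlinear least-squares problem over the N assumed poses: landmark residuals (each observed reflector transformed into the world frame minus its first position information) plus odometry residuals between adjacent frames. The sketch below uses SciPy's generic solver; the data layout, the weight and the extra anchor term on the oldest pose are assumptions, and this is only one possible way to realize the minimization described in the text.

```python
import math
import numpy as np
from scipy.optimize import least_squares

def to_world(pose, point):
    """Map a point from a laser frame's robot coordinate system into the world
    frame, given that frame's (assumed) pose (x, y, t)."""
    x, y, t = pose
    px, py = point
    return (x + math.cos(t) * px - math.sin(t) * py,
            y + math.sin(t) * px + math.cos(t) * py)

def solve_window(initial_poses, observations, odom_deltas, odom_weight=1.0):
    """Jointly estimate the poses of the N frames in the laser frame sequence.

    initial_poses : N initial (x, y, t) guesses (e.g. earlier results, plus dead
                    reckoning for the current frame).
    observations  : per frame, a list of (robot_xy, world_xy) pairs, where
                    world_xy is the reflector's first position information in
                    the world coordinate system.
    odom_deltas   : N-1 relative motions (dx, dy, dt) between adjacent frames,
                    computed from their mileage information.
    Returns the optimized pose of the last (current) laser frame.
    """
    def residuals(flat):
        poses = flat.reshape(-1, 3)
        res = []
        # Landmark terms: second position information should match the first.
        for i, frame_obs in enumerate(observations):
            for robot_xy, world_xy in frame_obs:
                wx, wy = to_world(poses[i], robot_xy)
                res += [wx - world_xy[0], wy - world_xy[1]]
        # Odometry terms: pose change between adjacent frames should match odometry.
        for i, (dx, dy, dt) in enumerate(odom_deltas):
            xi, yi, ti = poses[i]
            xj, yj, tj = poses[i + 1]
            c, s = math.cos(-ti), math.sin(-ti)
            ddx, ddy = xj - xi, yj - yi
            res += [odom_weight * (c * ddx - s * ddy - dx),
                    odom_weight * (s * ddx + c * ddy - dy),
                    odom_weight * ((tj - ti) - dt)]
        # Anchor term (an addition, not named in the text): keep the oldest,
        # already located pose near its initial value to fix the gauge.
        res += list(poses[0] - np.asarray(initial_poses[0], float))
        return np.array(res)

    result = least_squares(residuals, np.asarray(initial_poses, float).ravel())
    return result.x.reshape(-1, 3)[-1]        # robot pose of the current laser frame
```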
Referring to fig. 9, fig. 9 is a flowchart of a robot positioning method according to an eighth embodiment of the present invention, where a plurality of reflective objects are deployed in the environment where the robot is located; as shown in fig. 9, the method includes the following steps:
and 901, acquiring a laser frame and mileage information corresponding to the laser frame in the moving process of the robot by respectively using a laser sensor and an inertial sensor of the robot.
And 902, judging whether the current laser frame observes a reflective object, and if the current laser frame observes the reflective object, executing the following operation steps 903 to 9044.
Step 903, determining a reflective object observed by the current laser frame and position information of the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame.
9041, determining a laser frame sequence consisting of N-1 laser frames observing a reflective object before the current laser frame and the current laser frame;
9042a, determining an estimated position of a reflective object according to position information of the reflective object in a robot coordinate system of each laser frame observed by each laser frame in the sequence of laser frames and a robot pose corresponding to a previous laser frame of the laser frame in the sequence of laser frames;
in an embodiment of the present invention, the position information (including coordinates) of a reflective object in the laser frame in the robot coordinate system of the laser frame may be converted into the position information of the reflective object in the world coordinate system by using the following formula 1:
\[
\begin{bmatrix} lwx \\ lwy \\ 1 \end{bmatrix}
=
\begin{bmatrix} \cos t & -\sin t & x \\ \sin t & \cos t & y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} lx \\ ly \\ 1 \end{bmatrix}
\qquad \text{(Formula 1)}
\]

In the above Formula 1, (lwx, lwy) are the coordinates of the reflective object in the world coordinate system; (lx, ly) are the coordinates of the reflective object in the robot coordinate system corresponding to the laser frame; (x, y, t) is the robot pose corresponding to the laser frame (the robot's position and heading in the world coordinate system); and

\[
\begin{bmatrix} \cos t & -\sin t & x \\ \sin t & \cos t & y \\ 0 & 0 & 1 \end{bmatrix}
\]

is the conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system.
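A direct transcription of Formula 1, with the conversion matrix built from a pose (x, y, t); NumPy is used only for the homogeneous-coordinate product.

```python
import numpy as np

def conversion_matrix(pose):
    """Conversion matrix from the robot coordinate system of a laser frame,
    whose pose in the world frame is (x, y, t), to the world coordinate system."""
    x, y, t = pose
    return np.array([[np.cos(t), -np.sin(t), x],
                     [np.sin(t),  np.cos(t), y],
                     [0.0,        0.0,       1.0]])

def reflector_to_world(pose, reflector_xy):
    """Formula 1: (lx, ly) in the robot coordinate system -> (lwx, lwy) in the world frame."""
    lx, ly = reflector_xy
    lwx, lwy, _ = conversion_matrix(pose) @ np.array([lx, ly, 1.0])
    return lwx, lwy
```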
In the present embodiment, in the laser frame sequence consisting of the current laser frame and N-1 laser frames observing the reflective object before the current laser frame, other laser frames than the current laser frame have been located before. And because the acquisition time interval between two adjacent laser frames in the laser frame sequence is shorter and the corresponding pose difference is smaller, when the first position information of the reflective object is determined according to the position information of the reflective object in the robot coordinate system of the laser frame observed by each laser frame in the laser frame sequence, the first position information of the reflective object can be estimated by using the robot pose of the laser frame before the current laser frame in the laser frame sequence.
Here, the estimated position of the reflective object is determined according to the position information of the reflective object observed by each laser frame in the sequence of laser frames in the robot coordinate system of the laser frame and the robot pose corresponding to the laser frame before the laser frame in the sequence of laser frames, and actually, the robot pose corresponding to the laser frame before the laser frame is taken as the robot pose corresponding to the laser frame, and then the estimated position of the reflective object is obtained by using the above formula 1.
Therefore, in this embodiment, determining the estimated position of the reflective object according to the position information of the reflective object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames and the robot pose corresponding to the previous laser frame of the laser frame in the sequence of laser frames specifically includes:
s21, determining a transformation matrix from a robot coordinate system corresponding to the previous laser frame to a world coordinate system according to the robot pose corresponding to the previous laser frame of the laser frames in the laser frame sequence, and taking the transformation matrix as the transformation matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system;
and S22, calculating the estimated position of the reflective object according to the position information of the reflective object in the robot coordinate system of the laser frame and the conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system.
In step S22, the estimated position of the reflective object is calculated according to the position information of the reflective object in the robot coordinate system of the laser frame and the transformation matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system, and specifically, the estimated position of the reflective object can be obtained by substituting the position information of the reflective object in the robot coordinate system of the laser frame and the transformation matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system into the formula 1.
Step 9042b, calculating a distance between the light-reflecting object and each light-reflecting object in the prior map of the environment where the robot is located according to the estimated position of the light-reflecting object observed by each laser frame in the sequence of laser frames, and determining a position of the light-reflecting object closest to the light-reflecting object in the prior map of the environment where the robot is located as first position information of the light-reflecting object in the world coordinate system.
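A minimal sketch of this nearest-neighbour matching follows, assuming the prior map is simply an array of reflector coordinates in the world coordinate system; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def match_to_prior_map(estimated_pos, prior_map):
    """Return the prior-map reflector position closest to the estimated position;
    prior_map is an (M, 2) array of reflector coordinates in the world frame."""
    prior_map = np.asarray(prior_map, dtype=float)
    dists = np.linalg.norm(prior_map - np.asarray(estimated_pos, dtype=float), axis=1)
    return prior_map[np.argmin(dists)]  # used as the first position information

# Hypothetical prior map with three reflectors.
prior = [(0.0, 0.0), (5.0, 1.0), (2.0, 8.0)]
print(match_to_prior_map((4.8, 1.2), prior))  # -> [5. 1.]
```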
The above steps 9042a to 9042b are specific refinements of "determining the first position information of the reflective object in the world coordinate system according to the position information of the reflective object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames" in step 8042 shown in fig. 8.
Step 9042c, describing a conversion matrix from the robot coordinate system corresponding to the laser frame to a world coordinate system by using the assumed pose variable of each laser frame in the sequence of the laser frames;
in the embodiment of the invention, for the laser frame sequence consisting of the current laser frame and N-1 laser frames observing the reflective objects before the current laser frame, although other laser frames except the current laser frame have been positioned before, when the current laser frame is positioned, an assumed pose variable can be set for each laser frame in the laser frame sequence, for example, the assumed pose of the ith laser frame in the laser frame sequence is set to (x)i,yi,ti) Then, second position information in the world coordinate system of the reflective object can be determined according to the position information of the reflective object in the robot coordinate system of the laser frame and the assumed pose variable of the laser frame in each laser frame of the sequence of laser frames, i.e. step 9042d below.
Step 9042d, using a product of position information of the light-reflecting object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames and a transformation matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system, to represent second position information of the light-reflecting object in the world coordinate system.
In this embodiment, step 9042d is implemented by using the above formula 1, that is: and substituting the position information of the reflective object in the robot coordinate system of the laser frame and a conversion matrix from the robot coordinate system corresponding to the laser frame to a world coordinate system into the formula 1.
For example, assume that the position coordinates of the light reflecting object in the robot coordinate system of the i-th laser frame are (lx_i, ly_i) and that the assumed pose variable corresponding to the laser frame is (x_i, y_i, t_i); then the following operations may be performed:

1) describing the conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system by using the assumed pose variable of the laser frame:

$$\begin{bmatrix} \cos t_i & -\sin t_i & x_i \\ \sin t_i & \cos t_i & y_i \\ 0 & 0 & 1 \end{bmatrix}$$

2) substituting this conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system and the position information (lx_i, ly_i) of the light reflecting object in the robot coordinate system of the laser frame into formula 1 above, which yields the following equation:

$$\begin{bmatrix} lwx_i \\ lwy_i \\ 1 \end{bmatrix} = \begin{bmatrix} \cos t_i & -\sin t_i & x_i \\ \sin t_i & \cos t_i & y_i \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} lx_i \\ ly_i \\ 1 \end{bmatrix}$$

In the above equation, (lwx_i, lwy_i) is the second position information of the light reflecting object in the world coordinate system.
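For readers who want to see the expanded symbolic form of the second position information, the following SymPy sketch reproduces the product above in terms of the assumed pose variables; it is only an illustration of the algebra, not code from the patent.

```python
import sympy as sp

# Assumed pose variables of the i-th laser frame and the observation in its robot frame.
xi, yi, ti, lxi, lyi = sp.symbols('x_i y_i t_i lx_i ly_i')

# Conversion matrix from the robot frame of the i-th laser frame to the world frame,
# written with the assumed pose variables (step 9042c).
T_i = sp.Matrix([[sp.cos(ti), -sp.sin(ti), xi],
                 [sp.sin(ti),  sp.cos(ti), yi],
                 [0,           0,          1]])

# Second position information of the reflector in the world frame (step 9042d).
lw = T_i * sp.Matrix([lxi, lyi, 1])
print(lw[0])  # lx_i*cos(t_i) - ly_i*sin(t_i) + x_i
print(lw[1])  # lx_i*sin(t_i) + ly_i*cos(t_i) + y_i
```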
The above steps 9042c to 9042d are specific refinements of "describing the second position information of the light reflecting object in the world coordinate system using the position information of the light reflecting object in the robot coordinate system of the laser frame and the assumed pose variable of the laser frame" in the step 8042 shown in fig. 8.
Step 9043, calculating first position and posture change information between two adjacent laser frames according to the mileage information corresponding to the two laser frames in the laser frame sequence, and describing second position and posture change information between the two laser frames by using the assumed pose variables of the two laser frames;
Step 9044, determining an assumed pose variable value of each laser frame when the difference value between the first position information and the second position information of the reflective object observed by each laser frame in the sequence of laser frames is minimized and the difference value between the first pose change information and the second pose change information between two adjacent laser frames is minimized, and determining the assumed pose variable value of the current laser frame as the robot pose corresponding to the current laser frame.
The above steps 9042a to 9044 are specific refinements of "determining the robot pose corresponding to the current laser frame according to the position information of the light-reflecting object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame" in step 104 shown in fig. 1.
The above steps 9041 to 9044 are specific refinements of step 104 shown in fig. 1.
As can be seen from the method shown in fig. 9, in this embodiment, for a reflective object observed in each laser frame of the laser frame sequence, the first position information of the reflective object is estimated by using the robot pose corresponding to the previous laser frame in the sequence, and the second position information of the reflective object is described by using the assumed pose variable of the laser frame, so that the first position information and the second position information of the reflective objects observed in different laser frames of the sequence can subsequently be brought close to each other, thereby solving for the assumed pose variable value of the current laser frame and realizing the positioning of the current laser frame. Compared with the prior-art scheme in which positioning is possible only when at least 3 reflective objects are observed in a laser frame, the dependence on reflective objects is obviously reduced, so the method has higher universality and robustness.
Referring to fig. 10, fig. 10 is a flowchart of robot positioning method nine according to an embodiment of the present invention, in which a plurality of light-reflecting objects are deployed in the environment where the robot is located; as shown in fig. 10, the method comprises the following steps:
step 1001, respectively utilizing a laser sensor and an inertial sensor of the robot to acquire a laser frame and mileage information corresponding to the laser frame in the moving process of the robot.
Step 1002, determining whether the current laser frame observes a reflective object; if so, executing the following steps 1003 to 10044.
Step 1003, determining a reflective object observed by the current laser frame and position information of the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame.
Step 10041, determining a laser frame sequence consisting of the N-1 laser frames that observed reflective objects before the current laser frame and the current laser frame;
Step 10042, determining first position information of the reflective object in a world coordinate system according to position information of the reflective object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames, and describing second position information of the reflective object in the world coordinate system by using the position information of the reflective object in the robot coordinate system of the laser frame and the assumed pose variable of the laser frame;
Step 10043a, calculating first position and posture change information between two adjacent laser frames according to the respective corresponding mileage information of the two laser frames in the laser frame sequence;
In practical application, when the mileage information corresponding to two laser frames is known, the pose change information between the two laser frames can be determined from that mileage information; specifically, the pose change of the robot can be determined from the displacements of the different wheels of the robot between the two laser frames. This can be implemented with existing methods and is not limited in this embodiment.
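A minimal sketch of one way to obtain the first pose-change information follows, assuming the mileage information of each laser frame is an accumulated odometry pose (x, y, t); this data layout and the function names are assumptions for illustration, not part of the patent.

```python
import numpy as np

def pose_to_matrix(x, y, t):
    """Homogeneous transform of a planar pose (x, y, t)."""
    return np.array([[np.cos(t), -np.sin(t), x],
                     [np.sin(t),  np.cos(t), y],
                     [0.0,        0.0,       1.0]])

def first_pose_change(odom_prev, odom_curr):
    """First pose-change information between two adjacent laser frames, computed
    as the relative pose of the current odometry reading with respect to the
    previous one: dT = T_prev^{-1} @ T_curr."""
    T_prev = pose_to_matrix(*odom_prev)
    T_curr = pose_to_matrix(*odom_curr)
    dT = np.linalg.inv(T_prev) @ T_curr
    dx, dy = dT[0, 2], dT[1, 2]
    dt = np.arctan2(dT[1, 0], dT[0, 0])
    return dx, dy, dt

print(first_pose_change((1.0, 1.0, 0.0), (1.5, 1.0, np.pi / 4)))  # ~ (0.5, 0.0, 0.785)
```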
Step 10043b, describing a transformation matrix from a robot coordinate system corresponding to the laser frame to a world coordinate system by using an assumed pose variable of each of two adjacent laser frames in the sequence of laser frames;
Step 9042d shown in fig. 9 has already explained how to describe the conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system according to the assumed pose variable of the laser frame, so this is not repeated here.
Step 10043c, using a conversion relationship between a pose transformation matrix between two adjacent laser frames in the sequence of laser frames and a transformation matrix from a robot coordinate system to a world coordinate system corresponding to each of the two laser frames to represent second pose change information between the two laser frames.
In the embodiment of the invention, the pose change information (Δx_p, Δy_p, Δt_p) between two laser frames corresponds to a pose transformation matrix

$$\Delta T = \begin{bmatrix} \cos \Delta t_p & -\sin \Delta t_p & \Delta x_p \\ \sin \Delta t_p & \cos \Delta t_p & \Delta y_p \\ 0 & 0 & 1 \end{bmatrix},$$

which is subsequently referred to as the pose transformation matrix between the two laser frames. Assuming that the two laser frames are the (i-1)-th laser frame and the i-th laser frame respectively, in practical application the pose transformation matrix between the (i-1)-th laser frame and the i-th laser frame and the conversion matrices from the robot coordinate system to the world coordinate system corresponding to the two laser frames have the following conversion relationship (conversion formula):

$$\Delta T = T_{i-1}^{-1}\, T_i$$

wherein (x_{i-1}, y_{i-1}, t_{i-1}) and (x_i, y_i, t_i) are respectively the assumed pose variables of the (i-1)-th laser frame and the i-th laser frame;

$$T_{i-1} = \begin{bmatrix} \cos t_{i-1} & -\sin t_{i-1} & x_{i-1} \\ \sin t_{i-1} & \cos t_{i-1} & y_{i-1} \\ 0 & 0 & 1 \end{bmatrix}$$

is the conversion matrix from the robot coordinate system corresponding to the (i-1)-th laser frame to the world coordinate system; and

$$T_i = \begin{bmatrix} \cos t_i & -\sin t_i & x_i \\ \sin t_i & \cos t_i & y_i \\ 0 & 0 & 1 \end{bmatrix}$$

is the conversion matrix from the robot coordinate system corresponding to the i-th laser frame to the world coordinate system.

In the embodiment of the invention, by substituting the pose transformation matrix between the two laser frames and the conversion matrices from the robot coordinate system to the world coordinate system corresponding to the two laser frames into the above conversion relationship, the formulas for Δx_p, Δy_p and Δt_p can be obtained.
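The following SymPy sketch carries out that substitution symbolically and prints the resulting formulas for Δx_p and Δy_p (the rotation block gives Δt_p directly); the symbol names are illustrative choices, not notation from the patent.

```python
import sympy as sp

x_prev, y_prev, t_prev, x_cur, y_cur, t_cur = sp.symbols(
    'x_prev y_prev t_prev x_cur y_cur t_cur')

def T(x, y, t):
    """Conversion matrix from a robot coordinate system with pose (x, y, t) to the world frame."""
    return sp.Matrix([[sp.cos(t), -sp.sin(t), x],
                      [sp.sin(t),  sp.cos(t), y],
                      [0,           0,         1]])

# Conversion relation: the pose transformation matrix between the two frames
# equals T_{i-1}^{-1} * T_i when written with the assumed pose variables.
dT = sp.simplify(T(x_prev, y_prev, t_prev).inv() * T(x_cur, y_cur, t_cur))

print(dT[0, 2])  # Δx_p = (x_cur - x_prev)*cos(t_prev) + (y_cur - y_prev)*sin(t_prev)
print(dT[1, 2])  # Δy_p = -(x_cur - x_prev)*sin(t_prev) + (y_cur - y_prev)*cos(t_prev)
# The rotation block of dT equals R(t_cur - t_prev), hence Δt_p = t_cur - t_prev.
```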
The above steps 10043b to 10043c are a specific refinement of "describing second posture change information between the two laser frames using assumed posture variables of the two laser frames" in step 8043 shown in fig. 8.
The above steps 10043a to 10043c are a detailed refinement of step 8043 shown in fig. 8.
Step 10044, determining an assumed pose variable value of each laser frame when the difference between the first position information and the second position information of the reflective object observed by each laser frame in the sequence of laser frames is minimized (approaches 0) and the difference between the first pose change information and the second pose change information between two adjacent laser frames is minimized (approaches 0), and determining the assumed pose variable value of the current laser frame as the robot pose corresponding to the current laser frame.
The above steps 10042 to 10044 are specific refinements of "determining the robot pose corresponding to the current laser frame according to the position information of the light reflecting object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame" in step 104 shown in fig. 1.
The above steps 10041 to 10044 are a detailed refinement of step 104 shown in fig. 1.
As can be seen from the method shown in fig. 10, in this embodiment, for two adjacent laser frames in the laser frame sequence, the mileage information corresponding to the two laser frames is used to calculate the first pose change information between them, and the assumed pose variables corresponding to the two laser frames are used to describe the second pose change information between them, so that the first pose change information and the second pose change information of two adjacent laser frames in the sequence can subsequently be brought close to each other, thereby solving for the assumed pose variable value of the current laser frame and realizing the positioning of the current laser frame. Compared with the prior-art scheme in which positioning is possible only when at least 3 reflective objects are observed in a laser frame, the dependence on reflective objects is obviously reduced, so the method has higher universality and robustness.
Referring to fig. 11, fig. 11 is a flowchart of robot positioning method ten according to an embodiment of the present invention, in which a plurality of light-reflecting objects are deployed in the environment where the robot is located; as shown in fig. 11, the method comprises the following steps:
step 1101, respectively acquiring a laser frame and mileage information corresponding to the laser frame in the moving process of the robot by using a laser sensor and an inertial sensor of the robot.
Step 1102, determining whether the current laser frame observes a reflective object; if so, performing the following steps 1103 to 11044d.
Step 1103, determining a reflective object observed by the current laser frame and position information of the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame.
Step 11041, determining a laser frame sequence which is composed of N-1 laser frames observing the reflective object before the current laser frame and the current laser frame;
Step 11042, determining first position information of the reflective object in a world coordinate system according to position information of the reflective object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames, and describing second position information of the reflective object in the world coordinate system by using the position information of the reflective object in the robot coordinate system of the laser frame and the assumed pose variable of the laser frame;
Step 11043, calculating first position and posture change information between two adjacent laser frames according to the mileage information corresponding to each of the two laser frames in the laser frame sequence, and describing second position and posture change information between the two laser frames by using the assumed pose variables of the two laser frames;
Step 11044a, recording the difference value of the first position posture change information and the second position posture change information between two adjacent laser frames in the laser frame sequence as a first residual e1, and multiplying e1^T, a preset first weight matrix Ω1 and e1 to obtain a first matrix corresponding to the two laser frames;
In this step, e1^T is the transpose of e1, and the first matrix can be expressed as e1^T × Ω1 × e1, which is used for representing the deviation between the predicted value and the actually measured value of the pose change between the two adjacent laser frames.
Step 11044b, recording the difference value of the first position information and the second position information of the reflective object in each laser frame in the laser frame sequence as a second residual e2, and multiplying e2^T, a preset second weight matrix Ω2 and e2 to obtain a second matrix corresponding to the laser frame;
In this step, e2^T is the transpose of e2, and the second matrix can be expressed as e2^T × Ω2 × e2, which is used for representing the deviation between the predicted value and the actually measured value of the position of the reflective object observed by the laser frame.
Step 11044c, accumulating the first matrixes corresponding to two adjacent laser frames in the sequence of laser frames to obtain a first accumulation matrix, accumulating the second matrixes corresponding to each laser frame in the sequence of laser frames to obtain a second accumulation matrix, and constructing a nonlinear equation about the assumed pose variables of each laser frame in the sequence of laser frames by using the first accumulation matrix and the second accumulation matrix;
In this step, the first accumulation matrix can be represented as

$$\sum_{k=1}^{m} e1_k^{T}\,\Omega_1\,e1_k,$$

where m is the number of pairs of adjacent laser frames in the laser frame sequence; the second accumulation matrix can be expressed as

$$\sum_{j=1}^{n} e2_j^{T}\,\Omega_2\,e2_j,$$

where n is less than or equal to the sum of the numbers of reflective objects observed by the laser frames in the laser frame sequence and is greater than or equal to the number of laser frames in the sequence (each laser frame observes at least one reflective object).

In the embodiment of the present invention, the nonlinear equation about the assumed pose variables of the laser frames in the laser frame sequence, constructed by using the first accumulation matrix and the second accumulation matrix, may specifically be as follows:

$$f(\mathrm{state}) = \sum_{k=1}^{m} e1_k^{T}\,\Omega_1\,e1_k + \sum_{j=1}^{n} e2_j^{T}\,\Omega_2\,e2_j,$$

where state is the set of assumed pose variables of the laser frames in the laser frame sequence, i.e. state = {(x_1, y_1, t_1), (x_2, y_2, t_2), …, (x_r, y_r, t_r)}, with r being the number of laser frames in the sequence.
Step 11044d, solving for the assumed pose variable value of each laser frame in the laser frame sequence when the nonlinear equation is minimized (i.e., approaches 0).
In the embodiment of the invention, the nonlinear equation can be solved by using a nonlinear optimization algorithm (such as the Gauss-Newton method), so that the value of the assumed pose variable of each laser frame in the laser frame sequence can be obtained.
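A compact end-to-end sketch of this optimization is given below, assuming unit weight matrices (Ω1 = Ω2 = I) and using SciPy's Levenberg-Marquardt solver as a stand-in for the Gauss-Newton method mentioned above; the data layout, names and sample values are all assumptions made for illustration, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def relative_pose(p0, p1):
    """Relative pose of p1 with respect to p0, each given as (x, y, t)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    c, s = np.cos(p0[2]), np.sin(p0[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(p1[2] - p0[2])])

def residuals(state, obs, odom_deltas):
    """state: flattened assumed poses (x_i, y_i, t_i) of the frames in the sequence.
    obs: list of (frame index, observation in robot frame, matched map position) -> e2 terms.
    odom_deltas: first pose-change information between adjacent frames -> e1 terms."""
    poses = state.reshape(-1, 3)
    res = []
    for i, (lx, ly), (lwx, lwy) in obs:                   # reflector residuals (e2)
        x, y, t = poses[i]
        pred = np.array([np.cos(t) * lx - np.sin(t) * ly + x,
                         np.sin(t) * lx + np.cos(t) * ly + y])
        res.extend(pred - np.array([lwx, lwy]))
    for i, delta in enumerate(odom_deltas):               # pose-change residuals (e1)
        d = relative_pose(poses[i], poses[i + 1]) - np.asarray(delta)
        d[2] = wrap(d[2])
        res.extend(d)
    return np.asarray(res)

# Hypothetical two-frame sequence observing two reflectors at (3, 0) and (0, 2).
obs = [(0, (2.0, 0.0), (3.0, 0.0)), (0, (-1.0, 2.0), (0.0, 2.0)),
       (1, (1.0, 0.0), (3.0, 0.0)), (1, (-2.0, 2.0), (0.0, 2.0))]
odom_deltas = [(1.0, 0.0, 0.0)]                           # robot moved 1 m forward
init = np.array([0.9, 0.1, 0.05, 1.9, -0.1, 0.0])         # initial guesses, e.g. from odometry
sol = least_squares(residuals, init, args=(obs, odom_deltas), method='lm')
print(sol.x.reshape(-1, 3))  # last row: assumed pose of the current frame, i.e. the sought pose
```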
The above steps 11044a to 11044d are a specific refinement of step 8044 shown in fig. 8.
The above steps 11042 to 11044d are specific refinements of "determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame" in step 104 shown in fig. 1.
The above steps 11041 to 11044d are specific refinements of step 104 shown in fig. 1.
As can be seen from the method shown in FIG. 11, in this embodiment the values of e1^T × Ω1 × e1 and e2^T × Ω2 × e2 are both not less than 0, so when f(state) is minimized (approaches 0) it is ensured that the difference between the first position information and the second position information of the reflective object observed by each laser frame in the laser frame sequence is minimized (approaches 0) and that the difference between the first pose change information and the second pose change information between two adjacent laser frames is minimized (approaches 0). In the invention, because the pose change between adjacent laser frames participates in the process of determining the values of the assumed pose variables of the laser frames in the laser frame sequence, the dependence of the robot positioning process on reflective objects is minimized, giving the method higher universality and robustness.
The robot positioning method according to the embodiment of the present invention is specifically described above, and the present invention further provides a robot positioning device, which is described in detail below with reference to fig. 12.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a robot positioning apparatus in an embodiment of the present invention, in which a plurality of light-reflecting objects are deployed in an environment where a robot is located, the apparatus 1200 includes a processor 1201 and a non-transitory computer-readable storage medium 1202 connected to the processor 1201 through a bus:
the non-transitory computer readable storage medium 1202 for storing a computer program executable on the processor 1201, the processor 1201 implementing the following steps when executing the program:
respectively acquiring a laser frame and mileage information corresponding to the laser frame by using a laser sensor and an inertial sensor of the robot;
judging whether a light-reflecting object is observed in the current laser frame;
if the reflecting object is observed in the current laser frame, executing positioning operation; the positioning operation comprises:
determining a light-reflecting object observed by a current laser frame and position information of the light-reflecting object observed by the current laser frame in a robot coordinate system of the current laser frame;
determining a laser frame sequence consisting of N-1 laser frames observing a reflective object before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame.
In the arrangement shown in figure 12 of the drawings,
the processor 1201 is further configured to: and if the reflecting object is not observed in the current laser frame, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame.
In the arrangement shown in figure 12 of the drawings,
the processor 1201 determines whether a reflective object is observed in the current laser frame, and if the reflective object is observed in the current laser frame, performs a positioning operation, including:
judging whether the current laser frame meets the condition of being used as a key frame;
if the current laser frame does not meet the preset requirement, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame;
if yes, executing and judging whether the current key frame observes a reflective object, setting the current laser frame as the key frame when the current key frame observes the reflective object, and executing positioning operation;
the positioning operation comprises:
determining a light-reflecting object observed by a current key frame and position information of the light-reflecting object observed by the current key frame in a robot coordinate system of the current key frame;
determining a key frame sequence consisting of the first N-1 key frames of the current key frame and the current key frame, and determining the robot pose corresponding to the current key frame according to the position information of a reflective object observed by each key frame in the robot coordinate system of the key frame and the mileage information corresponding to the key frame.
In the arrangement shown in figure 12 of the drawings,
the processor 1201, which determines whether the current laser frame meets a condition as a key frame, includes:
determining the number of interval frames between a current laser frame and a latest key frame before the current laser frame; judging whether the interval frame number is greater than a preset interval frame number threshold value, if so, determining that the current laser frame meets the condition of being a key frame, otherwise, determining that the current laser frame does not meet the condition of being the key frame; or, alternatively,
calculating the mileage difference value of the mileage information corresponding to the current laser frame and the mileage information corresponding to the latest key frame before the current laser frame; judging whether the mileage difference value is larger than a preset mileage threshold value, if so, determining that the current laser frame meets the condition of being used as a key frame, otherwise, determining that the current laser frame does not meet the condition of being used as the key frame; or, alternatively,
determining the number of interval frames between a current laser frame and a latest key frame before the current laser frame, calculating the mileage difference value between the mileage information corresponding to the current laser frame and the mileage information corresponding to the latest key frame before the current laser frame, if the number of interval frames is greater than a preset interval frame number threshold value and/or the mileage difference value is greater than a preset mileage threshold value, determining that the current laser frame meets the condition as the key frame, and otherwise, determining that the current laser frame does not meet the condition as the key frame.
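A minimal sketch of the combined key-frame test described above follows; the threshold values and the assumption that the mileage information is an accumulated travelled distance are illustrative placeholders, not values from the patent.

```python
def is_key_frame(frame_idx, mileage, last_key_idx, last_key_mileage,
                 frame_gap_threshold=10, mileage_threshold=0.5):
    """Key-frame test combining both conditions: the number of frames since the
    latest key frame and/or the mileage difference since the latest key frame."""
    frame_gap = frame_idx - last_key_idx
    mileage_diff = abs(mileage - last_key_mileage)  # accumulated travelled distance in metres
    return frame_gap > frame_gap_threshold or mileage_diff > mileage_threshold

print(is_key_frame(35, 12.40, 20, 12.05))  # True: 15 frames since the last key frame > 10
```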
In the arrangement shown in figure 12 of the drawings,
the processor 1201, when determining whether the reflective object is observed in the current laser frame, is configured to:
judging whether the laser points belonging to the reflective object are contained in the current laser frame or not according to the reflectivity of each laser point in the current laser frame, if so, determining that the reflective object is observed in the current laser frame, otherwise, determining that the reflective object is not observed in the current laser frame.
In the arrangement shown in figure 12 of the drawings,
the processor 1201 determines the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame, and the robot pose and the mileage information corresponding to a laser frame before the current laser frame, including:
determining mileage change information between the current laser frame and a laser frame before the current laser frame according to the mileage information corresponding to the current laser frame and the mileage information corresponding to a laser frame before the current laser frame;
and determining the robot pose corresponding to the current laser frame according to the determined mileage change information and the robot pose corresponding to the laser frame before the current laser frame.
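A minimal dead-reckoning sketch for this case is given below, assuming both the robot pose and the mileage information are (x, y, t) triples; this layout and the function name are assumptions made for illustration.

```python
import numpy as np

def dead_reckon(prev_pose, prev_odom, curr_odom):
    """When no reflector is observed, propagate the previous laser frame's robot pose
    by the mileage change between the previous laser frame and the current one."""
    # mileage change expressed in the previous robot frame
    dx, dy = curr_odom[0] - prev_odom[0], curr_odom[1] - prev_odom[1]
    c, s = np.cos(prev_odom[2]), np.sin(prev_odom[2])
    local = (c * dx + s * dy, -s * dx + c * dy, curr_odom[2] - prev_odom[2])
    # compose the change onto the previous robot pose in the world frame
    c, s = np.cos(prev_pose[2]), np.sin(prev_pose[2])
    return (prev_pose[0] + c * local[0] - s * local[1],
            prev_pose[1] + s * local[0] + c * local[1],
            prev_pose[2] + local[2])

print(dead_reckon((1.0, 0.0, np.pi / 2), (0.0, 0.0, 0.0), (0.5, 0.0, 0.0)))
# ~ (1.0, 0.5, 1.5708): a 0.5 m forward odometry step applied along the robot's heading
```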
In the arrangement shown in figure 12 of the drawings,
the laser frame comprises laser point information, and each laser point information comprises the reflectivity of the laser point, the distance between a laser sensor and the laser point and the observation angle of the laser sensor to the laser point;
the processor 1201 determines the position information of the reflective object observed by the current laser frame and the reflective object observed by the current laser frame in the robot coordinate system of the current laser frame, including:
determining the position information of the laser point in the robot coordinate system of the current laser frame according to the distance between the laser sensor and each laser point in the current laser frame and the observation angle of the laser point;
determining laser points belonging to a reflective object according to the reflectivity of each laser point in the current laser frame, and dividing the laser points belonging to the reflective object, of which the distance between the laser points is smaller than a first preset distance threshold value, into the same group according to the position information of the laser points belonging to the reflective object in the robot coordinate system of the current laser frame;
and determining the area covered by each group of laser points as a reflective object observed by the current laser frame, and determining the position information of the reflective object in the robot coordinate system of the current laser frame according to the position information of the group of laser points in the robot coordinate system of the current laser frame.
In the arrangement shown in figure 12 of the drawings,
the processor 1201, before determining the position information of the reflective object in the robot coordinate system of the current laser frame according to the position information of the group of laser points in the robot coordinate system of the current laser frame, is further configured to:
and for each laser point in the group of laser points, if the number of laser points in the group of laser points, which are less than a second preset distance threshold value from the laser point, is less than a preset number of laser points, the laser point is removed from the group of laser points.
In the arrangement shown in figure 12 of the drawings,
the processor 1201 determines the position information of the reflective object in the robot coordinate system of the current laser frame according to the position information of the group of laser points in the robot coordinate system of the current laser frame, and includes:
and calculating the mean value of the abscissa and the mean value of the ordinate of the group of laser points in the robot coordinate system of the current laser frame, and taking the coordinate position determined by the mean value of the abscissa and the mean value of the ordinate as the position information of the light-reflecting object in the robot coordinate system of the current laser frame.
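A compact sketch covering the grouping, isolated-point filtering and centroid computation described above follows, under the assumption that the laser points have already been converted to robot-frame coordinates; all threshold values are illustrative placeholders for the preset parameters mentioned in the text.

```python
import numpy as np

def extract_reflectors(points, reflectivity, refl_threshold=200.0,
                       group_dist=0.2, neighbor_dist=0.1, min_neighbors=2):
    """Group high-reflectivity laser points of the current frame into reflectors
    and return one centroid per group (robot-frame coordinates)."""
    pts = np.asarray(points, dtype=float)
    refl = pts[np.asarray(reflectivity) > refl_threshold]   # points belonging to reflectors
    groups = []
    for p in refl:                                          # group points closer than group_dist
        for g in groups:
            if np.min(np.linalg.norm(np.asarray(g) - p, axis=1)) < group_dist:
                g.append(p)
                break
        else:
            groups.append([p])
    centroids = []
    for g in groups:
        g = np.asarray(g)
        # discard isolated points: fewer than min_neighbors other points within neighbor_dist
        keep = [(np.linalg.norm(g - q, axis=1) < neighbor_dist).sum() - 1 >= min_neighbors
                for q in g]
        g = g[np.asarray(keep)]
        if len(g):
            centroids.append(g.mean(axis=0))                # mean abscissa and mean ordinate
    return centroids

pts = [(2.00, 1.00), (2.03, 1.02), (1.98, 0.99), (5.0, 5.0)]
refl = [250, 240, 255, 30]
print(extract_reflectors(pts, refl))  # one centroid near (2.0, 1.0)
```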
In the arrangement shown in figure 12 of the drawings,
the processor 1201 determines the robot pose corresponding to the current laser frame according to the position information of the light reflecting object observed by each laser frame in the laser frame sequence in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame, and includes:
determining first position information of a reflective object in a world coordinate system according to position information of the reflective object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames, and describing second position information of the reflective object in the world coordinate system by using the position information of the reflective object in the robot coordinate system of the laser frame and an assumed pose variable of the laser frame;
calculating first position and posture change information between two adjacent laser frames according to the mileage information corresponding to the two laser frames in the laser frame sequence, and describing second position and posture change information between the two laser frames by using the assumed position and posture variables of the two laser frames;
and determining an assumed pose variable value of each laser frame when the difference value between the first position information and the second position information of the reflective object observed by each laser frame in the laser frame sequence is minimized and the difference value between the first pose change information and the second pose change information between two adjacent laser frames is minimized, and determining the assumed pose variable value of the current laser frame as the robot pose corresponding to the current laser frame.
In the arrangement shown in figure 12 of the drawings,
the processor 1201 determines, according to position information of a reflective object in a robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames, first position information of the reflective object in a world coordinate system, including:
determining the estimated position of the reflective object according to the position information of the reflective object in the robot coordinate system of the laser frame and the robot pose corresponding to the previous laser frame of the laser frame in the laser frame sequence;
and calculating the distance between the light reflecting object and each light reflecting object in the prior map of the environment where the robot is located according to the estimated position of the light reflecting object, and determining the position of the light reflecting object which is closest to the light reflecting object in the prior map of the environment where the robot is located as first position information of the light reflecting object in a world coordinate system.
In the arrangement shown in figure 12 of the drawings,
the processor 1201 determines the estimated position of the reflective object according to the position information of the reflective object in the robot coordinate system of the laser frame and the robot pose corresponding to the previous laser frame of the laser frame in the sequence of laser frames, including:
determining a conversion matrix from a robot coordinate system corresponding to a previous laser frame to a world coordinate system according to a robot pose corresponding to the previous laser frame of the laser frames in the laser frame sequence, and taking the conversion matrix as the conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system;
and calculating the estimated position of the reflective object according to the position information of the reflective object in the robot coordinate system of the laser frame and the conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system.
In the arrangement shown in figure 12 of the drawings,
the processor 1201, which describes second position information of the reflective object in the world coordinate system using the position information of the reflective object in the laser frame in the robot coordinate system of the laser frame and the assumed pose variable of the laser frame, includes:
describing a transformation matrix from a robot coordinate system corresponding to the laser frame to a world coordinate system by using the assumed pose variable of the laser frame;
and expressing second position information of the reflective object in the world coordinate system by using the product of the position information of the reflective object in the robot coordinate system of the laser frame and a conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system.
In the arrangement shown in figure 12 of the drawings,
the processor 1201, which describes second pose change information between the two laser frames using the robot poses of the two laser frames, includes:
describing a transformation matrix from a robot coordinate system corresponding to the laser frame to a world coordinate system by using the assumed pose variable of each of the two laser frames;
and representing second position posture change information between the two laser frames by using a conversion relation between a position posture transformation matrix between the two laser frames and a conversion matrix from a robot coordinate system to a world coordinate system corresponding to the two laser frames.
In the arrangement shown in figure 12 of the drawings,
the processor 1201 determines the assumed pose variable value of each laser frame when the difference between the first position information and the second position information of the reflective object observed by each laser frame in the sequence of laser frames is minimized and the difference between the first pose change information and the second pose change information between two adjacent laser frames is minimized, including:
recording the difference value of the first position posture change information and the second position posture change information between two adjacent laser frames in the laser frame sequence as a first residual e1, and multiplying e1^T, a preset first weight matrix Ω1 and e1 to obtain a first matrix corresponding to the two laser frames;
recording the difference value of the first position information and the second position information of the reflective object in each laser frame in the sequence of the laser frames as a second residual e2, and multiplying e2^T, a preset second weight matrix Ω2 and e2 to obtain a second matrix corresponding to the laser frame;
accumulating first matrixes corresponding to two adjacent laser frames in the sequence of laser frames to obtain a first accumulation matrix, accumulating second matrixes corresponding to each laser frame in the sequence of laser frames to obtain a second accumulation matrix, and constructing a nonlinear equation about the assumed pose variables of each laser frame in the sequence of laser frames by using the first accumulation matrix and the second accumulation matrix;
and solving the assumed pose variable value of each laser frame in the laser frame sequence when the nonlinear equation is minimized.
Embodiments of the present invention also provide a non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps in the robot positioning method as shown in fig. 1-6, 8-11.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (17)

1. A method for positioning a robot, wherein the robot is deployed in an environment having a plurality of light-reflecting objects, the method comprising:
respectively acquiring a laser frame and mileage information corresponding to the laser frame by using a laser sensor and an inertial sensor of the robot;
judging whether a light-reflecting object is observed in the current laser frame;
if the reflecting object is observed in the current laser frame, executing positioning operation; the positioning operation comprises:
determining a light-reflecting object observed by a current laser frame and position information of the light-reflecting object observed by the current laser frame in a robot coordinate system of the current laser frame;
determining a laser frame sequence consisting of N-1 laser frames observing a reflective object before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame.
2. The method of claim 1, further comprising:
and if the reflecting object is not observed in the current laser frame, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame.
3. The method of claim 1, wherein the determining whether the current laser frame observes a reflective object and if the current laser frame observes a reflective object, performing a positioning operation comprises:
judging whether the current laser frame meets the condition of being used as a key frame;
if the current laser frame does not meet the preset requirement, determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame and the robot pose and the mileage information corresponding to a laser frame before the current laser frame;
if yes, executing and judging whether the current key frame observes a reflective object, setting the current laser frame as the key frame when the current key frame observes the reflective object, and executing positioning operation;
the positioning operation comprises:
determining a light-reflecting object observed by a current key frame and position information of the light-reflecting object observed by the current key frame in a robot coordinate system of the current key frame;
determining a key frame sequence consisting of the first N-1 key frames of the current key frame and the current key frame, and determining the robot pose corresponding to the current key frame according to the position information of a reflective object observed by each key frame in the robot coordinate system of the key frame and the mileage information corresponding to the key frame.
4. The method of claim 3, wherein determining whether the current laser frame satisfies the condition as a key frame comprises:
determining the number of interval frames between a current laser frame and a latest key frame before the current laser frame; judging whether the interval frame number is greater than a preset interval frame number threshold value, if so, determining that the current laser frame meets the condition of being a key frame, otherwise, determining that the current laser frame does not meet the condition of being the key frame; or, alternatively,
calculating the mileage difference value of the mileage information corresponding to the current laser frame and the mileage information corresponding to the latest key frame before the current laser frame; judging whether the mileage difference value is larger than a preset mileage threshold value, if so, determining that the current laser frame meets the condition of being used as a key frame, otherwise, determining that the current laser frame does not meet the condition of being used as the key frame; or, alternatively,
determining the number of interval frames between a current laser frame and a latest key frame before the current laser frame, calculating the mileage difference value between the mileage information corresponding to the current laser frame and the mileage information corresponding to the latest key frame before the current laser frame, if the number of interval frames is greater than a preset interval frame number threshold value and/or the mileage difference value is greater than a preset mileage threshold value, determining that the current laser frame meets the condition as the key frame, and otherwise, determining that the current laser frame does not meet the condition as the key frame.
5. The method of claim 1,
the method for judging whether the reflecting object is observed in the current laser frame comprises the following steps:
judging whether the laser points belonging to the reflective object are contained in the current laser frame or not according to the reflectivity of each laser point in the current laser frame, if so, determining that the reflective object is observed in the current laser frame, otherwise, determining that the reflective object is not observed in the current laser frame.
6. The method of claim 2 or 3, wherein determining the robot pose corresponding to the current laser frame according to the mileage information corresponding to the current laser frame, and the robot pose and the mileage information corresponding to a laser frame before the current laser frame comprises:
determining mileage change information between the current laser frame and a laser frame before the current laser frame according to the mileage information corresponding to the current laser frame and the mileage information corresponding to a laser frame before the current laser frame;
and determining the robot pose corresponding to the current laser frame according to the determined mileage change information and the robot pose corresponding to the laser frame before the current laser frame.
7. The method of claim 1, wherein the laser frame comprises laser spot information, each laser spot information comprising a reflectivity of the laser spot, a distance of a laser sensor from the laser spot, and an angle of view of the laser sensor to the laser spot;
determining the position information of a reflecting object observed by the current laser frame and the position information of the reflecting object observed by the current laser frame in the robot coordinate system of the current laser frame, wherein the position information comprises the following steps:
determining the position information of the laser point in the robot coordinate system of the current laser frame according to the distance between the laser sensor and each laser point in the current laser frame and the observation angle of the laser point;
determining laser points belonging to a reflective object according to the reflectivity of each laser point in the current laser frame, and dividing the laser points belonging to the reflective object, of which the distance between the laser points is smaller than a first preset distance threshold value, into the same group according to the position information of the laser points belonging to the reflective object in the robot coordinate system of the current laser frame;
and determining the area covered by each group of laser points as a reflective object observed by the current laser frame, and determining the position information of the reflective object in the robot coordinate system of the current laser frame according to the position information of the group of laser points in the robot coordinate system of the current laser frame.
8. The method of claim 7, wherein before determining the position information of the light reflecting object in the robot coordinate system of the current laser frame according to the position information of the set of laser points in the robot coordinate system of the current laser frame, further comprising:
and for each laser point in the group of laser points, if the number of laser points in the group of laser points, which are less than a second preset distance threshold value from the laser point, is less than a preset number of laser points, the laser point is removed from the group of laser points.
9. The method of claim 7, wherein determining the position information of the light-reflecting object in the robot coordinate system of the current laser frame according to the position information of the set of laser points in the robot coordinate system of the current laser frame comprises:
and calculating the mean value of the abscissa and the mean value of the ordinate of the group of laser points in the robot coordinate system of the current laser frame, and taking the coordinate position determined by the mean value of the abscissa and the mean value of the ordinate as the position information of the light-reflecting object in the robot coordinate system of the current laser frame.
10. The method of claim 1, wherein determining the robot pose corresponding to the current laser frame according to the position information of the light-reflecting object observed by each laser frame in the laser frame sequence in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame comprises:
determining first position information of a reflective object in a world coordinate system according to position information of the reflective object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames, and describing second position information of the reflective object in the world coordinate system by using the position information of the reflective object in the robot coordinate system of the laser frame and an assumed pose variable of the laser frame;
calculating first position and posture change information between two adjacent laser frames according to the mileage information corresponding to the two laser frames in the laser frame sequence, and describing second position and posture change information between the two laser frames by using the assumed position and posture variables of the two laser frames;
and determining an assumed pose variable value of each laser frame when the difference value between the first position information and the second position information of the reflective object observed by each laser frame in the laser frame sequence is minimized and the difference value between the first pose change information and the second pose change information between two adjacent laser frames is minimized, and determining the assumed pose variable value of the current laser frame as the robot pose corresponding to the current laser frame.
11. The method of claim 10, wherein determining the first position information of the light-reflecting object in the world coordinate system according to the position information of the light-reflecting object in the robot coordinate system of the laser frame observed by each laser frame in the sequence of laser frames comprises:
determining the estimated position of the reflective object according to the position information of the reflective object in the robot coordinate system of the laser frame and the robot pose corresponding to the previous laser frame of the laser frame in the laser frame sequence;
and calculating the distance between the light reflecting object and each light reflecting object in the prior map of the environment where the robot is located according to the estimated position of the light reflecting object, and determining the position of the light reflecting object which is closest to the light reflecting object in the prior map of the environment where the robot is located as first position information of the light reflecting object in a world coordinate system.
12. The method of claim 11, wherein determining the estimated position of the retro-reflective object according to the position information of the retro-reflective object in the robot coordinate system of the laser frame and the robot pose corresponding to the previous laser frame of the laser frame in the sequence of laser frames comprises:
determining a conversion matrix from a robot coordinate system corresponding to a previous laser frame to a world coordinate system according to a robot pose corresponding to the previous laser frame of the laser frames in the laser frame sequence, and taking the conversion matrix as the conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system;
and calculating the estimated position of the reflective object according to the position information of the reflective object in the robot coordinate system of the laser frame and the conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system.
13. The method of claim 10, wherein describing second position information of the retro-reflective object in a world coordinate system using position information of the retro-reflective object in the laser frame in a robot coordinate system of the laser frame and an assumed pose variable of the laser frame comprises:
describing a transformation matrix from a robot coordinate system corresponding to the laser frame to a world coordinate system by using the assumed pose variable of the laser frame;
and expressing second position information of the reflective object in the world coordinate system by using the product of the position information of the reflective object in the robot coordinate system of the laser frame and a conversion matrix from the robot coordinate system corresponding to the laser frame to the world coordinate system.
14. The method of claim 10, wherein describing second pose change information between the two laser frames using the robot poses of the two laser frames comprises:
describing a transformation matrix from a robot coordinate system corresponding to the laser frame to a world coordinate system by using the assumed pose variable of each of the two laser frames;
and representing second position posture change information between the two laser frames by using a conversion relation between a position posture transformation matrix between the two laser frames and a conversion matrix from a robot coordinate system to a world coordinate system corresponding to the two laser frames.
15. The method of claim 10, wherein determining the value of the assumed pose variable of each laser frame when the difference between the first position information and the second position information of the reflective object observed by each laser frame in the sequence of laser frames is minimized and the difference between the first pose change information and the second pose change information between two adjacent frames is minimized comprises:
recording the difference value of the first position posture change information and the second position posture change information between two adjacent laser frames in the laser frame sequence as a first residual e1, and multiplying e1^T, a preset first weight matrix Ω1 and e1 to obtain a first matrix corresponding to the two laser frames;
recording the difference value of the first position information and the second position information of the reflective object in each laser frame in the sequence of the laser frames as a second residual e2, and multiplying e2^T, a preset second weight matrix Ω2 and e2 to obtain a second matrix corresponding to the laser frame;
accumulating first matrixes corresponding to two adjacent laser frames in the sequence of laser frames to obtain a first accumulation matrix, accumulating second matrixes corresponding to each laser frame in the sequence of laser frames to obtain a second accumulation matrix, and constructing a nonlinear equation about the assumed pose variables of each laser frame in the sequence of laser frames by using the first accumulation matrix and the second accumulation matrix;
and solving the assumed pose variable value of each laser frame in the laser frame sequence when the nonlinear equation is minimized.
16. A robot positioning apparatus in which a plurality of light-reflecting objects are deployed in an environment in which a robot is located, the apparatus comprising a processor, and a non-transitory computer-readable storage medium connected to the processor via a bus:
the non-transitory computer readable storage medium storing a computer program executable on the processor, the processor implementing the following steps when executing the program:
respectively acquiring a laser frame and mileage information corresponding to the laser frame by using a laser sensor and an inertial sensor of the robot;
judging whether a light-reflecting object is observed in the current laser frame;
if the reflecting object is observed in the current laser frame, executing positioning operation; the positioning operation comprises:
determining a light-reflecting object observed by a current laser frame and position information of the light-reflecting object observed by the current laser frame in a robot coordinate system of the current laser frame;
determining a laser frame sequence consisting of N-1 laser frames observing a reflective object before the current laser frame and the current laser frame, and determining the robot pose corresponding to the current laser frame according to the position information of the reflective object observed by each laser frame in the robot coordinate system of the laser frame and the mileage information corresponding to the laser frame.
17. A non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps in the robot positioning method of any of claims 1 to 15.
CN202010473324.2A 2020-05-29 2020-05-29 Robot positioning method and device and storage medium Pending CN113739785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010473324.2A CN113739785A (en) 2020-05-29 2020-05-29 Robot positioning method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010473324.2A CN113739785A (en) 2020-05-29 2020-05-29 Robot positioning method and device and storage medium

Publications (1)

Publication Number Publication Date
CN113739785A (en) 2021-12-03

Family

ID=78724450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010473324.2A Pending CN113739785A (en) 2020-05-29 2020-05-29 Robot positioning method and device and storage medium

Country Status (1)

Country Link
CN (1) CN113739785A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204883363U (en) * 2015-07-29 2015-12-16 广东省自动化研究所 AGV transport robot navigation system that laser guidance map found
CN105702151A (en) * 2016-03-31 2016-06-22 百度在线网络技术(北京)有限公司 Indoor map constructing method and device
WO2017166594A1 (en) * 2016-03-31 2017-10-05 百度在线网络技术(北京)有限公司 Indoor map construction method, device, and storage method
CN106093954A (en) * 2016-06-02 2016-11-09 邓湘 A kind of Quick Response Code laser ranging vehicle positioning method and equipment thereof
CN107092264A (en) * 2017-06-21 2017-08-25 北京理工大学 Towards the service robot autonomous navigation and automatic recharging method of bank's hall environment
WO2019140745A1 (en) * 2018-01-16 2019-07-25 广东省智能制造研究所 Robot positioning method and device
CN108253958A (en) * 2018-01-18 2018-07-06 亿嘉和科技股份有限公司 A kind of robot real-time location method under sparse environment
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 It is a kind of based on GPS closed loop detection Intelligent Mobile Robot build drawing method
CN110879400A (en) * 2019-11-27 2020-03-13 炬星科技(深圳)有限公司 Method, equipment and storage medium for fusion positioning of laser radar and IMU
CN110954095A (en) * 2019-12-11 2020-04-03 陕西瑞特测控技术有限公司 Combined navigation positioning system and control method thereof

Similar Documents

Publication Publication Date Title
US20210333108A1 (en) Path Planning Method And Device And Mobile Device
US20220198688A1 (en) Laser coarse registration method, device, mobile terminal and storage medium
WO2021208143A1 (en) Method and system for planning and sampling mobile robot path in human-machine integration environment
US20120294534A1 (en) Geometric feature extracting device, geometric feature extracting method, storage medium, three-dimensional measurement apparatus, and object recognition apparatus
CN112166458B (en) Target detection and tracking method, system, equipment and storage medium
CN110749895B (en) Laser radar point cloud data-based positioning method
CN112060079B (en) Robot and collision detection method and device thereof
US20230351686A1 (en) Method, device and system for cooperatively constructing point cloud map
CN112967388A (en) Training method and device for three-dimensional time sequence image neural network model
CN102981160B (en) Method and device for ascertaining aerial target track
KR101030317B1 (en) Apparatus for tracking obstacle using stereo vision and method thereof
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
CN111460866B (en) Lane line detection and driving control method and device and electronic equipment
CN112946612B (en) External parameter calibration method and device, electronic equipment and storage medium
CN111157012B (en) Robot navigation method and device, readable storage medium and robot
CN113252023A (en) Positioning method, device and equipment based on odometer
CN113739785A (en) Robot positioning method and device and storage medium
CN112308917A (en) Vision-based mobile robot positioning method
CN115962773A (en) Method, device and equipment for synchronous positioning and map construction of mobile robot
CN110232715B (en) Method, device and system for self calibration of multi-depth camera
CN112967399A (en) Three-dimensional time sequence image generation method and device, computer equipment and storage medium
CN116299300B (en) Determination method and device for drivable area, computer equipment and storage medium
CN117058358B (en) Scene boundary detection method and mobile platform
CN114619453B (en) Robot, map construction method, and computer-readable storage medium
CN111862322B (en) Arch axis extraction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.