CN111522022A - Dynamic target detection method of robot based on laser radar - Google Patents

Dynamic target detection method of robot based on laser radar

Info

Publication number
CN111522022A
CN111522022A (application CN202010310030.8A; granted as CN111522022B)
Authority
CN
China
Prior art keywords
radar
point cloud
pose
laser radar
robot
Prior art date
Legal status
Granted
Application number
CN202010310030.8A
Other languages
Chinese (zh)
Other versions
CN111522022B (en)
Inventor
Sun Wei (孙伟)
Du Chuan (杜川)
Lin Xu (林旭)
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN202010310030.8A
Publication of CN111522022A
Application granted
Publication of CN111522022B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to a dynamic target detection method, in particular to a dynamic target detection method for a robot based on laser radar (lidar). The method corrects the robot's positioning by a non-gradient optimization scheme, which reduces positioning error and raises detection precision to the centimeter level, and it uses the radar to scan the full plane, giving a wide detection range. A lidar-based dynamic multi-target detection method for robots is thus provided to reduce positioning errors and improve detection precision.

Description

Dynamic target detection method of robot based on laser radar
Technical Field
The invention relates to a dynamic target detection method, in particular to a dynamic target detection method for a robot based on a laser radar.
Background
In recent years, robotics has become a representative strategic objective in the high-technology field. The appearance and development of robot technology have not only fundamentally changed traditional industrial production, but also profoundly influenced human social life. Since an autonomous mobile robot can move effectively and safely only if it accurately knows its own position, the positions of obstacles in its workspace, and the motion of those obstacles, the problem of target detection and positioning is particularly important for autonomous mobile robots. A great deal of research has been done in this field at home and abroad. In current research, sensors such as GPS, cameras, and inertial navigation are mainly used to acquire and process environmental information to realize autonomous positioning and target detection. An accurate and robust description of the sensors' properties and of the environment therefore underpins the correctness of subsequent decisions.
To perform accurate target detection, the robot must first be positioned. In the prior art, the positioning process during robot target detection is subject to many uncertain factors, such as the robot's own uncertainty, the accumulation of odometry errors, sensor noise, and the complexity and unknown nature of the environment. These uncertainties make robot positioning more complex, introduce errors into the environment description, and cause erroneous information to accumulate in the system; this accumulation further aggravates the error of the environment perception system, so the precision of the detection result is low.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a lidar-based dynamic multi-target detection method for a robot that reduces positioning errors and improves detection precision.
To achieve this aim, the lidar-based method for a robot to detect dynamic targets comprises the following steps:
(1) a laser scanning radar mounted at the center of the robot collects and generates a point cloud set $[\beta_n, l_{1n}]$, and the robot positioning unit gives the robot positioning pose, where $\beta_n$ denotes the scanning angle of the lidar, the angular range being the radar's full scanning plane, and $l_{1n}$ denotes the distance of the detected target from the robot at the corresponding angle; taking $\beta_n$ as abscissa and $l_{1n}$ as ordinate, a physical-radar point-cloud data map is built;
(2) establishing a vector scene map according to the scene, representing the obstacle blocks in the scene map and the peripheral edges of the scene by closed line segments, and obtaining the coordinates of the start and end points of all the line segments;
(3) establishing a simulated lidar model, inputting the pose of the physical lidar, and solving by an intercept traversal method the point cloud set $[\beta_n, l_{2n}]$ formed by the intersections of all rays emitted by the simulated lidar with the map edges and obstacle blocks, obtaining a radar simulation graph;
(4) correcting the robot positioning pose by a non-gradient optimization scheme, and using the corrected pose $p_k$ to replace the previously input physical-radar pose, obtaining a new simulated-lidar point cloud set and point-cloud data map;
(5) subtracting, at each identical scanning angle, the points of the physical-lidar point cloud set and of the new simulated-lidar point cloud set, and comparing the difference with a set threshold ζ:
if the distance difference l between n consecutive points in the physical-lidar point-cloud data map and the corresponding points in the simulated-lidar point-cloud data map is larger than the set threshold ζ, i.e. l > ζ, the physical lidar has detected a target; the mean of $\beta_n$ over the adjacent n points is recorded as
$$\bar{\beta} = \frac{1}{n}\sum_{i=1}^{n}\beta_i,$$
the mean physical-lidar distance over the adjacent n points is recorded as
$$\bar{l}_1 = \frac{1}{n}\sum_{i=1}^{n}l_{1i},$$
and, taking the position of the physical lidar as the origin of a polar coordinate system, the polar coordinates of the target are obtained as
$$(\bar{\beta},\ \bar{l}_1).$$
Otherwise, no target is detected.
Compared with the prior art, the invention has the following advantages:
1) the invention corrects positioning with a non-gradient optimization scheme, reducing positioning errors and improving detection precision to the centimeter level;
2) the invention uses the radar to scan the full plane for detection, giving a wide detection range.
Drawings
FIG. 1 is the general flow chart of an implementation of the present invention;
FIG. 2 is the physical-radar point-cloud data map in the present invention;
FIG. 3 is an actual scene map used in the present invention;
FIG. 4 is a scene vector diagram established according to an actual scene in the present invention;
FIG. 5 is a sub-flow diagram of the generation of simulated radar point cloud data in accordance with the present invention;
FIG. 6 is a simulation diagram of the intersection of rays emitted by the simulated laser radar with the scene edge and the obstacle block, respectively, in accordance with the present invention;
FIG. 7 is a diagram of the comparison of the physical-radar and simulated-radar point cloud data at the robot positioning point in the present invention;
FIG. 8 is a sub-flowchart of the present invention for correcting the robot positioning pose using non-gradient optimization;
FIG. 9 is a point cloud data diagram of a simulated radar in accordance with the present invention;
FIG. 10 is a diagram of the comparison of the corrected physical-radar and simulated-radar point cloud data at the robot positioning point.
Detailed Description
The embodiments and effects of the present invention will be described in detail below with reference to the accompanying drawings:
referring to fig. 1, the method for detecting a dynamic target by a robot based on a laser radar includes the following specific implementation steps:
step one, a laser scanning radar is included in the center of the robot to collect and generate a point cloud set [ βn,l1n]The robot comprises a positioning unit for giving a robot positioning pose (x, y, α), wherein βnThe scanning angle of the laser radar is represented, the angle range is the radar scanning full plane, l1nIndicating the distance of the detected target from the robot at the corresponding angle βnAbscissa, < i >1nEstablishing an entity radar point cloud data map for the vertical coordinate;
The laser scanning radar is a Hokuyo UTM-30LX, with a scanning angle θ of 270 degrees, an angular resolution λ of 0.25 degrees, a scanning range of 0.1-30 m, and a scanning frequency of 40 Hz. Its point cloud data is $(\beta_n, l_n)$, where $\beta_n$ and $l_n$ respectively denote the angle and the distance of the nth ray emitted by the physical lidar relative to the lidar body polar coordinate system; taking $\beta_n$ as abscissa and $l_n$ as ordinate, the laser scanning radar point-cloud data map is built as shown in fig. 2.
Referring to fig. 2, the angle $\beta_n$ of the nth ray emitted by the lidar relative to the lidar body polar coordinate system lies in the range
$$\beta_n \in \left[-\frac{\theta}{2},\ \frac{\theta}{2}\right].$$
The radar is installed at the center of the robot, so the robot pose is the pose of the laser scanning radar. This pose is generated by Monte Carlo positioning or provided by UWB and GPS systems; assume the pose is (x, y, α), where x and y denote the position of the physical lidar in the scene coordinate system and α denotes the angle between the physical-lidar center line and the x-axis of the map coordinate system. In this example a UWB system is used to generate the robot positioning pose.
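By way of illustration (this sketch is not part of the patent text), step one can be realized in a few lines of Python; the helper name, the placeholder range values, and the symmetric body-frame angle range are assumptions:

```python
import numpy as np

# Hokuyo UTM-30LX parameters taken from the text:
# scanning angle theta = 270 deg, angular resolution lambda = 0.25 deg.
THETA, LAM = 270.0, 0.25
N_RAYS = int(THETA / LAM)          # 1080 rays, matching N_t = 1080 in step 4.1

def build_point_cloud_map(beta_deg, ranges_m):
    """Stack scan angles (deg) and measured ranges (m) into the
    [beta_n, l1_n] set used as the physical-radar point-cloud data map."""
    return np.column_stack([beta_deg, ranges_m])

# Assumed symmetric body-frame angles in [-theta/2, theta/2).
beta = -THETA / 2 + LAM * np.arange(N_RAYS)
l1 = np.full(N_RAYS, 5.0)          # placeholder ranges; a real scan supplies these
cloud = build_point_cloud_map(beta, l1)
```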
Step two: establish a vector scene map according to the scene, represent the obstacle blocks in the scene map and the peripheral edges of the scene by closed line segments, and obtain the coordinates of the start and end points of all the line segments.
Referring to fig. 3, the actual scene map used in the invention consists of obstacle blocks and scene edges enclosed by line segments; the start- and end-point coordinates of all line segments are given in table 1:
table 1: coordinates of line segment in map (Unit: CM)
[Table 1 is rendered as images in the original publication; the segment endpoint coordinates are not recoverable from the text.]
Plotting the endpoint coordinates of the line segments from the table in an x-o-y rectangular coordinate system yields the vector map of the scene, as shown in fig. 4.
Step three: establish a simulated lidar model, input the pose of the physical lidar, and solve by the intercept traversal method the point cloud set $[\beta_n, l_{2n}]$ formed by the intersections of all rays emitted by the simulated lidar with the map edges and obstacle blocks, obtaining a radar simulation graph.
3.1) establishing a polar coordinate system using the coordinate origin of the vector map and the X axis;
3.2) assume the physical radar coordinates are $(x_0, y_0)$, the angle between the physical-radar center line and the x-axis direction is α degrees, the radar scanning range is θ degrees, and the radar angular resolution is λ degrees; the simulated radar emits n rays in total, with
$$n = \frac{\theta}{\lambda}.$$
These rays emanate from the radar coordinate point at intervals of λ degrees, covering the range centered on α
$$\left[\alpha - \frac{\theta}{2},\ \alpha + \frac{\theta}{2}\right].$$
Each ray finally intersects an obstacle block or a line segment around the scene; these intersection points are the point-cloud coordinates $(x_i, y_i)$ obtained by the simulated radar scan;
3.3) referring to FIG. 5, each intersection is found by the vector method:
3.3.1) the ray and the edge contour of the obstacle block or the scene periphery are linearized into vectors; the angle γ between the ith ray and the horizontal direction satisfies
$$\gamma = \alpha - \frac{\theta}{2} + (i-1)\lambda;$$
3.3.2) solving for the intersection point $(x_i, y_i)$ of the ray and the edge contour line in two-dimensional space through the constraint that the two straight lines intersect:
according to the result of 3.3.1), the direction vector of the ith ray is $(\cos\gamma, \sin\gamma)$ and the starting point of the ray is $(x_0, y_0)$, so $(x_0 + \cos\gamma,\ y_0 + \sin\gamma)$ is also a point on the ray; let the start and end points of the edge contour be $(x_s, y_s)$ and $(x_e, y_e)$ respectively.
The intersection point simultaneously satisfies the two line equations:
$$\frac{x_i - x_0}{\cos\gamma} = \frac{y_i - y_0}{\sin\gamma}, \qquad \frac{x_i - x_s}{x_e - x_s} = \frac{y_i - y_s}{y_e - y_s}.$$
Solving this system gives the intersection coordinates $(x_i, y_i)$; the distance from the simulated-radar coordinate point to the obstacle is
$$l_\beta = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}.$$
3.4) recording γ and $l_\beta$ for all n rays yields the simulated-lidar point cloud set; its simulation graph is shown in fig. 6.
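The following Python sketch illustrates one plausible realization of step three; the parametric ray/segment test used here is a standard way to solve the two-line constraint of 3.3.2, and the function names and the hypothetical square-room map are assumptions, not the patent's code:

```python
import math

def ray_segment_hit(x0, y0, gamma_deg, xs, ys, xe, ye):
    """Parametric ray/segment intersection: one way to realize the
    two-line constraint of step 3.3.2. Returns (xi, yi, distance) or None."""
    dx, dy = math.cos(math.radians(gamma_deg)), math.sin(math.radians(gamma_deg))
    ex, ey = xe - xs, ye - ys
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                    # ray parallel to the segment
        return None
    # Solve (x0, y0) + t*(dx, dy) == (xs, ys) + u*(ex, ey) for t, u
    t = ((xs - x0) * ey - (ys - y0) * ex) / denom
    u = ((xs - x0) * dy - (ys - y0) * dx) / denom
    if t < 0.0 or not 0.0 <= u <= 1.0:        # behind the radar, or off the segment
        return None
    return x0 + t * dx, y0 + t * dy, t        # t equals l_beta (unit direction)

def simulate_scan(pose, segments, theta=270.0, lam=0.25):
    """Cast n = theta/lam rays from the input pose and keep, per ray, the
    nearest intersection over all map segments ('intercept traversal').
    Returns a list of (body-frame angle, distance) pairs."""
    x0, y0, alpha = pose
    cloud = []
    for i in range(int(theta / lam)):
        gamma = alpha - theta / 2 + i * lam   # assumed ray spacing, step 3.3.1
        hits = [h for s in segments
                if (h := ray_segment_hit(x0, y0, gamma, *s)) is not None]
        dist = min(h[2] for h in hits) if hits else float("inf")
        cloud.append((gamma - alpha, dist))
    return cloud

# Hypothetical map: a 10 m square room given as closed boundary segments.
room = [(0, 0, 10, 0), (10, 0, 10, 10), (10, 10, 0, 10), (0, 10, 0, 0)]
scan = simulate_scan((5.0, 5.0, 0.0), room)
```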
Step four: correct the robot positioning pose by a non-gradient optimization scheme, and use the corrected pose $p_k$ to replace the previously input physical-radar pose, obtaining a new simulated-lidar point cloud set and point-cloud data map.
Through the above steps, the point cloud data of the physical radar and of the simulated radar at the robot positioning point are obtained; however, there is a certain deviation between the two point clouds, and this deviation can cause false detections, as shown in fig. 7.
In fig. 7, the first line A represents the physical-lidar point cloud, the second line B represents the simulated-lidar point cloud obtained in step three, and the third line C represents the difference between the two. Observing line C shows that the simulated and physical lidar point clouds do not match completely; the black arrow marks an obstacle detected by the physical lidar, and the gray arrow marks the position of maximum deviation between the two. The deviation has two sources. The first is the difference between the actual scene map and the simulated scene map, which can be reduced by keeping the error between the two maps as small as possible when the simulated map is built. The second is that the positioning result of the physical robot is not accurate enough: the difference between the positioning pose used by the simulated radar and the actual pose of the lidar causes an overall offset of the point cloud, reflected in the offset between lines B and A inside the box in fig. 7. An offset of this kind appears throughout the point cloud, is especially large at inflection points of the curve, and easily causes false target detections. This embodiment therefore eliminates the offset by non-gradient optimization, correcting the robot positioning pose.
referring to fig. 8, the specific implementation of this step is as follows:
4.1) assume the true pose of the radar in the map coordinate system is $(x_0, y_0, \alpha_0)$; this pose is unknown but really exists. The pose obtained by the robot through its positioning unit or through Monte Carlo self-positioning is $(x_1, y_1, \alpha_1)$, which, owing to sensor noise, deviates more or less from the true pose of the robot. Define
$$C = f_C(L, L') = \frac{N_u}{N_t}$$
as the unbiased rate, where L is the physical-lidar point cloud set, L' is the simulated-lidar point cloud set, $N_t$ denotes the total number of points in the lidar point cloud ($N_t = 1080$ in this example), $N_u$ denotes the number of points satisfying distance difference l ≤ ζ, and ζ is the set threshold;
4.2) initialize the unbiased rate C to -1. Positioning the robot at a certain moment gives the pose $p_1 = (x_1, y_1, \alpha_1)$, where $x_1, y_1$ denote the position of the physical lidar in the scene coordinate system and $\alpha_1$ is the angle between the radar center line and the x-axis direction at that moment; the point cloud set scanned by the physical lidar is $L_{1n} = [\beta_n, l_{1n}]$. Taking this pose as the initial value, compute the simulated-radar point cloud based on the point $p_1$, $L_{2n} = [\beta_n, l_{2n}] = f(x_1, y_1, \alpha_1)$, and compute the unbiased rate at this time according to the formula in 4.1): $C_1 = f_C(L_{1n}, L_{2n})$;
4.3) let the step size be step. Offset $x_1, y_1, \alpha_1$ each by one step to obtain the expressions of the offset simulated-lidar point cloud sets in the three directions:
$$L_x = f(x_1 + step,\ y_1,\ \alpha_1),$$
$$L_y = f(x_1,\ y_1 + step,\ \alpha_1),$$
$$L_\alpha = f(x_1,\ y_1,\ \alpha_1 + step).$$
The unbiased rates corresponding to these point cloud sets in the three directions are
$$C_x = f_C(L_{1n}, L_x),\quad C_y = f_C(L_{1n}, L_y),\quad C_\alpha = f_C(L_{1n}, L_\alpha);$$
4.4) define dx, dy, dα as the offsets: $dx = C_x - C_1$, $dy = C_y - C_1$, $d\alpha = C_\alpha - C_1$, giving the new positioning pose $p_2 = (x_2, y_2, \alpha_2) = (x_1 + dx,\ y_1 + dy,\ \alpha_1 + d\alpha)$;
4.5) repeat (4.3) and (4.4) a total of 200 times, and record the maximum unbiased rate $C_{max}$ obtained and the pose $p_k = (x_k, y_k, \alpha_k)$ corresponding to $C_{max}$, where $x_k, y_k$ is the position of the simulated radar in the scene coordinate system when $C_{max}$ is obtained and $\alpha_k$ is the angle between the radar center line and the x-axis at that time; $p_k$ is the point that most closely approximates the true pose of the robot.
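A minimal sketch of the step-four loop follows, reusing simulate_scan from the previous sketch; the unbiased rate implements the C = N_u/N_t definition of 4.1, while the step size is an assumed parameter since the patent does not state one:

```python
def unbiased_rate(L_phys, L_sim, zeta):
    """C = N_u / N_t from step 4.1: fraction of rays whose physical/simulated
    range difference l satisfies l <= zeta."""
    diffs = [abs(l1 - l2) for (_, l1), (_, l2) in zip(L_phys, L_sim)]
    return sum(d <= zeta for d in diffs) / len(diffs)

def correct_pose(p1, L_phys, segments, zeta, step=0.01, iters=200):
    """Non-gradient pose correction, steps 4.2-4.5: probe one step in x, y and
    alpha, use the unbiased-rate gains directly as offsets dx, dy, d_alpha,
    and keep the best pose over `iters` rounds."""
    x, y, a = p1
    c1 = unbiased_rate(L_phys, simulate_scan((x, y, a), segments), zeta)
    c_max, p_k = c1, (x, y, a)
    for _ in range(iters):
        c_x = unbiased_rate(L_phys, simulate_scan((x + step, y, a), segments), zeta)
        c_y = unbiased_rate(L_phys, simulate_scan((x, y + step, a), segments), zeta)
        c_a = unbiased_rate(L_phys, simulate_scan((x, y, a + step), segments), zeta)
        # dx = C_x - C_1, dy = C_y - C_1, d_alpha = C_alpha - C_1 (step 4.4)
        x, y, a = x + (c_x - c1), y + (c_y - c1), a + (c_a - c1)
        c1 = unbiased_rate(L_phys, simulate_scan((x, y, a), segments), zeta)
        if c1 > c_max:
            c_max, p_k = c1, (x, y, a)
    return p_k, c_max
```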
step five: using the corrected pose pkAnd replacing the position and posture of the entity radar input in the past to obtain a new simulated laser radar point cloud set.
Input the pose $p_k$ that most closely approximates the robot's positioning, obtained in step four, into the simulated lidar model, and generate simulated-radar point cloud data based on this pose according to step three; the new simulated-radar point cloud data are shown in fig. 9.
The comparison of the physical-radar and simulated-radar point cloud data at $p_k$ is shown in fig. 10, where line A represents the physical-lidar point cloud, the second line B represents the new simulated-radar point cloud, and the third line C represents the difference between the two. As can be seen from fig. 10, the offset between the two is greatly reduced, indicating that the pose of the robot has been corrected.
Step six: judge whether a target exists according to the difference, at each identical scanning angle, between the points of the physical-lidar point cloud set and of the new simulated-lidar point cloud set.
6.1) subtract, at each identical scanning angle, the points of the physical-lidar point cloud set and of the new simulated-lidar point cloud set, as shown by line C in fig. 10, and compare the difference with the set threshold ζ:
if the distance difference l between n consecutive points in the physical-lidar point-cloud data map and the corresponding points in the simulated-lidar point-cloud data map is larger than the set threshold ζ, as marked by the arrow in fig. 10, the physical lidar has detected a target; execute 6.2);
otherwise, no target is detected;
6.2) the mean of $\beta_n$ over the adjacent n points is recorded as
$$\bar{\beta} = \frac{1}{n}\sum_{i=1}^{n}\beta_i,$$
the mean distance over the adjacent n points is recorded as
$$\bar{l}_1 = \frac{1}{n}\sum_{i=1}^{n}l_{1i},$$
and, taking the position of the physical lidar as the origin of a polar coordinate system, the polar coordinates of the target are obtained as
$$(\bar{\beta},\ \bar{l}_1).$$
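A sketch of the step-six decision rule is given below; the grouping of consecutive over-threshold points into runs and the assumed minimum run length n_min are illustrative choices consistent with, but not specified by, the text:

```python
def detect_targets(L_phys, L_sim, zeta, n_min=5):
    """Step six: difference the clouds at equal scan angles; any run of at
    least n_min consecutive points with difference > zeta is reported as one
    target at (mean angle, mean physical range) in radar-centred polar
    coordinates. n_min is an assumed value; the text says 'n consecutive
    points' without fixing n."""
    targets, run = [], []

    def flush():
        if len(run) >= n_min:
            beta_bar = sum(b for b, _ in run) / len(run)
            l1_bar = sum(l for _, l in run) / len(run)
            targets.append((beta_bar, l1_bar))
        run.clear()

    for (beta, l1), (_, l2) in zip(L_phys, L_sim):
        if abs(l1 - l2) > zeta:
            run.append((beta, l1))
        else:
            flush()
    flush()
    return targets
```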
The foregoing description is only an example of the present invention and is not intended to limit the invention; it will be apparent to those skilled in the art that various changes and modifications in form and detail may be made without departing from the spirit and scope of the invention.

Claims (7)

1. A method for a robot to detect dynamic targets based on laser radar, characterized by comprising the following specific implementation steps:
step one: a laser scanning radar mounted at the center of the robot collects and generates a point cloud set $[\beta_n, l_{1n}]$, and the robot's positioning unit gives the robot positioning pose (x, y, α), where $\beta_n$ denotes the scanning angle of the lidar, the angular range being the radar's full scanning plane, and $l_{1n}$ denotes the distance of the detected target from the robot at the corresponding angle; taking $\beta_n$ as abscissa and $l_{1n}$ as ordinate, a physical-radar point-cloud data map is built;
step two: establishing a vector scene map according to the scene, representing the obstacle blocks in the scene map and the peripheral edges of the scene by closed line segments, and obtaining the coordinates of the start and end points of all the line segments;
step three: establishing a simulated lidar model, inputting the pose of the physical lidar, and solving by the intercept traversal method the point cloud set $[\beta_n, l_{2n}]$ formed by the intersections of all rays emitted by the simulated lidar with the map edges and obstacle blocks, obtaining a radar simulation graph;
step four: correcting the robot positioning pose by a non-gradient optimization scheme, and using the corrected pose $p_k$ to replace the previously input physical-radar pose, obtaining a new simulated-lidar point cloud set and point-cloud data map;
step five: using the corrected pose $p_k$ to replace the previously input physical-radar pose, obtaining a new simulated-lidar point cloud set;
step six: judging whether a target exists according to the difference, at each identical scanning angle, between the points of the physical-lidar point cloud set and of the new simulated-lidar point cloud set.
2. The method for a robot to detect dynamic targets based on laser radar according to claim 1, characterized in that: the laser scanning radar used in step one is a Hokuyo UTM-30LX, with a scanning angle θ of 270 degrees, an angular resolution λ of 0.25 degrees, a scanning range of 0.1-30 m, and a scanning frequency of 40 Hz.
3. The method as claimed in claim 2, characterized in that the point cloud data of the Hokuyo UTM-30LX is $(\beta_n, l_n)$, where $\beta_n$ and $l_n$ respectively denote the angle and the distance of the nth ray emitted by the physical lidar relative to the lidar body polar coordinate system, the angle $\beta_n$ of the nth ray emitted by the laser scanning radar relative to the lidar body polar coordinate system lying in the range
$$\beta_n \in \left[-\frac{\theta}{2},\ \frac{\theta}{2}\right];$$
the radar is installed at the center of the robot, so the robot pose is the pose of the laser scanning radar, generated by Monte Carlo positioning or provided by UWB and GPS systems; assuming the pose is (x, y, α), x and y denote the position of the physical lidar in the scene coordinate system, and α denotes the angle between the physical-lidar center line and the x-axis of the map coordinate system.
4. The method for a robot to detect dynamic targets based on laser radar according to claim 1, characterized in that step three specifically comprises the following steps:
3.1) establishing a polar coordinate system using the coordinate origin of the vector map and the X axis;
3.2) assuming the physical radar coordinates are $(x_0, y_0)$, the angle between the physical-radar center line and the x-axis direction is α degrees, the radar scanning range is θ degrees, and the radar angular resolution is λ degrees; the simulated radar emits n rays in total, with
$$n = \frac{\theta}{\lambda}.$$
These rays emanate from the radar coordinate point at intervals of λ degrees, covering the range centered on α
$$\left[\alpha - \frac{\theta}{2},\ \alpha + \frac{\theta}{2}\right].$$
Each ray finally intersects an obstacle block or a line segment around the scene; these intersection points are the point-cloud coordinates $(x_i, y_i)$ obtained by the simulated radar scan;
3.3) finding each intersection point by a vector method:
3.3.1) the ray and the edge contour of the obstacle block or the scene periphery are linearized into vectors; the angle γ between the ith ray and the horizontal direction satisfies
$$\gamma = \alpha - \frac{\theta}{2} + (i-1)\lambda;$$
3.3.2) solving for the intersection point $(x_i, y_i)$ of the ray and the edge contour line in two-dimensional space through the constraint that the two straight lines intersect:
according to the result of 3.3.1), the direction vector of the ith ray is $(\cos\gamma, \sin\gamma)$ and the starting point of the ray is $(x_0, y_0)$, so $(x_0 + \cos\gamma,\ y_0 + \sin\gamma)$ is also a point on the ray; letting the start and end points of the edge contour be $(x_s, y_s)$ and $(x_e, y_e)$ respectively, the intersection point simultaneously satisfies the two line equations:
$$\frac{x_i - x_0}{\cos\gamma} = \frac{y_i - y_0}{\sin\gamma}, \qquad \frac{x_i - x_s}{x_e - x_s} = \frac{y_i - y_s}{y_e - y_s}.$$
Solving this system gives the intersection coordinates $(x_i, y_i)$; the distance from the simulated-radar coordinate point to the obstacle is
$$l_\beta = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2};$$
3.4) recording γ and $l_\beta$ for all n rays yields the simulated-lidar point cloud set.
5. The method for a robot to detect dynamic targets based on laser radar according to claim 1, characterized in that step four specifically comprises the following steps:
4.1) assuming the true pose of the radar in the map coordinate system is $(x_0, y_0, \alpha_0)$, which is unknown but really exists, the pose obtained by the robot through its positioning unit or through Monte Carlo self-positioning is $(x_1, y_1, \alpha_1)$, which, owing to sensor noise, deviates more or less from the true pose of the robot; defining
$$C = f_C(L, L') = \frac{N_u}{N_t}$$
as the unbiased rate, where L is the physical-lidar point cloud set, L' is the simulated-lidar point cloud set, $N_t$ denotes the total number of points in the lidar point cloud ($N_t = 1080$ in this example), $N_u$ denotes the number of points satisfying distance difference l ≤ ζ, and ζ is the set threshold;
4.2) initializing the unbiased rate C to -1; positioning the robot at a certain moment gives the pose $p_1 = (x_1, y_1, \alpha_1)$, where $x_1, y_1$ denote the position of the physical lidar in the scene coordinate system and $\alpha_1$ is the angle between the radar center line and the x-axis direction at that moment; the point cloud set scanned by the physical lidar is $L_{1n} = [\beta_n, l_{1n}]$; taking this pose as the initial value, computing the simulated-radar point cloud based on the point $p_1$, $L_{2n} = [\beta_n, l_{2n}] = f(x_1, y_1, \alpha_1)$, and computing the unbiased rate at this time according to the formula in 4.1): $C_1 = f_C(L_{1n}, L_{2n})$;
4.3) letting the step size be step, offsetting $x_1, y_1, \alpha_1$ each by one step to obtain the expressions of the offset simulated-lidar point cloud sets in the three directions:
$$L_x = f(x_1 + step,\ y_1,\ \alpha_1),$$
$$L_y = f(x_1,\ y_1 + step,\ \alpha_1),$$
$$L_\alpha = f(x_1,\ y_1,\ \alpha_1 + step),$$
the unbiased rates corresponding to these point cloud sets in the three directions being
$$C_x = f_C(L_{1n}, L_x),\quad C_y = f_C(L_{1n}, L_y),\quad C_\alpha = f_C(L_{1n}, L_\alpha);$$
4.4) defining dx, dy, dα as the offsets: $dx = C_x - C_1$, $dy = C_y - C_1$, $d\alpha = C_\alpha - C_1$, giving the new positioning pose $p_2 = (x_2, y_2, \alpha_2) = (x_1 + dx,\ y_1 + dy,\ \alpha_1 + d\alpha)$;
4.5) repeating (4.3) and (4.4) a total of 200 times, and recording the maximum unbiased rate $C_{max}$ obtained and the pose $p_k = (x_k, y_k, \alpha_k)$ corresponding to $C_{max}$, where $x_k, y_k$ is the position of the simulated radar in the scene coordinate system when $C_{max}$ is obtained and $\alpha_k$ is the angle between the radar center line and the x-axis at that time; $p_k$ is the point that most closely approximates the true pose of the robot.
6. The method for a robot to detect dynamic targets based on laser radar according to claim 1, characterized in that step five specifically comprises the following further steps:
inputting the pose $p_k$ that most closely approximates the robot's positioning, obtained in step four, into the simulated lidar model, and generating simulated-radar point cloud data based on this pose according to step three, thereby generating new simulated-radar point cloud data;
comparing the physical-radar point cloud data and the simulated-radar point cloud data at $p_k$, where line A represents the physical-lidar point cloud, the second line B represents the new simulated-radar point cloud, and the third line C represents the difference between the two; the pose of the robot is thereby corrected.
7. The method for a robot to detect dynamic targets based on laser radar according to claim 1, characterized in that step six specifically comprises the following further steps:
6.1) subtracting, at each identical scanning angle, the points of the physical-lidar point cloud set and of the new simulated-lidar point cloud set, and comparing the difference with the set threshold ζ:
if the distance difference l between n consecutive points in the laser scanning radar point-cloud data map and the corresponding points in the simulated-lidar point-cloud data map is larger than the set threshold ζ, the laser scanning radar has detected a target; executing 6.2);
otherwise, no target is detected;
6.2) recording the mean of $\beta_n$ over the adjacent n points as
$$\bar{\beta} = \frac{1}{n}\sum_{i=1}^{n}\beta_i,$$
recording the mean distance over the adjacent n points as
$$\bar{l}_1 = \frac{1}{n}\sum_{i=1}^{n}l_{1i},$$
and, taking the position of the physical lidar as the origin of a polar coordinate system, obtaining the polar coordinates of the target as
$$(\bar{\beta},\ \bar{l}_1).$$
CN202010310030.8A 2020-04-20 2020-04-20 Dynamic target detection method of robot based on laser radar Active CN111522022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010310030.8A CN111522022B (en) 2020-04-20 2020-04-20 Dynamic target detection method of robot based on laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010310030.8A CN111522022B (en) 2020-04-20 2020-04-20 Dynamic target detection method of robot based on laser radar

Publications (2)

Publication Number Publication Date
CN111522022A true CN111522022A (en) 2020-08-11
CN111522022B CN111522022B (en) 2023-03-28

Family

ID=71903379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010310030.8A Active CN111522022B (en) 2020-04-20 2020-04-20 Dynamic target detection method of robot based on laser radar

Country Status (1)

Country Link
CN (1) CN111522022B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112433211A (en) * 2020-11-27 2021-03-02 浙江商汤科技开发有限公司 Pose determination method and device, electronic equipment and storage medium
CN112732849A (en) * 2020-12-14 2021-04-30 北京航空航天大学 High-precision vector map compression method based on polar coordinate system
CN113359151A (en) * 2021-08-09 2021-09-07 浙江华睿科技股份有限公司 Robot task point positioning method and device
CN114743449A (en) * 2020-12-23 2022-07-12 北醒(北京)光子科技有限公司 Thing networking teaching aid based on laser radar

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018182538A1 (en) * 2017-03-31 2018-10-04 Agency For Science, Technology And Research Systems and methods that improve alignment of a robotic arm to an object
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018182538A1 (en) * 2017-03-31 2018-10-04 Agency For Science, Technology And Research Systems and methods that improve alignment of a robotic arm to an object
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Yanguo et al., "Laser radar point cloud fusion method based on an inertial measurement unit" (基于惯性测量单元的激光雷达点云融合方法), Journal of System Simulation (《系统仿真学报》) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112433211A (en) * 2020-11-27 2021-03-02 浙江商汤科技开发有限公司 Pose determination method and device, electronic equipment and storage medium
WO2022110653A1 (en) * 2020-11-27 2022-06-02 浙江商汤科技开发有限公司 Pose determination method and apparatus, electronic device and computer-readable storage medium
CN112732849A (en) * 2020-12-14 2021-04-30 北京航空航天大学 High-precision vector map compression method based on polar coordinate system
CN114743449A (en) * 2020-12-23 2022-07-12 北醒(北京)光子科技有限公司 Thing networking teaching aid based on laser radar
CN113359151A (en) * 2021-08-09 2021-09-07 浙江华睿科技股份有限公司 Robot task point positioning method and device

Also Published As

Publication number Publication date
CN111522022B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN110645974B (en) Mobile robot indoor map construction method fusing multiple sensors
CN111522022B (en) Dynamic target detection method of robot based on laser radar
Nieto et al. Recursive scan-matching SLAM
WO2021189468A1 (en) Attitude correction method, apparatus and system for laser radar
Kümmerle et al. Large scale graph-based SLAM using aerial images as prior information
US6470271B2 (en) Obstacle detecting apparatus and method, and storage medium which stores program for implementing the method
US8831778B2 (en) Method of accurate mapping with mobile robots
US8930127B2 (en) Localization method for mobile robots based on landmarks
CN107632308B (en) Method for detecting contour of obstacle in front of vehicle based on recursive superposition algorithm
CN111693053B (en) Repositioning method and system based on mobile robot
US9739616B2 (en) Target recognition and localization methods using a laser sensor for wheeled mobile robots
CN112415494B (en) AGV double-laser-radar position calibration method, device, equipment and storage medium
CN108332752B (en) Indoor robot positioning method and device
Zhao et al. Prediction-based geometric feature extraction for 2D laser scanner
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
Großmann et al. Robust mobile robot localisation from sparse and noisy proximity readings using Hough transform and probability grids
Deng et al. Research on target recognition and path planning for EOD robot
Donoso-Aguirre et al. Mobile robot localization using the Hausdorff distance
Brandt et al. Controlled active exploration of uncalibrated environments
CN113902828A (en) Construction method of indoor two-dimensional semantic map with corner as key feature
Chang et al. Robust accurate LiDAR-GNSS/IMU self-calibration based on iterative refinement
Lee et al. Robust Robot Navigation using Polar Coordinates in Dynamic Environments
Li et al. A single-shot pose estimation approach for a 2D laser rangefinder
Szaj et al. Vehicle localization using laser scanner
CN111061273A (en) Autonomous obstacle avoidance fusion method and system for unmanned ship

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant