CN112198491A - Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar - Google Patents

Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar

Info

Publication number
CN112198491A
CN112198491A (application CN202011070061.7A)
Authority
CN
China
Prior art keywords
dimensional
point cloud
data
robot
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011070061.7A
Other languages
Chinese (zh)
Other versions
CN112198491B (en)
Inventor
赖志林
骆增辉
杨晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Saite Intelligent Technology Co Ltd
Original Assignee
Guangzhou Saite Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Saite Intelligent Technology Co Ltd filed Critical Guangzhou Saite Intelligent Technology Co Ltd
Priority to CN202011070061.7A priority Critical patent/CN112198491B/en
Publication of CN112198491A publication Critical patent/CN112198491A/en
Application granted granted Critical
Publication of CN112198491B publication Critical patent/CN112198491B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar. The system comprises a plurality of two-dimensional laser radars, a three-dimensional fusion algorithm module and a dynamic window point cloud module; the two-dimensional laser radars are each connected to the three-dimensional fusion algorithm module, and the three-dimensional fusion algorithm module is connected to the dynamic window point cloud module. The two-dimensional laser radars are mounted at different positions on the robot body and collect point cloud data C1. At each moment the three-dimensional fusion algorithm module transforms and fuses the point cloud data C1 collected by all the two-dimensional laser radars into one frame of three-dimensional point cloud data C3, transforms C3 into point cloud data C4 in the global positioning frame, and inserts C4 into the point cloud data Z maintained by the dynamic window point cloud module, which is updated under space-time constraints. Three-dimensional perception of the environment is thus achieved without auxiliary hardware, effectively reducing cost.

Description

Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar
Technical Field
The invention relates to the technical field of robot perception, in particular to a robot three-dimensional perception system and a robot three-dimensional perception method based on a low-cost two-dimensional laser radar.
Background
Perception is a basic capability of a robot and underpins modules such as localization, decision making and planning. A robot with only two-dimensional perception can detect information in a single plane of the environment and therefore has large blind areas. A robot with three-dimensional perception can model the surrounding three-dimensional world completely, which benefits the subsequent localization, decision-making and planning modules.
Laser radar (lidar) is an important sensor for a robot to perceive the world, and can be divided by dimensionality into two-dimensional and three-dimensional lidar. Three-dimensional lidar can directly sense three-dimensional environment information but is generally expensive; two-dimensional lidar is much cheaper but can only sense two-dimensional environment information.
Application No. CN201610444260.7 discloses a variable-field-of-view three-dimensional reconstruction device based on a swinging laser radar, i.e. a device that realizes three-dimensional environment perception with a two-dimensional lidar by means of a lidar swing mechanism and a corresponding motion control module. However, the swing mechanism demands high mechanical precision, the motion control module demands high motion-control precision, and the cost is therefore high.
In the prior art, other low-cost sensors are also used for robot perception, such as ultrasonic sensors, infrared sensors and cameras. An ultrasonic sensor can only sense objects within a cone, cannot measure object positions accurately, and has a short measuring range. An infrared sensor can only measure objects along a line and thus has only one-dimensional sensing capability. A camera perceives rich environmental information, but an ordinary camera carries no scale information and cannot measure object positions accurately, while a depth camera provides scale information but is usually expensive and susceptible to ambient light. Existing implementations of robot three-dimensional perception systems therefore suffer from high cost.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar that effectively reduce cost.
The invention is realized by the following technical scheme: the robot three-dimensional sensing system comprises a plurality of two-dimensional laser radars, a three-dimensional fusion algorithm module and a dynamic window point cloud module. The two-dimensional laser radars are each connected to the three-dimensional fusion algorithm module, and the three-dimensional fusion algorithm module is connected to the dynamic window point cloud module. The two-dimensional laser radars are mounted on the robot body at different positions and orientations and are used to collect point cloud data C1 within their respective sensing ranges; the three-dimensional fusion algorithm module is used to fuse the point cloud data C1 collected by all the two-dimensional laser radars at a given moment; and the dynamic window point cloud module is used to maintain point cloud data Z over a period of time.
Further: the dynamic window point cloud module maintains the point cloud data Z for 1-5 min.
Further: the sensing range of each two-dimensional laser radar is a circular plane area parallel to the mounting base plane of the two-dimensional laser radar.
Further: the number of two-dimensional laser radars is five; one two-dimensional laser radar is mounted horizontally on the chassis of the robot body, and the remaining four are mounted obliquely at the four corners of the robot body.
A robot three-dimensional perception method based on a low-cost two-dimensional laser radar comprises the following steps:
S1, a plurality of two-dimensional laser radars respectively collect point cloud data C1 within their sensing ranges; the point cloud data C1 is expressed in the radar coordinate system of the corresponding two-dimensional laser radar, and the data points P1 in the point cloud data C1 are the data point coordinates in the corresponding radar coordinate system;
S2, the three-dimensional fusion algorithm module transforms the data points P1 in the point cloud data C1 of each two-dimensional laser radar into data points P2 in point cloud data C2; the point cloud data C2 is expressed in the robot body coordinate system, and the data points P2 are the data point coordinates in the robot body coordinate system; the point cloud data C2 are fused into one frame of three-dimensional point cloud data C3, and the robot visible region R at the current moment is calculated;
S3, the three-dimensional fusion algorithm module transforms the data points P3 in the three-dimensional point cloud data C3 into data points P4 in point cloud data C4; the point cloud data C4 is expressed in the global positioning coordinate system, and the data points P4 are the data point coordinates in the global positioning coordinate system;
S4, the three-dimensional fusion algorithm traverses all data points P4 in the point cloud data C4 and inserts each data point P4 into the dynamic window point cloud module, which maintains and stores the data points P4 for a period of time to obtain point cloud data Z;
S5, the three-dimensional fusion algorithm module updates the point cloud data Z based on space-time constraints, deleting from Z the data points whose insertion time is too early or which are too far from the robot body;
S6, the three-dimensional fusion algorithm module deletes from the point cloud data Z the data points that lie within the robot visible region R;
S7, the dynamic window point cloud module outputs the updated point cloud data Z as the robot's three-dimensional perception data at the current moment.
Further: the transformation in step S2 from a data point P1 in the point cloud data C1 to a data point P2 in the point cloud data C2 is

$$\tilde{P}_2 = M_{12}\,\tilde{P}_1$$

where $P_1 = (x_1\ y_1\ z_1)^T$ are the coordinates of a data point in the radar coordinate system and $\tilde{P}_1 = (x_1\ y_1\ z_1\ 1)^T$ are its homogeneous coordinates; $P_2 = (x_2\ y_2\ z_2)^T$ are the coordinates of the corresponding data point in the robot body coordinate system and $\tilde{P}_2 = (x_2\ y_2\ z_2\ 1)^T$ are its homogeneous coordinates;

$$M_{12} = \begin{pmatrix} R_{12} & t_{12} \\ 0 & 1 \end{pmatrix}$$

is the transformation matrix describing the transformation from the radar coordinate system to the robot body coordinate system; $R_{12}$ is the $3 \times 3$ rotation matrix describing the rotation from the radar coordinate system to the robot body coordinate system, given by the mounting orientation of the two-dimensional laser radar on the robot body; and $t_{12} = (t_x\ t_y\ t_z)^T$ is the translation vector describing the translation from the radar coordinate system to the robot body coordinate system, given by the mounting position of the two-dimensional laser radar on the robot body.
Further: the transformation in step S3 from a data point P3 in the three-dimensional point cloud data C3 to a data point P4 in the point cloud data C4 is

$$\tilde{P}_4 = M_{34}\,\tilde{P}_3$$

where $P_3 = (x_3\ y_3\ z_3)^T$ are the coordinates of a data point in the robot body coordinate system and $\tilde{P}_3 = (x_3\ y_3\ z_3\ 1)^T$ are its homogeneous coordinates; $P_4 = (x_4\ y_4\ z_4)^T$ are the coordinates of the corresponding data point in the global positioning coordinate system and $\tilde{P}_4 = (x_4\ y_4\ z_4\ 1)^T$ are its homogeneous coordinates;

$$M_{34} = \begin{pmatrix} R_{34} & t_{34} \\ 0 & 1 \end{pmatrix}$$

is the transformation matrix describing the transformation from the robot body coordinate system to the global positioning coordinate system; $R_{34}$ is the $3 \times 3$ rotation matrix describing the rotation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map; and $t_{34} = (t'_x\ t'_y\ t'_z)^T$ is the translation vector describing the translation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map.
Further: the robot visible region R in step S2 is the union of the sensing ranges of all the two-dimensional laser radars.
Advantages of the Invention
Compared with the prior art, a plurality of two-dimensional laser radars are mounted on the robot body at different positions and orientations. The two-dimensional laser radars collect point cloud data C1 within their sensing ranges; the three-dimensional fusion algorithm module transforms and fuses the point cloud data C1 collected by all the two-dimensional laser radars at a given moment into one frame of three-dimensional point cloud data C3 and calculates the robot visible region R at the current moment, then transforms the point cloud data C3 into point cloud data C4 and inserts C4 into the dynamic window point cloud module, which maintains and stores the data for a period of time to obtain point cloud data Z. The three-dimensional fusion algorithm module updates the point cloud data Z under space-time constraints so that a real-time point cloud of reasonable size is maintained, and finally the dynamic window point cloud module outputs the updated point cloud data Z as the robot's three-dimensional perception data at the current moment. Three-dimensional perception of the environment is thus achieved without auxiliary hardware, effectively reducing cost.
Drawings
FIG. 1 is a three-dimensional sensing system architecture diagram of the present invention;
FIG. 2 is a two-dimensional lidar sensing range diagram of the present invention;
FIG. 3 is a top view of the robot of the present invention;
FIG. 4 is a front view of the robot of the present invention;
FIG. 5 is a flow chart of a three-dimensional sensing method according to the present invention.
Description of reference numerals: the system comprises a robot body 1, a two-dimensional laser radar 2, a three-dimensional fusion algorithm module 3 and a dynamic window point cloud module 4.
Detailed Description
Referring to fig. 1, the robot three-dimensional sensing system based on the low-cost two-dimensional laser radar comprises a plurality of two-dimensional laser radars 2, a three-dimensional fusion algorithm module 3 and a dynamic window point cloud module 4. The two-dimensional laser radars 2 are each connected to the three-dimensional fusion algorithm module 3, and the three-dimensional fusion algorithm module 3 is connected to the dynamic window point cloud module 4. The two-dimensional laser radars 2 are mounted on the robot body 1 at different positions and orientations; they collect point cloud data C1 within their sensing ranges, the three-dimensional fusion algorithm module 3 fuses the point cloud data C1 collected by all the two-dimensional laser radars 2 at a given moment, and the dynamic window point cloud module 4 maintains the point cloud data Z over a period of time.
Specifically, the dynamic window point cloud module 4 maintains a window around the robot; the window rolls along with the robot's movement, and the point cloud data Z lies within the physical space covered by the window.
The dynamic window point cloud module 4 maintains the point cloud data Z for 1-5 min.
Referring to fig. 2, the sensing range of the two-dimensional lidar 2 is a circular plane area parallel to the mounting base of the two-dimensional lidar.
Specifically, the center of the circular plane area and the center of the mounting base plane of the two-dimensional lidar 2 lie on the same axis, which is the central axis of the two-dimensional lidar 2.
Referring to fig. 3, five two-dimensional lidars 2 are provided: one two-dimensional lidar 2 is mounted horizontally on the chassis of the robot body 1, and the remaining four two-dimensional lidars 2 are mounted obliquely at the four corners of the robot body 1. The dotted ellipses are the sensing ranges of the obliquely mounted lidars 2, and the dotted circle is the sensing range of the horizontally mounted lidar 2.
Referring to fig. 4, two two-dimensional lidars 2 are mounted obliquely on the two sides of the robot body 1 and one two-dimensional lidar 2 is mounted horizontally on the chassis of the robot body 1; the arrows are the central axes of the corresponding lidars 2, pointing upward from each lidar, and the dotted elliptical areas are the corresponding sensing ranges.
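For concreteness, the sketch below lists one possible mounting configuration matching the layout of fig. 3 and fig. 4. The offsets and tilt angles are hypothetical values for illustration only and are not specified in the patent.

```python
import numpy as np

# Hypothetical mounting poses for the five 2D lidars of fig. 3: translation
# (x, y, z) from the robot body origin in metres, orientation (roll, pitch, yaw)
# in radians. One lidar lies flat on the chassis; four are tilted at the corners.
LIDAR_MOUNTS = {
    "chassis":     {"xyz": (0.00,  0.00, 0.15), "rpy": (0.0,  0.0,          0.0)},
    "front_left":  {"xyz": (0.30,  0.25, 0.80), "rpy": (0.0, -np.pi / 4,  np.pi / 4)},
    "front_right": {"xyz": (0.30, -0.25, 0.80), "rpy": (0.0, -np.pi / 4, -np.pi / 4)},
    "rear_left":   {"xyz": (-0.30,  0.25, 0.80), "rpy": (0.0, -np.pi / 4,  3 * np.pi / 4)},
    "rear_right":  {"xyz": (-0.30, -0.25, 0.80), "rpy": (0.0, -np.pi / 4, -3 * np.pi / 4)},
}
```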
Referring to fig. 5, the invention relates to a robot three-dimensional sensing method based on a low-cost two-dimensional laser radar, which comprises the following steps:
S1, a plurality of two-dimensional laser radars respectively collect point cloud data C1 within their sensing ranges; the point cloud data C1 is expressed in the radar coordinate system of the corresponding two-dimensional laser radar, and the data points P1 in the point cloud data C1 are the data point coordinates in the corresponding radar coordinate system.
S2, the three-dimensional fusion algorithm module transforms the data points P1 in the point cloud data C1 collected by each two-dimensional laser radar into data points P2 in point cloud data C2; the point cloud data C2 is expressed in the robot body coordinate system, and the data points P2 are the data point coordinates in the robot body coordinate system. The point cloud data C2 are fused into one frame of three-dimensional point cloud data C3, and the robot visible region R at the current moment is calculated.
Specifically, the three-dimensional point cloud data C3 is the union of the point cloud data C2 obtained by transforming the point cloud data C1 collected by the individual two-dimensional laser radars; the robot visible region consists of the sensing ranges of all the two-dimensional laser radars.
The transformation from a data point P1 in the point cloud data C1 to a data point P2 in the point cloud data C2 is

$$\tilde{P}_2 = M_{12}\,\tilde{P}_1$$

where $P_1 = (x_1\ y_1\ z_1)^T$ are the coordinates of a data point in the radar coordinate system and $\tilde{P}_1 = (x_1\ y_1\ z_1\ 1)^T$ are its homogeneous coordinates; $P_2 = (x_2\ y_2\ z_2)^T$ are the coordinates of the corresponding data point in the robot body coordinate system and $\tilde{P}_2 = (x_2\ y_2\ z_2\ 1)^T$ are its homogeneous coordinates;

$$M_{12} = \begin{pmatrix} R_{12} & t_{12} \\ 0 & 1 \end{pmatrix}$$

is the transformation matrix describing the transformation from the radar coordinate system to the robot body coordinate system; $R_{12}$ is the $3 \times 3$ rotation matrix describing the rotation from the radar coordinate system to the robot body coordinate system, given by the mounting orientation of the two-dimensional laser radar on the robot body; and $t_{12} = (t_x\ t_y\ t_z)^T$ is the translation vector describing the translation from the radar coordinate system to the robot body coordinate system, given by the mounting position of the two-dimensional laser radar on the robot body.
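As a non-authoritative illustration of steps S1-S2 and the formula above, the following Python sketch builds M12 from a lidar's mounting pose, turns a 2D scan into point cloud data C1, transforms it into the robot body frame (C2), and takes the union over all lidars to obtain one frame of C3. Function names, the roll/pitch/yaw convention and the example values are assumptions, not taken from the patent.

```python
import numpy as np

def rotation_from_rpy(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """3x3 rotation matrix R12 built as Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def mount_to_matrix(xyz, rpy) -> np.ndarray:
    """4x4 homogeneous transform M12 from the radar frame to the robot body frame."""
    M = np.eye(4)
    M[:3, :3] = rotation_from_rpy(*rpy)
    M[:3, 3] = xyz
    return M

def scan_to_points(ranges, angles) -> np.ndarray:
    """Point cloud C1: each 2D range/bearing sample becomes (x, y, 0) in the radar frame."""
    ranges, angles = np.asarray(ranges), np.asarray(angles)
    return np.stack([ranges * np.cos(angles),
                     ranges * np.sin(angles),
                     np.zeros_like(ranges)], axis=1)

def fuse_to_body_frame(scans: dict, mounts: dict) -> np.ndarray:
    """One frame of C3: the union of every lidar's C1 transformed into the body frame (C2)."""
    clouds = []
    for name, (ranges, angles) in scans.items():
        c1 = scan_to_points(ranges, angles)
        homogeneous = np.hstack([c1, np.ones((len(c1), 1))])   # P1 -> homogeneous coords
        m12 = mount_to_matrix(**mounts[name])
        clouds.append((homogeneous @ m12.T)[:, :3])            # C2 for this lidar
    return np.vstack(clouds)

# Hypothetical example: two lidars, each reporting a full 360-degree scan at 2 m.
mounts = {"chassis":    {"xyz": (0.0, 0.0, 0.15), "rpy": (0.0, 0.0, 0.0)},
          "front_left": {"xyz": (0.3, 0.25, 0.80), "rpy": (0.0, -np.pi / 4, np.pi / 4)}}
scans = {name: (np.full(360, 2.0), np.linspace(-np.pi, np.pi, 360, endpoint=False))
         for name in mounts}
print(fuse_to_body_frame(scans, mounts).shape)   # (720, 3)
```

With a mounting dictionary such as the one sketched after fig. 4, the same routine covers all five lidars of the embodiment.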
S3: the three-dimensional fusion algorithm module transforms the data points P3 in the three-dimensional point cloud data C3 into data points P4 in point cloud data C4; the point cloud data C4 is expressed in the global positioning coordinate system, and the data points P4 are the data point coordinates in the global positioning coordinate system.
Specifically, the transformation from a data point P3 in the three-dimensional point cloud data C3 to a data point P4 in the point cloud data C4 is

$$\tilde{P}_4 = M_{34}\,\tilde{P}_3$$

where $P_3 = (x_3\ y_3\ z_3)^T$ are the coordinates of a data point in the robot body coordinate system and $\tilde{P}_3 = (x_3\ y_3\ z_3\ 1)^T$ are its homogeneous coordinates; $P_4 = (x_4\ y_4\ z_4)^T$ are the coordinates of the corresponding data point in the global positioning coordinate system and $\tilde{P}_4 = (x_4\ y_4\ z_4\ 1)^T$ are its homogeneous coordinates;

$$M_{34} = \begin{pmatrix} R_{34} & t_{34} \\ 0 & 1 \end{pmatrix}$$

is the transformation matrix describing the transformation from the robot body coordinate system to the global positioning coordinate system; $R_{34}$ is the $3 \times 3$ rotation matrix describing the rotation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map; and $t_{34} = (t'_x\ t'_y\ t'_z)^T$ is the translation vector describing the translation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map.
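A similarly hedged sketch of step S3: M34 is assembled from the robot's localization result and applied to every point of C3. The patent only states that R34 and t34 come from the robot's pose in the global map; the planar (x, y, z, yaw) pose used below is an assumption for illustration.

```python
import numpy as np

def pose_to_matrix(x: float, y: float, z: float, yaw: float) -> np.ndarray:
    """4x4 transform M34 from the robot body frame to the global positioning frame,
    assuming a planar localization result; a full 6-DoF pose would fill R34 the same way."""
    c, s = np.cos(yaw), np.sin(yaw)
    M = np.eye(4)
    M[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # R34
    M[:3, 3] = (x, y, z)                                                 # t34
    return M

def body_to_global(c3: np.ndarray, pose) -> np.ndarray:
    """C4: every data point P3 of C3 expressed in the global positioning frame as P4."""
    homogeneous = np.hstack([c3, np.ones((len(c3), 1))])
    return (homogeneous @ pose_to_matrix(*pose).T)[:, :3]

# Example: a two-point cloud shifted and rotated by the robot's localized pose.
c3 = np.array([[1.0, 0.0, 0.2], [0.0, 1.0, 0.2]])
print(body_to_global(c3, (5.0, 2.0, 0.0, np.pi / 2)))
```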
S4: the three-dimensional fusion algorithm traverses all data points P4 in the point cloud data C4 and inserts each data point P4 into the dynamic window point cloud module, which maintains and stores the data points P4 for a period of time to obtain point cloud data Z.
S5: the three-dimensional fusion algorithm module updates the point cloud data Z based on space-time constraints, deleting from Z the data points whose insertion time is too early or which are too far from the robot body.
Specifically, the three-dimensional fusion algorithm traverses all data points in the point cloud data Z; a data point is deleted if the time elapsed since its insertion exceeds a preset time threshold, or if its distance from the robot body at the current moment exceeds a preset distance threshold.
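One way the space-time constraint of step S5 could be realized is sketched below: each point carries its insertion time, and pruning drops anything older than the window duration or farther than a distance threshold from the robot. The class name and both threshold values are hypothetical; the patent only fixes the window duration at 1-5 min.

```python
import numpy as np
from collections import deque

class DynamicWindowCloud:
    """Rolling point cloud Z: inserted points carry a timestamp, and points that are
    too old or too far from the robot body are removed (space-time constraint, step S5)."""

    def __init__(self, max_age_s: float = 120.0, max_dist_m: float = 15.0):
        self.max_age_s = max_age_s      # within the 1-5 min window stated in the patent
        self.max_dist_m = max_dist_m    # hypothetical distance threshold
        self.points = deque()           # entries: (timestamp, np.ndarray([x, y, z]))

    def insert(self, c4, stamp: float) -> None:
        """Step S4: insert every data point P4 of the current frame C4."""
        for p in np.asarray(c4, dtype=float):
            self.points.append((stamp, p))

    def prune(self, now: float, robot_xyz) -> None:
        """Step S5: drop points whose insertion time is too early or that are too far away."""
        robot_xyz = np.asarray(robot_xyz, dtype=float)
        self.points = deque(
            (stamp, p) for stamp, p in self.points
            if now - stamp <= self.max_age_s
            and np.linalg.norm(p - robot_xyz) <= self.max_dist_m
        )

    def cloud(self) -> np.ndarray:
        """Current point cloud data Z as an (N, 3) array."""
        return np.array([p for _, p in self.points]).reshape(-1, 3)

# Example: one nearby and one far point inserted at t = 150 s, pruned at t = 200 s.
z = DynamicWindowCloud()
z.insert([[1.0, 0.0, 0.0], [30.0, 0.0, 0.0]], stamp=150.0)
z.prune(now=200.0, robot_xyz=[0.0, 0.0, 0.0])
print(z.cloud())   # only the nearby point survives
```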
S6: the three-dimensional fusion algorithm module deletes from the point cloud data Z the data points that lie within the robot visible region R.
Specifically, the three-dimensional fusion algorithm traverses all data points in the point cloud data Z; a data point is deleted if it lies in any of the sensing planes that make up the robot visible region R.
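The visible-region filter of step S6 could be realized as below, treating region R as the union of the lidars' circular sensing planes and removing every point of Z that lies on one of them. The patent does not specify the geometric test; the thin-disc check and its thickness tolerance are assumptions.

```python
import numpy as np

def on_sensing_disc(points, centre, normal, radius, thickness=0.05):
    """Boolean mask: points lying on one lidar's circular sensing plane, i.e. within
    `thickness` of the plane and within `radius` of the disc centre."""
    points, centre = np.asarray(points, float), np.asarray(centre, float)
    normal = np.asarray(normal, float)
    normal = normal / np.linalg.norm(normal)
    rel = points - centre
    off_plane = rel @ normal                                   # signed distance to the plane
    in_plane = rel - np.outer(off_plane, normal)               # projection onto the plane
    return (np.abs(off_plane) <= thickness) & (np.linalg.norm(in_plane, axis=1) <= radius)

def prune_visible_region(z_cloud, discs):
    """Step S6: delete every point of Z that falls inside any sensing plane of region R."""
    z_cloud = np.asarray(z_cloud, float)
    visible = np.zeros(len(z_cloud), dtype=bool)
    for centre, normal, radius in discs:
        visible |= on_sensing_disc(z_cloud, centre, normal, radius)
    return z_cloud[~visible]

# Example: one horizontal sensing disc of radius 10 m at z = 0.15 m (hypothetical chassis
# lidar, expressed in the global frame). The point on the disc is removed; the point above it is kept.
z_cloud = [[2.0, 0.0, 0.15], [2.0, 0.0, 1.50]]
discs = [((0.0, 0.0, 0.15), (0.0, 0.0, 1.0), 10.0)]
print(prune_visible_region(z_cloud, discs))
```

Since Z is kept in the global positioning frame, the disc centres and normals would have to be re-expressed in that frame (via M34) at every update.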
S7: the dynamic window point cloud module outputs the updated point cloud data Z as the robot's three-dimensional perception data at the current moment.
In summary, a plurality of two-dimensional laser radars are mounted on the robot body at different positions and orientations. The two-dimensional laser radars collect point cloud data C1 within their sensing planes; the three-dimensional fusion algorithm module transforms and fuses the point cloud data C1 collected by all the two-dimensional laser radars at a given moment into one frame of three-dimensional point cloud data C3 and calculates the robot visible region R at the current moment, then transforms the point cloud data C3 into point cloud data C4 and inserts C4 into the dynamic window point cloud module, which maintains and stores the data for a period of time to obtain point cloud data Z. After the point cloud data Z has been updated under the space-time and visible-region constraints, the dynamic window point cloud module outputs it as the robot's three-dimensional perception data at the current moment. Three-dimensional perception of the environment is thus achieved without auxiliary hardware, effectively reducing cost.
The above detailed description addresses possible embodiments of the invention. The embodiments are not intended to limit the scope of the invention, and all equivalent implementations or modifications that do not depart from the scope of the invention are intended to be included within it.

Claims (8)

1. A robot three-dimensional sensing system based on low-cost two-dimensional laser radar, characterized in that: it comprises a plurality of two-dimensional laser radars, a three-dimensional fusion algorithm module and a dynamic window point cloud module; the two-dimensional laser radars are each connected to the three-dimensional fusion algorithm module, and the three-dimensional fusion algorithm module is connected to the dynamic window point cloud module; the two-dimensional laser radars are mounted on the robot body at different positions and orientations and are used for collecting point cloud data C1 within their sensing ranges; the three-dimensional fusion algorithm module is used for fusing the point cloud data C1 collected by all the two-dimensional laser radars at a given moment; and the dynamic window point cloud module is used for maintaining point cloud data Z over a period of time.
2. The robot three-dimensional perception system based on the low-cost two-dimensional laser radar as claimed in claim 1 is characterized in that: the dynamic window point cloud module maintains the point cloud data Z for 1-5 min.
3. The robot three-dimensional perception system based on the low-cost two-dimensional laser radar is characterized in that: the sensing range of the two-dimensional laser radar is a circular plane area, and the circular plane area is parallel to the mounting base plane of the two-dimensional laser radar.
4. The robot three-dimensional perception system based on the low-cost two-dimensional laser radar is characterized in that: the number of two-dimensional laser radars is five; one two-dimensional laser radar is mounted horizontally on the chassis of the robot body, and the remaining four are mounted obliquely at the four corners of the robot body.
5. A robot three-dimensional perception method based on a low-cost two-dimensional laser radar, characterized by comprising the following steps:
S1, a plurality of two-dimensional laser radars respectively collect point cloud data C1 within their sensing ranges; the point cloud data C1 is expressed in the radar coordinate system of the corresponding two-dimensional laser radar, and the data points P1 in the point cloud data C1 are the data point coordinates in the corresponding radar coordinate system;
S2, the three-dimensional fusion algorithm module transforms the data points P1 in the point cloud data C1 of each two-dimensional laser radar into data points P2 in point cloud data C2; the point cloud data C2 is expressed in the robot body coordinate system, and the data points P2 are the data point coordinates in the robot body coordinate system; the point cloud data C2 are fused into one frame of three-dimensional point cloud data C3, and the robot visible region R at the current moment is calculated;
S3, the three-dimensional fusion algorithm module transforms the data points P3 in the three-dimensional point cloud data C3 into data points P4 in point cloud data C4; the point cloud data C4 is expressed in the global positioning coordinate system, and the data points P4 are the data point coordinates in the global positioning coordinate system;
S4, the three-dimensional fusion algorithm traverses all data points P4 in the point cloud data C4 and inserts each data point P4 into the dynamic window point cloud module, which maintains and stores the data points P4 for a period of time to obtain point cloud data Z;
S5, the three-dimensional fusion algorithm module updates the point cloud data Z based on space-time constraints, deleting from Z the data points whose insertion time is too early or which are too far from the robot body;
S6, the three-dimensional fusion algorithm module deletes from the point cloud data Z the data points that lie within the robot visible region R;
S7, the dynamic window point cloud module outputs the updated point cloud data Z as the robot's three-dimensional perception data at the current moment.
6. The robot three-dimensional perception method based on the low-cost two-dimensional laser radar as claimed in claim 5, wherein the transformation in step S2 from a data point P1 in the point cloud data C1 to a data point P2 in the point cloud data C2 is

$$\tilde{P}_2 = M_{12}\,\tilde{P}_1$$

where $P_1 = (x_1\ y_1\ z_1)^T$ are the coordinates of a data point in the radar coordinate system and $\tilde{P}_1 = (x_1\ y_1\ z_1\ 1)^T$ are its homogeneous coordinates; $P_2 = (x_2\ y_2\ z_2)^T$ are the coordinates of the corresponding data point in the robot body coordinate system and $\tilde{P}_2 = (x_2\ y_2\ z_2\ 1)^T$ are its homogeneous coordinates;

$$M_{12} = \begin{pmatrix} R_{12} & t_{12} \\ 0 & 1 \end{pmatrix}$$

is the transformation matrix describing the transformation from the radar coordinate system to the robot body coordinate system; $R_{12}$ is the $3 \times 3$ rotation matrix describing the rotation from the radar coordinate system to the robot body coordinate system, given by the mounting orientation of the two-dimensional laser radar on the robot body; and $t_{12} = (t_x\ t_y\ t_z)^T$ is the translation vector describing the translation from the radar coordinate system to the robot body coordinate system, given by the mounting position of the two-dimensional laser radar on the robot body.
7. The robot three-dimensional perception method based on the low-cost two-dimensional laser radar as claimed in claim 5, wherein the transformation in step S3 from a data point P3 in the three-dimensional point cloud data C3 to a data point P4 in the point cloud data C4 is

$$\tilde{P}_4 = M_{34}\,\tilde{P}_3$$

where $P_3 = (x_3\ y_3\ z_3)^T$ are the coordinates of a data point in the robot body coordinate system and $\tilde{P}_3 = (x_3\ y_3\ z_3\ 1)^T$ are its homogeneous coordinates; $P_4 = (x_4\ y_4\ z_4)^T$ are the coordinates of the corresponding data point in the global positioning coordinate system and $\tilde{P}_4 = (x_4\ y_4\ z_4\ 1)^T$ are its homogeneous coordinates;

$$M_{34} = \begin{pmatrix} R_{34} & t_{34} \\ 0 & 1 \end{pmatrix}$$

is the transformation matrix describing the transformation from the robot body coordinate system to the global positioning coordinate system; $R_{34}$ is the $3 \times 3$ rotation matrix describing the rotation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map; and $t_{34} = (t'_x\ t'_y\ t'_z)^T$ is the translation vector describing the translation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map.
8. The robot three-dimensional perception method based on the low-cost two-dimensional laser radar as claimed in claim 5, wherein the robot visible region R in step S2 is composed of the sensing ranges of all the two-dimensional laser radars.
CN202011070061.7A 2020-09-30 2020-09-30 Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar Active CN112198491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011070061.7A CN112198491B (en) 2020-09-30 2020-09-30 Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar


Publications (2)

Publication Number Publication Date
CN112198491A true CN112198491A (en) 2021-01-08
CN112198491B CN112198491B (en) 2023-06-09

Family

ID=74013643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011070061.7A Active CN112198491B (en) 2020-09-30 2020-09-30 Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar

Country Status (1)

Country Link
CN (1) CN112198491B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106199626A (en) * 2016-06-30 2016-12-07 上海交通大学 Based on the indoor three-dimensional point cloud map generation system and the method that swing laser radar
CN107167141A (en) * 2017-06-15 2017-09-15 同济大学 Robot autonomous navigation system based on double line laser radars
US20180210087A1 (en) * 2017-01-26 2018-07-26 The Regents Of The University Of Michigan Localization Using 2D Maps Which Capture Vertical Structures In 3D Point Data
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method
CN110095786A (en) * 2019-04-30 2019-08-06 北京云迹科技有限公司 Three-dimensional point cloud based on a line laser radar ground drawing generating method and system
CN110244322A (en) * 2019-06-28 2019-09-17 东南大学 Pavement construction robot environment sensory perceptual system and method based on Multiple Source Sensor
CN110349221A (en) * 2019-07-16 2019-10-18 北京航空航天大学 A kind of three-dimensional laser radar merges scaling method with binocular visible light sensor
CN110658530A (en) * 2019-08-01 2020-01-07 北京联合大学 Map construction method and system based on double-laser-radar data fusion and map
CN111259807A (en) * 2020-01-17 2020-06-09 中国矿业大学 Underground limited area mobile equipment positioning system
CN111596659A (en) * 2020-05-14 2020-08-28 福勤智能科技(昆山)有限公司 Automatic guided vehicle and system based on Mecanum wheels


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李琳; 张旭; 屠大维: "Joint calibration method for an integrated two-dimensional and three-dimensional vision sensing system", Chinese Journal of Scientific Instrument *

Also Published As

Publication number Publication date
CN112198491B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN111095355B (en) Real-time positioning and orientation tracker
CN108898676B (en) Method and system for detecting collision and shielding between virtual and real objects
JP2020030204A (en) Distance measurement method, program, distance measurement system and movable object
JP2019215853A (en) Method for positioning, device for positioning, device, and computer readable storage medium
JP6830140B2 (en) Motion vector field determination method, motion vector field determination device, equipment, computer readable storage medium and vehicle
CN109547769B (en) Highway traffic dynamic three-dimensional digital scene acquisition and construction system and working method thereof
CN113657224A (en) Method, device and equipment for determining object state in vehicle-road cooperation
KR20220025028A (en) Method and device for building beacon map based on visual beacon
CN113989450A (en) Image processing method, image processing apparatus, electronic device, and medium
CN114743021A (en) Fusion method and system of power transmission line image and point cloud data
CN113191388A (en) Image acquisition system for target detection model training and sample generation method
CN112733428A (en) Scanning attitude and coverage path planning method for optical measurement
CN107941167B (en) Space scanning system based on unmanned aerial vehicle carrier and structured light scanning technology and working method thereof
CN115588040A (en) System and method for counting and positioning coordinates based on full-view imaging points
CN114488094A (en) Vehicle-mounted multi-line laser radar and IMU external parameter automatic calibration method and device
CN116106870A (en) Calibration method and device for external parameters of vehicle laser radar
US20210156710A1 (en) Map processing method, device, and computer-readable storage medium
CN112381873A (en) Data labeling method and device
CN112198491A (en) Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar
CN115307646B (en) Multi-sensor fusion robot positioning method, system and device
CN116027351A (en) Hand-held/knapsack type SLAM device and positioning method
CN114429515A (en) Point cloud map construction method, device and equipment
CN113792645A (en) AI eyeball fusing image and laser radar
CN113218392A (en) Indoor positioning navigation method and navigation device
CN111521996A (en) Laser radar installation calibration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 510000 201, building a, No.19 nanxiangsan Road, Huangpu District, Guangzhou City, Guangdong Province
Applicant after: GUANGZHOU SAITE INTELLIGENT TECHNOLOGY Co.,Ltd.
Address before: 510000 Room 303, 36 Kaitai Avenue, Huangpu District, Guangzhou City, Guangdong Province
Applicant before: GUANGZHOU SAITE INTELLIGENT TECHNOLOGY Co.,Ltd.
GR01 Patent grant