CN112198491B - Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar - Google Patents

Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar

Info

Publication number
CN112198491B
CN112198491B CN202011070061.7A
Authority
CN
China
Prior art keywords
point cloud
dimensional
data
cloud data
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011070061.7A
Other languages
Chinese (zh)
Other versions
CN112198491A (en)
Inventor
赖志林
骆增辉
杨晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Saite Intelligent Technology Co Ltd
Original Assignee
Guangzhou Saite Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Saite Intelligent Technology Co Ltd filed Critical Guangzhou Saite Intelligent Technology Co Ltd
Priority to CN202011070061.7A priority Critical patent/CN112198491B/en
Publication of CN112198491A publication Critical patent/CN112198491A/en
Application granted granted Critical
Publication of CN112198491B publication Critical patent/CN112198491B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a robot three-dimensional sensing system based on low-cost two-dimensional laser radars and a method thereof. The system comprises a plurality of two-dimensional laser radars, a three-dimensional fusion algorithm module and a dynamic window point cloud module; the plurality of two-dimensional laser radars are respectively connected with the three-dimensional fusion algorithm module, and the three-dimensional fusion algorithm module is connected with the dynamic window point cloud module. The plurality of two-dimensional laser radars are mounted on a robot body and acquire point cloud data C1. The three-dimensional fusion algorithm module transforms and fuses the point cloud data C1 acquired by all the two-dimensional laser radars at a certain moment into three-dimensional point cloud data C3, updates the point cloud data Z in the dynamic window point cloud module in combination with space-time constraints, transforms the point cloud data C3 into point cloud data C4, and inserts the point cloud data C4 into the point cloud data Z in the dynamic window point cloud module. The dynamic window point cloud module then outputs the point cloud data Z, so that three-dimensional sensing of the environment is achieved without auxiliary hardware equipment, effectively reducing cost.

Description

Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar
Technical Field
The invention relates to the technical field of robot perception, in particular to a robot three-dimensional perception system based on a low-cost two-dimensional laser radar and a method thereof.
Background
Perception is a basic capability of a robot and is important for modules such as localization, decision making and planning. A robot with only two-dimensional sensing capability can detect information in a single plane of the environment and therefore has a large sensing blind area. A robot with three-dimensional sensing capability can model the three-dimensional world of its surroundings, which benefits subsequent localization, decision making and planning.
The laser radar is an important sensor for robot perception and, by dimension, can be divided into two-dimensional laser radar and three-dimensional laser radar. A three-dimensional laser radar can directly sense three-dimensional environment information but is usually expensive; a two-dimensional laser radar costs much less than a three-dimensional laser radar but can only sense two-dimensional environment information.
Chinese patent application CN201610444260.7 discloses a variable-field-of-view three-dimensional reconstruction device based on a swinging laser radar, that is, a device that achieves three-dimensional environment sensing with a two-dimensional laser radar by means of a laser radar swinging mechanism and a corresponding mechanism motion control module. However, the swinging mechanism demands high mechanical precision, the motion control module demands high motion-control precision, and the cost is therefore high.
Other low-cost sensors are also used for robot perception in the prior art, such as ultrasonic sensors, infrared sensors and cameras. An ultrasonic sensor can only sense objects within a cone, cannot accurately measure their position, and has a short measuring range; an infrared sensor can only measure objects along a line and thus has one-dimensional sensing capability; a camera can capture rich environmental information, but an ordinary camera provides no scale information and cannot accurately measure the position of environmental objects, while a depth camera provides scale information but is usually expensive and easily affected by ambient light. Existing implementations of robot three-dimensional sensing systems therefore suffer from high cost.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a robot three-dimensional sensing system based on a low-cost two-dimensional laser radar and a method thereof, which can effectively reduce cost.
The invention is realized by the following technical scheme: the robot three-dimensional perception system based on the low-cost two-dimensional laser radar comprises a plurality of two-dimensional laser radars, a three-dimensional fusion algorithm module and a dynamic window point cloud module. The plurality of two-dimensional laser radars are respectively connected with the three-dimensional fusion algorithm module, and the three-dimensional fusion algorithm module is connected with the dynamic window point cloud module. The plurality of two-dimensional laser radars are respectively mounted on a robot body, each facing a different direction. The two-dimensional laser radars are used for acquiring point cloud data C1 within their sensing ranges, the three-dimensional fusion algorithm module is used for fusing the point cloud data C1 acquired by all the two-dimensional laser radars at a certain moment, and the dynamic window point cloud module is used for maintaining point cloud data Z over a period of time.
Further: the dynamic window point cloud module maintains the point cloud data Z for 1 min to 5 min.
Further: the sensing range of the two-dimensional laser radar is a circular plane area which is parallel to the mounting base surface of the two-dimensional laser radar.
Further: five two-dimensional laser radars are provided; one of them is mounted horizontally on the chassis of the robot body, and the other four are mounted obliquely at the four corners of the robot body.
A robot three-dimensional sensing method based on a low-cost two-dimensional laser radar comprises the following steps:
S1, the plurality of two-dimensional laser radars respectively acquire point cloud data C1 within their own sensing ranges, wherein the point cloud data C1 is expressed in the radar coordinate system of the corresponding two-dimensional laser radar, and a data point P1 in the point cloud data C1 is the data point coordinate in that radar coordinate system;
S2, the three-dimensional fusion algorithm module transforms each data point P1 in the point cloud data C1 of every two-dimensional laser radar into a data point P2 in point cloud data C2, wherein the point cloud data C2 is expressed in the robot body coordinate system and P2 is the data point coordinate in the robot body coordinate system; the point cloud data C2 are fused into one frame of three-dimensional point cloud data C3, and the robot visible region R at the current moment is calculated;
S3, the three-dimensional fusion algorithm module transforms each data point P3 in the three-dimensional point cloud data C3 into a data point P4 in point cloud data C4, wherein the point cloud data C4 is expressed in the global positioning coordinate system and P4 is the data point coordinate in the global positioning coordinate system;
S4, the three-dimensional fusion algorithm module traverses all data points P4 in the point cloud data C4 and inserts each data point P4 into the dynamic window point cloud module, which keeps the stored data points P4 for a period of time to obtain point cloud data Z;
S5, the three-dimensional fusion algorithm module updates the point cloud data Z based on space-time constraints, deleting from the point cloud data Z the data points whose insertion time is too early or which are too far from the robot body;
S6, the three-dimensional fusion algorithm module deletes, based on the robot visible region R, the data points of the point cloud data Z that lie within the robot visible region R;
S7, the dynamic window point cloud module outputs the updated point cloud data Z as the three-dimensional perception data of the robot at the current moment.
Further: in step S2, the transformation from a data point P1 in the point cloud data C1 to a data point P2 in the point cloud data C2 is:

$$\tilde{P}_2 = M_{12}\,\tilde{P}_1$$

wherein $P_1$ denotes the coordinates of a data point in any radar coordinate system, and $\tilde{P}_1 = (x_1, y_1, z_1, 1)^T$ is the homogeneous coordinate corresponding to $P_1$;

$P_2$ denotes the coordinates of the data point in the robot body coordinate system corresponding to $P_1$, and $\tilde{P}_2 = (x_2, y_2, z_2, 1)^T$ is the homogeneous coordinate corresponding to $P_2$;

$M_{12} = \begin{pmatrix} R_{12} & t_{12} \\ 0 & 1 \end{pmatrix}$ is the transformation matrix describing the transformation from the radar coordinate system to the robot body coordinate system;

$R_{12}$ is the rotation matrix describing the rotation from the radar coordinate system to the robot body coordinate system, given by the mounting orientation of the two-dimensional laser radar on the robot body;

$t_{12} = (t_x, t_y, t_z)^T$ is the translation vector describing the translation from the radar coordinate system to the robot body coordinate system, given by the mounting position of the two-dimensional laser radar on the robot body.
Further: in step S3, the transformation from a data point P3 in the three-dimensional point cloud data C3 to a data point P4 in the point cloud data C4 is:

$$\tilde{P}_4 = M_{34}\,\tilde{P}_3$$

wherein $P_3$ denotes the coordinates of a data point in the robot body coordinate system, and $\tilde{P}_3 = (x_3, y_3, z_3, 1)^T$ is the homogeneous coordinate corresponding to $P_3$;

$P_4$ denotes the coordinates of the data point in the global positioning coordinate system corresponding to $P_3$, and $\tilde{P}_4 = (x_4, y_4, z_4, 1)^T$ is the homogeneous coordinate corresponding to $P_4$;

$M_{34} = \begin{pmatrix} R_{34} & t_{34} \\ 0 & 1 \end{pmatrix}$ is the transformation matrix describing the transformation from the robot body coordinate system to the global positioning coordinate system;

$R_{34}$ is the rotation matrix describing the rotation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map;

$t_{34} = (t'_x, t'_y, t'_z)^T$ is the translation vector describing the translation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map.
Further: the robot view area R in step S2 is composed of all the sensing ranges of the two-dimensional lidar.
The beneficial effects of the invention are as follows:
Compared with the prior art, a plurality of two-dimensional laser radars are mounted on the robot body, each facing a different direction, and each two-dimensional laser radar acquires point cloud data C1 within its sensing range. The three-dimensional fusion algorithm module transforms and fuses the point cloud data C1 acquired by all the two-dimensional laser radars at a certain moment into one frame of three-dimensional point cloud data C3 and calculates the robot visible region R at the current moment, then transforms the point cloud data C3 into point cloud data C4 and inserts it into the dynamic window point cloud module. The dynamic window point cloud module keeps the stored point cloud data C4 for a period of time to obtain the point cloud data Z, and the three-dimensional fusion algorithm module updates the point cloud data Z in combination with space-time constraints so that a real-time point cloud data space of reasonable size is maintained. Finally, the dynamic window point cloud module outputs the updated point cloud data Z as the three-dimensional perception data of the robot at the current moment. Three-dimensional sensing of the environment is thus achieved, effectively reducing cost.
Drawings
FIG. 1 is a schematic diagram of a three-dimensional perception system in accordance with the present invention;
FIG. 2 is a diagram of the sensing range of the two-dimensional lidar of the present invention;
FIG. 3 is a top view of the robot of the present invention;
FIG. 4 is a front view of the robot of the present invention;
FIG. 5 is a flow chart of a three-dimensional sensing method according to the present invention.
Reference numerals: 1, robot body; 2, two-dimensional laser radar; 3, three-dimensional fusion algorithm module; 4, dynamic window point cloud module.
Detailed Description
Referring to fig. 1, the robot three-dimensional sensing system based on the low-cost two-dimensional laser radar comprises a plurality of two-dimensional laser radars 2, a three-dimensional fusion algorithm module 3 and a dynamic window point cloud module 4. The plurality of two-dimensional laser radars 2 are respectively connected with the three-dimensional fusion algorithm module 3, and the three-dimensional fusion algorithm module 3 is connected with the dynamic window point cloud module 4. The plurality of two-dimensional laser radars 2 are respectively mounted on the robot body 1, each facing a different direction. The two-dimensional laser radars 2 are used for acquiring point cloud data C1 within their sensing ranges, the three-dimensional fusion algorithm module 3 is used for fusing the point cloud data C1 acquired by all the two-dimensional laser radars 2 at a certain moment, and the dynamic window point cloud module 4 is used for maintaining point cloud data Z over a period of time.
Specifically, the dynamic window point cloud module 4 maintains a window around the robot, and as the robot moves, the point cloud data Z is within the covered space of the window.
The time for the dynamic window point cloud module 4 to maintain the point cloud data Z is 1min-5min.
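As an illustration only (not part of the patented embodiment), the dynamic window point cloud Z could be kept as a buffer of points together with per-point insertion timestamps; the following Python sketch assumes numpy, freely chosen field names and an illustrative window length:

```python
import numpy as np

# Minimal sketch of a dynamic window point cloud Z: points in the global
# positioning frame plus the moment each point was inserted. The field names,
# the flat-array layout and the 2 min window are illustrative assumptions.
class DynamicWindowCloud:
    def __init__(self, max_age_s=120.0):      # within the 1 min - 5 min range above
        self.points = np.empty((0, 3))        # point cloud Z, one row per data point
        self.stamps = np.empty((0,))          # insertion time of each data point
        self.max_age_s = max_age_s

    def insert(self, c4, now):
        """Insert one frame of point cloud data C4 (step S4 below)."""
        self.points = np.vstack([self.points, c4])
        self.stamps = np.concatenate([self.stamps, np.full(len(c4), now)])

    def output(self):
        """Step S7 below: the maintained points are the robot's 3D perception data."""
        return self.points
```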
Referring to fig. 2, the sensing range of the two-dimensional lidar 2 is a circular planar area that is parallel to the mounting base of the two-dimensional lidar.
Specifically, the center of the circular planar area and the center of the mounting base of the two-dimensional laser radar 2 lie on the same axis, and this axis is the central axis of the two-dimensional laser radar 2.
Referring to fig. 3, five two-dimensional laser radars 2 are provided; one of them is mounted horizontally on the chassis of the robot body 1, and the remaining four are mounted obliquely at the four corners of the robot body 1. The dashed ellipses indicate the sensing ranges of the obliquely mounted two-dimensional laser radars 2, and the dashed circle indicates the sensing range of the horizontally mounted two-dimensional laser radar 2.
Referring to fig. 4, two-dimensional laser radars 2 are mounted obliquely on both sides of the robot body 1, and one two-dimensional laser radar 2 is mounted horizontally on the chassis of the robot body 1. Each arrow indicates the central axis of the corresponding two-dimensional laser radar 2 and points to the upper side of that radar, and each dashed elliptical area is the sensing range of the corresponding two-dimensional laser radar 2.
Referring to fig. 5, the robot three-dimensional sensing method based on the low-cost two-dimensional laser radar of the invention comprises the following steps:
S1, the plurality of two-dimensional laser radars respectively acquire point cloud data C1 within their own sensing ranges, wherein the point cloud data C1 is expressed in the radar coordinate system of the corresponding two-dimensional laser radar, and a data point P1 in the point cloud data C1 is the data point coordinate in that radar coordinate system.
S2, the three-dimensional fusion algorithm module transforms each data point P1 in the point cloud data C1 acquired by every two-dimensional laser radar into a data point P2 in point cloud data C2, wherein the point cloud data C2 is expressed in the robot body coordinate system and P2 is the data point coordinate in the robot body coordinate system; the point cloud data C2 are fused into one frame of three-dimensional point cloud data C3, and the robot visible region R at the current moment is calculated.
Specifically, the three-dimensional point cloud data C3 is the union of the point cloud data C2 obtained by transforming the point cloud data C1 collected by the plurality of two-dimensional laser radars; the robot visible region R consists of the sensing ranges of all the two-dimensional laser radars.
The transformation from a data point P1 in the point cloud data C1 to a data point P2 in the point cloud data C2 is:

$$\tilde{P}_2 = M_{12}\,\tilde{P}_1$$

wherein $P_1$ denotes the coordinates of a data point in any radar coordinate system, and $\tilde{P}_1 = (x_1, y_1, z_1, 1)^T$ is the homogeneous coordinate corresponding to $P_1$;

$P_2$ denotes the coordinates of the data point in the robot body coordinate system corresponding to $P_1$, and $\tilde{P}_2 = (x_2, y_2, z_2, 1)^T$ is the homogeneous coordinate corresponding to $P_2$;

$M_{12} = \begin{pmatrix} R_{12} & t_{12} \\ 0 & 1 \end{pmatrix}$ is the transformation matrix describing the transformation from the radar coordinate system to the robot body coordinate system;

$R_{12}$ is the rotation matrix describing the rotation from the radar coordinate system to the robot body coordinate system, given by the mounting orientation of the two-dimensional laser radar on the robot body;

$t_{12} = (t_x, t_y, t_z)^T$ is the translation vector describing the translation from the radar coordinate system to the robot body coordinate system, given by the mounting position of the two-dimensional laser radar on the robot body.
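As a sketch of how this transformation might be applied in practice (assuming numpy and that each laser radar's mounting rotation R12 and translation t12 are known from calibration; the function names are chosen purely for illustration), each radar's cloud C1 can be mapped into the body frame and the results stacked into one frame C3:

```python
import numpy as np

def make_transform(rotation, translation):
    """Assemble a 4x4 homogeneous matrix [[R, t], [0, 1]] (e.g. M12)."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

def transform_points(m, points):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])  # (x, y, z, 1)
    return (homogeneous @ m.T)[:, :3]                             # back to (x, y, z)

def fuse_to_c3(c1_clouds, extrinsics):
    """Step S2: transform each radar's C1 into the body frame (giving C2)
    and take the union of all C2 as one frame of 3D point cloud data C3.

    c1_clouds: list of (N_i, 3) arrays, one per 2D laser radar (points lie in
               that radar's scan plane, typically with a zero third coordinate).
    extrinsics: list of (R12, t12) pairs given by each radar's mounting pose.
    """
    c2_clouds = [transform_points(make_transform(r, t), c1)
                 for (r, t), c1 in zip(extrinsics, c1_clouds)]
    return np.vstack(c2_clouds)
```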
S3: the three-dimensional fusion algorithm module is used for carrying out three-dimensional fusion on data points P in three-dimensional point cloud data C3 3 Transforming into data point P in point cloud data C4 4 The point cloud data C4 is a global positioning coordinate system, and the data points P in the point cloud data C4 4 Is the coordinates of the data points in the global positioning coordinate system.
Specifically, the transformation from a data point P3 in the three-dimensional point cloud data C3 to a data point P4 in the point cloud data C4 is:

$$\tilde{P}_4 = M_{34}\,\tilde{P}_3$$

wherein $P_3$ denotes the coordinates of a data point in the robot body coordinate system, and $\tilde{P}_3 = (x_3, y_3, z_3, 1)^T$ is the homogeneous coordinate corresponding to $P_3$;

$P_4$ denotes the coordinates of the data point in the global positioning coordinate system corresponding to $P_3$, and $\tilde{P}_4 = (x_4, y_4, z_4, 1)^T$ is the homogeneous coordinate corresponding to $P_4$;

$M_{34} = \begin{pmatrix} R_{34} & t_{34} \\ 0 & 1 \end{pmatrix}$ is the transformation matrix describing the transformation from the robot body coordinate system to the global positioning coordinate system;

$R_{34}$ is the rotation matrix describing the rotation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map;

$t_{34} = (t'_x, t'_y, t'_z)^T$ is the translation vector describing the translation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map.
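A complementary sketch for step S3; it assumes, purely for illustration, that the localization module reports the robot pose as a planar position (x, y) and yaw in the global map, from which R34 and t34 are assembled:

```python
import numpy as np

def pose_to_m34(x, y, yaw):
    """Build M34 from a planar localization result (x, y, yaw).
    The planar-pose form is an illustrative assumption; a full 6-DoF pose
    would fill R34 and t34 in the same way."""
    c, s = np.cos(yaw), np.sin(yaw)
    m34 = np.eye(4)
    m34[:3, :3] = np.array([[c, -s, 0.0],
                            [s,  c, 0.0],
                            [0.0, 0.0, 1.0]])  # rotation R34
    m34[:3, 3] = [x, y, 0.0]                   # translation t34
    return m34

def c3_to_c4(c3, m34):
    """Step S3: map the fused body-frame cloud C3 into the global frame, giving C4."""
    homogeneous = np.hstack([c3, np.ones((len(c3), 1))])
    return (homogeneous @ m34.T)[:, :3]
```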
S4: traversing all data points P in point cloud data C4 by adopting three-dimensional fusion algorithm 4 Each data point P 4 Inserting the data points P into a dynamic window point cloud module which stores the data points P 4 Maintaining the storage for a period of time to obtain point cloud data Z;
and S5, updating the point cloud data Z based on space-time constraint by the three-dimensional fusion algorithm module, and deleting the data points which are too early in insertion time or too far from the robot body from the point cloud data Z.
Specifically, the three-dimensional fusion algorithm module traverses all data points in the point cloud data Z; if the time elapsed since a data point was inserted exceeds a preset time threshold, the data point is deleted, or if the distance between a data point and the robot body at the current moment exceeds a preset distance threshold, the data point is deleted.
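A possible form of this space-time pruning, assuming Z is stored as points plus per-point insertion timestamps (as in the earlier sketch) and using illustrative threshold values not taken from the patent:

```python
import numpy as np

def prune_space_time(points, stamps, robot_position, now,
                     max_age_s=120.0, max_range_m=30.0):
    """Step S5: keep only the points of Z whose insertion time is recent
    enough and which are still close enough to the robot body.

    points: (N, 3) global-frame point cloud Z; stamps: (N,) insertion times;
    robot_position: (3,) current position of the robot body in the global frame.
    """
    recent = (now - stamps) <= max_age_s                               # time threshold
    near = np.linalg.norm(points - robot_position, axis=1) <= max_range_m  # distance threshold
    keep = recent & near
    return points[keep], stamps[keep]
```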
S6, the three-dimensional fusion algorithm module deletes, based on the robot visible region R, the data points of the point cloud data Z that lie within the robot visible region R.
Specifically, the three-dimensional fusion algorithm module traverses all data points in the point cloud data Z; if a data point lies within any of the sensing planes that make up the robot visible region R, the data point is deleted.
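One way to sketch this visible-region check is to model each laser radar's sensing range as a thin circular disc around its scan plane, expressed in that radar's own frame; the disc radius and thickness below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def prune_visible_region(points, stamps, lidar_poses_global,
                         disc_radius_m=20.0, disc_half_thickness_m=0.05):
    """Step S6: delete the points of Z that lie inside the current visible
    region R, modeled here as the union of thin circular discs around each
    radar's scan plane (an illustrative reading of the region R).

    lidar_poses_global: list of 4x4 matrices mapping each radar frame to the
    global frame at the current moment (i.e. M34 @ M12 for each radar).
    """
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    inside_any = np.zeros(len(points), dtype=bool)
    for pose in lidar_poses_global:
        local = (homogeneous @ np.linalg.inv(pose).T)[:, :3]  # point in radar frame
        in_plane = np.abs(local[:, 2]) <= disc_half_thickness_m
        in_range = np.linalg.norm(local[:, :2], axis=1) <= disc_radius_m
        inside_any |= in_plane & in_range
    keep = ~inside_any
    return points[keep], stamps[keep]
```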
S7, the dynamic window point cloud module outputs the updated point cloud data Z as the three-dimensional perception data of the robot at the current moment.
In the invention, a plurality of two-dimensional laser radars are mounted on the robot body, each facing a different direction, and each acquires point cloud data C1 within its sensing plane. The three-dimensional fusion algorithm module transforms and fuses the point cloud data C1 acquired by all the two-dimensional laser radars at a certain moment into one frame of three-dimensional point cloud data C3 and calculates the robot visible region R at the current moment, then transforms the point cloud data C3 into point cloud data C4 and inserts it into the dynamic window point cloud module. The dynamic window point cloud module keeps the stored point cloud data C4 for a period of time to obtain the point cloud data Z, and the three-dimensional fusion algorithm module updates the point cloud data Z in combination with space-time constraints so that a real-time point cloud data space of reasonable size is maintained. Finally, the dynamic window point cloud module outputs the point cloud data Z as the three-dimensional perception data of the robot at the current moment. Three-dimensional sensing of the environment is thus achieved without hardware auxiliary equipment, effectively reducing cost.
The foregoing detailed description describes particular embodiments of the invention and is not intended to limit the scope of the invention; all modifications and variations falling within the scope of the invention are covered.

Claims (4)

1. A robot three-dimensional perception method based on a low-cost two-dimensional laser radar, characterized in that: the robot three-dimensional perception system based on the low-cost two-dimensional laser radar comprises a plurality of two-dimensional laser radars, a three-dimensional fusion algorithm module and a dynamic window point cloud module; the plurality of two-dimensional laser radars are respectively connected with the three-dimensional fusion algorithm module, and the three-dimensional fusion algorithm module is connected with the dynamic window point cloud module; the plurality of two-dimensional laser radars are respectively mounted on a robot body, each two-dimensional laser radar facing a different direction; the two-dimensional laser radars are used for acquiring point cloud data C1 within their sensing ranges, the three-dimensional fusion algorithm module is used for fusing the point cloud data C1 acquired by all the two-dimensional laser radars at a certain moment, and the dynamic window point cloud module is used for maintaining point cloud data Z over a period of time, the time for which the dynamic window point cloud module maintains the point cloud data Z being 1 min to 5 min; the sensing range of a two-dimensional laser radar is a circular planar area parallel to the mounting base of that two-dimensional laser radar; five two-dimensional laser radars are provided, one of which is mounted horizontally on the chassis of the robot body while the other four are mounted obliquely at the four corners of the robot body;
the three-dimensional sensing method of the robot comprises the following steps:
S1, the plurality of two-dimensional laser radars respectively acquire point cloud data C1 within their own sensing ranges, wherein the point cloud data C1 is expressed in the radar coordinate system of the corresponding two-dimensional laser radar, and a data point P1 in the point cloud data C1 is the data point coordinate in that radar coordinate system;
S2, the three-dimensional fusion algorithm module transforms each data point P1 in the point cloud data C1 of every two-dimensional laser radar into a data point P2 in point cloud data C2, wherein the point cloud data C2 is expressed in the robot body coordinate system and P2 is the data point coordinate in the robot body coordinate system; the point cloud data C2 are fused into one frame of three-dimensional point cloud data C3, and a robot visible region R at the current moment is calculated;
S3, the three-dimensional fusion algorithm module transforms each data point P3 in the three-dimensional point cloud data C3 into a data point P4 in point cloud data C4, wherein the point cloud data C4 is expressed in the global positioning coordinate system and P4 is the data point coordinate in the global positioning coordinate system;
S4, the three-dimensional fusion algorithm module traverses all data points P4 in the point cloud data C4 and inserts each data point P4 into the dynamic window point cloud module, which keeps the stored data points P4 for a period of time to obtain point cloud data Z;
S5, the three-dimensional fusion algorithm module updates the point cloud data Z based on space-time constraints, deleting from the point cloud data Z the data points whose insertion time is too early or which are too far from the robot body;
s6, deleting data points positioned in the robot visible area R in the point cloud data Z based on the robot visible area R by the three-dimensional fusion algorithm module;
and S7, outputting updated point cloud data Z by the dynamic window point cloud module to serve as three-dimensional sensing data of the robot at the current moment.
2. The robot three-dimensional perception method based on the low-cost two-dimensional laser radar according to claim 1, wherein in step S2, the transformation from a data point P1 in the point cloud data C1 to a data point P2 in the point cloud data C2 is:

$$\tilde{P}_2 = M_{12}\,\tilde{P}_1$$

wherein $P_1$ denotes the coordinates of a data point in any radar coordinate system, and $\tilde{P}_1 = (x_1, y_1, z_1, 1)^T$ is the homogeneous coordinate corresponding to $P_1$;

$P_2$ denotes the coordinates of the data point in the robot body coordinate system corresponding to $P_1$, and $\tilde{P}_2 = (x_2, y_2, z_2, 1)^T$ is the homogeneous coordinate corresponding to $P_2$;

$M_{12} = \begin{pmatrix} R_{12} & t_{12} \\ 0 & 1 \end{pmatrix}$ is the transformation matrix describing the transformation from the radar coordinate system to the robot body coordinate system;

$R_{12}$ is the rotation matrix describing the rotation from the radar coordinate system to the robot body coordinate system, given by the mounting orientation of the two-dimensional laser radar on the robot body;

$t_{12} = (t_x, t_y, t_z)^T$ is the translation vector describing the translation from the radar coordinate system to the robot body coordinate system, given by the mounting position of the two-dimensional laser radar on the robot body.
3. The robot three-dimensional perception method based on the low-cost two-dimensional laser radar according to claim 2, wherein in step S3, the transformation from a data point P3 in the three-dimensional point cloud data C3 to a data point P4 in the point cloud data C4 is:

$$\tilde{P}_4 = M_{34}\,\tilde{P}_3$$

wherein $P_3$ denotes the coordinates of a data point in the robot body coordinate system, and $\tilde{P}_3 = (x_3, y_3, z_3, 1)^T$ is the homogeneous coordinate corresponding to $P_3$;

$P_4$ denotes the coordinates of the data point in the global positioning coordinate system corresponding to $P_3$, and $\tilde{P}_4 = (x_4, y_4, z_4, 1)^T$ is the homogeneous coordinate corresponding to $P_4$;

$M_{34} = \begin{pmatrix} R_{34} & t_{34} \\ 0 & 1 \end{pmatrix}$ is the transformation matrix describing the transformation from the robot body coordinate system to the global positioning coordinate system;

$R_{34}$ is the rotation matrix describing the rotation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map;

$t_{34} = (t'_x, t'_y, t'_z)^T$ is the translation vector describing the translation from the robot body coordinate system to the global positioning coordinate system, given by the localization result of the robot body in the global map.
4. The robot three-dimensional perception method based on the low-cost two-dimensional laser radar according to claim 3, characterized in that: the robot visible region R in step S2 consists of the sensing ranges of all the two-dimensional laser radars.
CN202011070061.7A 2020-09-30 2020-09-30 Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar Active CN112198491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011070061.7A CN112198491B (en) 2020-09-30 2020-09-30 Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011070061.7A CN112198491B (en) 2020-09-30 2020-09-30 Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar

Publications (2)

Publication Number Publication Date
CN112198491A CN112198491A (en) 2021-01-08
CN112198491B true CN112198491B (en) 2023-06-09

Family

ID=74013643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011070061.7A Active CN112198491B (en) 2020-09-30 2020-09-30 Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar

Country Status (1)

Country Link
CN (1) CN112198491B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106199626A (en) * 2016-06-30 2016-12-07 上海交通大学 Based on the indoor three-dimensional point cloud map generation system and the method that swing laser radar
CN107167141A (en) * 2017-06-15 2017-09-15 同济大学 Robot autonomous navigation system based on double line laser radars
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method
CN110095786A (en) * 2019-04-30 2019-08-06 北京云迹科技有限公司 Three-dimensional point cloud based on a line laser radar ground drawing generating method and system
CN110244322A (en) * 2019-06-28 2019-09-17 东南大学 Pavement construction robot environment sensory perceptual system and method based on Multiple Source Sensor
CN110349221A (en) * 2019-07-16 2019-10-18 北京航空航天大学 A kind of three-dimensional laser radar merges scaling method with binocular visible light sensor
CN110658530A (en) * 2019-08-01 2020-01-07 北京联合大学 Map construction method and system based on double-laser-radar data fusion and map
CN111259807A (en) * 2020-01-17 2020-06-09 中国矿业大学 Underground limited area mobile equipment positioning system
CN111596659A (en) * 2020-05-14 2020-08-28 福勤智能科技(昆山)有限公司 Automatic guided vehicle and system based on Mecanum wheels

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10838065B2 (en) * 2017-01-26 2020-11-17 The Regents Of The University Of Michigan Localization using 2D maps which capture vertical structures in 3D point data

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106199626A (en) * 2016-06-30 2016-12-07 上海交通大学 Based on the indoor three-dimensional point cloud map generation system and the method that swing laser radar
CN107167141A (en) * 2017-06-15 2017-09-15 同济大学 Robot autonomous navigation system based on double line laser radars
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method
CN110095786A (en) * 2019-04-30 2019-08-06 北京云迹科技有限公司 Three-dimensional point cloud based on a line laser radar ground drawing generating method and system
CN110244322A (en) * 2019-06-28 2019-09-17 东南大学 Pavement construction robot environment sensory perceptual system and method based on Multiple Source Sensor
CN110349221A (en) * 2019-07-16 2019-10-18 北京航空航天大学 A kind of three-dimensional laser radar merges scaling method with binocular visible light sensor
CN110658530A (en) * 2019-08-01 2020-01-07 北京联合大学 Map construction method and system based on double-laser-radar data fusion and map
CN111259807A (en) * 2020-01-17 2020-06-09 中国矿业大学 Underground limited area mobile equipment positioning system
CN111596659A (en) * 2020-05-14 2020-08-28 福勤智能科技(昆山)有限公司 Automatic guided vehicle and system based on Mecanum wheels

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Joint calibration method for an integrated two-dimensional and three-dimensional vision sensing system; Li Lin; Zhang Xu; Tu Dawei; Chinese Journal of Scientific Instrument (Issue 11) *

Also Published As

Publication number Publication date
CN112198491A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
CN108868268B (en) Unmanned parking space posture estimation method based on point-to-surface distance and cross-correlation entropy registration
US10915673B2 (en) Device, method, apparatus, and computer-readable medium for solar site assessment
WO2022206978A1 (en) Roadside millimeter-wave radar calibration method based on vehicle-mounted positioning apparatus
CN108983248A (en) It is a kind of that vehicle localization method is joined based on the net of 3D laser radar and V2X
US20140278065A1 (en) System and Method for Distortion Correction in Three-Dimensional Environment Visualization
CN110068836A (en) A kind of laser radar curb sensory perceptual system of intelligent driving electric cleaning car
CN111862214B (en) Computer equipment positioning method, device, computer equipment and storage medium
CN114743021A (en) Fusion method and system of power transmission line image and point cloud data
WO2023226574A1 (en) Scanning and observation system for coal-mine mechanical arm
CN114674311B (en) Indoor positioning and mapping method and system
CN112733428A (en) Scanning attitude and coverage path planning method for optical measurement
CN113075686A (en) Cable trench intelligent inspection robot mapping method based on multi-sensor fusion
CN107941167B (en) Space scanning system based on unmanned aerial vehicle carrier and structured light scanning technology and working method thereof
CN113296120B (en) Obstacle detection method and terminal
Zeng et al. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud
CN112381873B (en) Data labeling method and device
CN117518196A (en) Motion compensation method, device, system, equipment and medium for laser radar
CN112198491B (en) Robot three-dimensional sensing system and method based on low-cost two-dimensional laser radar
CN111812659A (en) Iron tower posture early warning device and method based on image recognition and laser ranging
CN116027351A (en) Hand-held/knapsack type SLAM device and positioning method
CN116381718A (en) Mining intrinsic safety type explosion-proof laser radar slam measuring method and device
CN111197986A (en) Real-time early warning and obstacle avoidance method for three-dimensional path of unmanned aerial vehicle
CN116400349A (en) Calibration method of low-resolution millimeter wave radar and optical camera
CN116704112A (en) 3D scanning system for object reconstruction
US20220046221A1 (en) Generating textured three-dimensional meshes using two-dimensional scanner and panoramic camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 510000 201, building a, No.19 nanxiangsan Road, Huangpu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU SAITE INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: 510000 Room 303, 36 Kaitai Avenue, Huangpu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU SAITE INTELLIGENT TECHNOLOGY Co.,Ltd.

GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210108

Assignee: Beijing Zhisaineng Technology Co.,Ltd.

Assignor: GUANGZHOU SAITE INTELLIGENT TECHNOLOGY Co.,Ltd.

Contract record no.: X2024980005834

Denomination of invention: A low-cost 2D LiDAR based robot 3D perception system and its method

Granted publication date: 20230609

License type: Common License

Record date: 20240516