CN114488183A - Obstacle point cloud processing method, device and equipment and readable storage medium - Google Patents


Info

Publication number
CN114488183A
CN114488183A (application CN202111650398.XA)
Authority
CN
China
Prior art keywords
point cloud
road edge
obstacle
laser radar
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111650398.XA
Other languages
Chinese (zh)
Inventor
顾震江
梁健
赵文恺
刘文泽
林林庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uditech Co Ltd
Original Assignee
Uditech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uditech Co Ltd filed Critical Uditech Co Ltd
Priority claimed from CN202111650398.XA
Publication of CN114488183A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/93 Lidar systems specially adapted for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to group G01S17/00
    • G01S7/4802 Details using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/495 Counter-measures or counter-counter-measures using electronic or electro-optical means
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/29 Geographical information databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Electromagnetism (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an obstacle point cloud processing method, apparatus, and device, and a readable storage medium. The method comprises the following steps: determining the ground point cloud and the non-ground point cloud in a corrected point cloud sent by a laser radar; determining multi-line laser radar road edge points and solid-state blind-spot laser radar road edge points according to the ground point cloud and the non-ground point cloud; determining road edge obstacle information according to the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points; and removing the obstacle point cloud according to the road edge obstacle information. By eliminating obstacle information near the road edge, this technical scheme improves the accuracy of the point cloud map and solves the problem of low point cloud map accuracy.

Description

Obstacle point cloud processing method, device and equipment and readable storage medium
Technical Field
The invention relates to the technical field of radar data processing, and in particular to a method, apparatus, and device for processing an obstacle point cloud, and a readable storage medium.
Background
At present, outdoor point cloud map construction is mainly based on SLAM (Simultaneous Localization And Mapping) with fused optimization of multiple sensors such as a laser radar, an Inertial Measurement Unit (IMU), a wheel odometer, and a Global Navigation Satellite System (GNSS). However, during laser radar matching, obstacles on the road surface also produce point clouds containing corner points or plane points when the laser radar extracts features. These obstacle point clouds degrade the accuracy of laser radar feature matching and feature descriptor generation; moreover, when a dynamic obstacle point cloud appears, the point cloud map develops a smear, which lowers the accuracy of the point cloud map.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a method for processing a point cloud of an obstacle, aiming at solving the problem of low precision of a point cloud map.
In order to achieve the above object, the present invention provides a method for processing an obstacle point cloud, which includes:
determining ground point clouds and non-ground point clouds in corrected point clouds sent by a laser radar;
determining multi-line laser radar road edge points and solid-state blind-spot laser radar road edge points according to the ground point cloud and the non-ground point cloud;
determining road edge obstacle information according to the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points;
and removing the obstacle point cloud according to the road edge obstacle information.
Optionally, before the step of determining the corrected point cloud sent by the laser radar as the ground point cloud and the non-ground point cloud according to the ground height, the method further includes:
when receiving data sent by an inertial sensor, acquiring inertial sensor data corresponding to a current timestamp, and determining the high-frequency attitude, the angular velocity and the acceleration of the robot according to the inertial sensor data;
and performing motion compensation on the point cloud acquired by the laser radar according to the high-frequency attitude, the angular velocity and the acceleration to obtain the corrected point cloud.
Optionally, the step of determining the correction point cloud sent by the laser radar as the ground point cloud and the non-ground point cloud according to the ground height includes:
acquiring a ground height threshold;
and taking points in the corrected point cloud whose height is below the ground height threshold as the ground point cloud, and points whose height is greater than or equal to the ground height threshold as the non-ground point cloud.
Optionally, the step of determining the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points according to the ground point cloud and the non-ground point cloud includes:
acquiring the height difference between the ground and the road edge;
fitting a ground equation to the ground point cloud and the non-ground point cloud of the solid-state blind-spot laser radar through a random forest algorithm to obtain the solid-state blind-spot laser radar road edge points; and computing the road edge curvature from the ground point cloud and the non-ground point cloud of the multi-line laser radar, then determining the multi-line laser radar road edge points according to the road edge curvature and the height difference.
Optionally, the step of determining obstacle information according to the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points includes:
fitting the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points with a preset quadratic function to obtain a curved road edge function;
and determining the road edge obstacle information according to the curved road edge function.
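As an illustrative sketch (not taken from the patent), fitting pooled road edge points with a quadratic and using the fitted left/right curves as a roadway-boundary test could look like the following; the function names, the 2-D simplification, and the two-curb representation are assumptions:

```python
import numpy as np

def fit_curb_curve(curb_points):
    """Fit a quadratic y = a*x^2 + b*x + c through merged curb points.

    curb_points: (N, 2) array of (x, y) road-edge points pooled from both
    lidars. Returns the polynomial coefficients (a, b, c), highest first.
    """
    pts = np.asarray(curb_points, dtype=float)
    return np.polyfit(pts[:, 0], pts[:, 1], deg=2)

def inside_curb(point_xy, left_coeffs, right_coeffs):
    """A point lies on the roadway if it sits between the two fitted curbs."""
    x, y = point_xy
    y_left = np.polyval(left_coeffs, x)
    y_right = np.polyval(right_coeffs, x)
    lo, hi = min(y_left, y_right), max(y_left, y_right)
    return lo <= y <= hi
```

A point falling outside the band between the two curves would then be a candidate road edge obstacle under this toy model.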
Optionally, the step of removing the obstacle point cloud according to the road edge obstacle information includes:
determining the obstacle point cloud according to the road edge obstacle information;
and removing the obstacle point cloud indicated by the road edge obstacle information through an extended Kalman filtering method.
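One hedged way to realize the removal step — the extended-Kalman-filter association itself is not shown — is to cull points falling inside obstacle bounding boxes derived from the road edge obstacle information; the box representation and function name are assumptions for illustration:

```python
import numpy as np

def remove_obstacle_points(cloud, obstacle_boxes):
    """Drop points that fall inside any obstacle bounding box.

    cloud: (N, 3) array of xyz points.
    obstacle_boxes: list of (min_xyz, max_xyz) pairs derived from the
    road-edge obstacle information. Returns the filtered cloud.
    """
    cloud = np.asarray(cloud, dtype=float)
    keep = np.ones(len(cloud), dtype=bool)
    for lo, hi in obstacle_boxes:
        lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
        inside = np.all((cloud >= lo) & (cloud <= hi), axis=1)
        keep &= ~inside
    return cloud[keep]
```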
Optionally, after the step of removing the obstacle point cloud according to the road edge obstacle information, the method further includes:
acquiring feature points of the road edge, and determining each key frame of the robot according to the feature points;
determining the key frame pose of each key frame through loop detection;
and optimizing the key frame poses with a laser odometry factor, an inertial sensor pre-integration factor, a global navigation satellite system factor, and a closed-loop factor to determine an optimized point cloud map, the optimized point cloud map being the point cloud map with the obstacle point cloud removed.
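The factor-graph back end described above can be illustrated in a deliberately simplified 1-D form that is not taken from the patent: relative factors (odometry/pre-integration style), absolute factors (GNSS style), and loop-closure factors jointly constrain keyframe positions in one linear least-squares problem. All names and the linear encoding are assumptions:

```python
import numpy as np

def optimize_poses_1d(n, odom, absolute, loops):
    """Tiny linear 'factor graph' over 1-D keyframe positions x_0..x_{n-1}.

    odom:     list of (i, j, delta) — odometry-style relative factors
    absolute: list of (i, value)    — GNSS-style absolute factors
    loops:    list of (i, j, delta) — loop-closure relative factors
    Each factor becomes one row of a least-squares system A x = b.
    """
    rows, b = [], []
    for i, j, d in odom + loops:          # relative: x_j - x_i = d
        r = np.zeros(n)
        r[j], r[i] = 1.0, -1.0
        rows.append(r)
        b.append(d)
    for i, v in absolute:                 # absolute: x_i = v
        r = np.zeros(n)
        r[i] = 1.0
        rows.append(r)
        b.append(v)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(b), rcond=None)
    return x
```

A real back end optimizes SE(3) poses with a nonlinear solver; the toy shows only how the different factor types enter one joint optimization.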
In addition, to achieve the above object, the present invention provides an obstacle point cloud processing apparatus, including:
the point cloud extraction module is used for determining the corrected point cloud sent by the laser radar as a ground point cloud and a non-ground point cloud according to the ground height;
the road edge point determining module is used for determining the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points according to the ground point cloud and the non-ground point cloud;
the obstacle information determining module is used for determining the road edge obstacle information according to the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points;
and the obstacle point cloud removing module is used for removing the obstacle point cloud according to the road edge obstacle information.
In addition, in order to achieve the above object, the present invention further provides a processing apparatus of an obstacle point cloud, which includes a memory, a processor, and a processing program of an obstacle point cloud stored on the memory and operable on the processor, wherein the processing program of the obstacle point cloud when executed by the processor implements the steps of the processing method of the obstacle point cloud as described above.
Further, to achieve the above object, the present invention also provides a computer readable storage medium having stored thereon a processing program of an obstacle point cloud, which when executed by a processor, implements the steps of the processing method of an obstacle point cloud as described above.
The embodiment of the invention provides a method, apparatus, and device for processing an obstacle point cloud, and a computer-readable storage medium. The ground point cloud and the non-ground point cloud in the corrected point cloud sent by the laser radar are determined; the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points are determined from the ground and non-ground point clouds; the road edge obstacle information is then determined from these two sets of road edge points; finally, the obstacle point cloud is removed according to the road edge obstacle information. Eliminating obstacle information near the road edge in this way improves the accuracy of laser radar feature matching and feature descriptor generation, improves the accuracy of the point cloud map, and solves the problem of low point cloud map accuracy.
Drawings
Fig. 1 is a schematic diagram of a hardware architecture of a processing apparatus for an obstacle point cloud according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a first embodiment of a method for processing an obstacle point cloud according to the present invention;
FIG. 3 is a flowchart illustrating a processing method of an obstacle point cloud according to a second embodiment of the present invention;
fig. 4 is a detailed flowchart of step S10 in the third embodiment of the obstacle point cloud processing method according to the present invention;
fig. 5 is a detailed flowchart of step S20 in the fourth embodiment of the method for processing the obstacle point cloud of the present invention;
fig. 6 is a detailed flowchart of step S30 in the fifth embodiment of the method for processing the obstacle point cloud of the present invention;
fig. 7 is a schematic detailed flowchart of step S40 in the fourth embodiment of the method for processing an obstacle point cloud according to the present invention;
FIG. 8 is a flowchart illustrating an eighth embodiment of the obstacle point cloud processing method according to the present invention;
fig. 9 is a schematic diagram of an architecture of a processing apparatus for obstacle point clouds according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It is to be understood that the appended drawings illustrate exemplary embodiments of the invention, which may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As an implementation, the processing device of the obstacle point cloud may be as shown in fig. 1.
The embodiment of the invention relates to a processing device of an obstacle point cloud, which includes a processor 101 (e.g., a CPU), a memory 102, and a communication bus 103, where the communication bus 103 enables connection and communication between these components.
The memory 102 may be a high-speed RAM memory or a non-volatile memory (e.g., a disk memory). As shown in fig. 1, the memory 102, as a kind of computer-readable storage medium, may contain a processing program of the obstacle point cloud; and the processor 101 may be configured to invoke the processing program of the obstacle point cloud stored in the memory 102 and perform the following operations:
determining ground point clouds and non-ground point clouds in corrected point clouds sent by a laser radar;
determining multi-line laser radar road edge points and solid-state blind-spot laser radar road edge points according to the ground point cloud and the non-ground point cloud;
determining road edge obstacle information according to the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points;
and processing the obstacle point cloud according to the road edge obstacle information.
In an embodiment, the processor 101 may be configured to invoke a processing program of the obstacle point cloud stored in the memory 102 and perform the following operations:
when receiving data sent by an inertial sensor, acquiring inertial sensor data corresponding to a current timestamp, and determining the high-frequency attitude, the angular velocity and the acceleration of the robot according to the inertial sensor data;
and performing motion compensation on the point cloud acquired by the laser radar according to the high-frequency attitude, the angular velocity and the acceleration to obtain the corrected point cloud.
In an embodiment, the processor 101 may be configured to invoke a processing program of the obstacle point cloud stored in the memory 102 and perform the following operations:
acquiring a ground height threshold;
and taking points in the corrected point cloud whose height is below the ground height threshold as the ground point cloud, and points whose height is greater than or equal to the ground height threshold as the non-ground point cloud.
In an embodiment, the processor 101 may be configured to invoke a processing program of the obstacle point cloud stored in the memory 102 and perform the following operations:
acquiring the height difference between the ground and the road edge;
fitting a ground equation to the ground point cloud and the non-ground point cloud of the solid-state blind-spot laser radar through a random forest algorithm to obtain the solid-state blind-spot laser radar road edge points; and computing the road edge curvature from the ground point cloud and the non-ground point cloud of the multi-line laser radar, then determining the multi-line laser radar road edge points according to the road edge curvature and the height difference.
In an embodiment, the processor 101 may be configured to invoke a processing program of the obstacle point cloud stored in the memory 102 and perform the following operations:
fitting the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points with a preset quadratic function to obtain a curved road edge function;
and determining the road edge obstacle information according to the curved road edge function.
In an embodiment, the processor 101 may be configured to invoke a processing program of the obstacle point cloud stored in the memory 102 and perform the following operations:
determining an obstacle point cloud according to the road edge obstacle information;
and removing the obstacle point cloud indicated by the road edge obstacle information through an extended Kalman filtering method.
In an embodiment, the processor 101 may be configured to invoke a processing program of the obstacle point cloud stored in the memory 102 and perform the following operations:
acquiring characteristic points of the road edge, and determining each key frame of the robot according to the characteristic points;
determining the key frame pose of each key frame through loop detection;
and determining an optimized point cloud map by the key frame pose through a laser odometer factor, a pre-integration factor of an inertial sensor, a global navigation satellite system factor and a closed loop factor, wherein the optimized point cloud map is the point cloud map with the obstacle point cloud removed.
Based on the hardware architecture of the processing device of the obstacle point cloud based on the radar data processing technology, the embodiment of the processing method of the obstacle point cloud is provided.
SLAM can be divided into front-end processing and back-end optimization. Front-end processing performs distortion correction, feature extraction, and feature matching on the point cloud data acquired by the laser radar, generates key frames, and obtains pose estimates. Back-end optimization uses factor graph optimization: the IMU pre-integration factors, wheel odometer factors, GNSS factors, and loop detection factors are added to the factor graph, all key frame poses are optimized and updated, globally consistent poses are finally obtained, and the point cloud map is generated by stitching. First, the corresponding IMU data is found by timestamp, and each laser radar point cloud is motion-compensated (i.e., point cloud distortion correction) using the IMU's high-frequency attitude, angular velocity, and acceleration information to obtain the corrected point cloud. The ground point cloud and the non-ground point cloud in the corrected point cloud are then obtained according to the ground height. Based on the multi-line laser radar, the multi-line laser radar road edge points are extracted from the height difference between the road edge and the ground and the curvature difference between the ground points and the road edge points of each beam; the solid-state blind-spot laser radar road edge points are obtained by fitting the ground point cloud and the non-ground point cloud with a random forest algorithm. The two sets of road edge points are then fitted into a curved road edge with a quadratic function, and the obstacle point cloud on the road surface is removed from the non-ground point cloud according to the known road edge information.
Next, the feature points of each laser radar frame are computed: all feature points are divided into corner points and plane points according to their curvature, the corner points and plane points of the current laser radar frame are extracted and matched against the corner point map and plane point map of the local key frames, and the pose of the current frame is obtained by iterative optimization. In addition, the obstacle information extracted at the road edge is combined with the current frame pose through an EKF (Extended Kalman Filter) to generate key frames, and loop detection matches the current frame against all key frames within a certain range of the laser radar's historical key frames (closest in distance, farthest in time) to obtain the key frame poses. The wheel odometer factor, the IMU pre-integration factor, the GNSS factor, and the closed-loop factor are added to the key frame poses to build a factor graph; all key frame poses are optimized and updated to obtain globally consistent poses, which are finally stitched to generate the point cloud map.
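The corner/plane split by curvature described above resembles the LOAM-style smoothness heuristic; a minimal sketch follows, where the function name, thresholds, and the exact curvature formula are illustrative assumptions rather than the patent's method:

```python
import numpy as np

def classify_features(ring, k=5, corner_thresh=0.5, plane_thresh=0.05):
    """Split one lidar scan ring into corner and plane candidates.

    ring: (N, 3) points ordered by scan angle. The 'curvature' of point i
    is the squared norm of (sum of its 2k+1 neighborhood) - (2k+1)*p_i,
    which is near zero on flat/straight structure and large at corners.
    Returns (corner_indices, plane_indices).
    """
    ring = np.asarray(ring, dtype=float)
    n = len(ring)
    corners, planes = [], []
    for i in range(k, n - k):
        diff = ring[i - k:i + k + 1].sum(axis=0) - (2 * k + 1) * ring[i]
        c = float(np.dot(diff, diff))
        if c > corner_thresh:
            corners.append(i)
        elif c < plane_thresh:
            planes.append(i)
    return corners, planes
```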
Referring to fig. 2, in a first embodiment, the method for processing the obstacle point cloud includes the following steps:
step S10: determining ground point clouds and non-ground point clouds in corrected point clouds sent by a laser radar;
in this embodiment, the server obtains a correction point cloud sent by the laser radar, where the correction point cloud is a laser point cloud after distortion correction, and exemplarily, assuming that the output frequency of the laser radar is 10Hz, and each laser beam of the laser radar generates 2000 points by winding one turn, 2000 data need to be generated within 0.1 second, so as to perform motion compensation on the 2000 points, and eliminate an error generated by the laser radar in a motion process of the robot. And determining the corrected point cloud as the ground point cloud according to the height of the corrected point cloud from the ground, and taking the rest point clouds as non-ground point clouds.
Step S20: determining multi-line laser radar road edge points and solid-state blind-spot laser radar road edge points according to the ground point cloud and the non-ground point cloud;
In this embodiment, after the ground point cloud and the non-ground point cloud are determined, the road edge points of the road surface are determined from these two point clouds. Two different radars are used to obtain two sets of road edge points: a multi-line laser radar and a solid-state blind-spot laser radar. Compared with a single-line laser radar, a multi-line laser radar (e.g., a 16-line, 32-line, or 64-line radar) polls multiple laser emitters and perceives a full 360° range; the more beams the laser radar has, the better its detection of objects, and one frame of laser point cloud data is obtained after each polling cycle. The road edge points obtained from the two radars are processed and used as the reference for distinguishing the obstacle point cloud from the other necessary point clouds near the road edge.
Step S30: determining road edge obstacle information according to the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points;
In this embodiment, after the multi-line laser radar road edge points and the solid-state blind-spot laser radar road edge points are determined, the two radars are handled differently. Because the point cloud data acquired by the solid-state blind-spot laser radar is close-range and dense, the point cloud within a height range of the ground is intercepted, a ground equation is fitted with a random forest algorithm, and a segment of solid-state blind-spot laser radar road edge points is extracted from the ground and non-ground points. Because the point cloud data acquired by the multi-line laser radar is long-range and sparse, the multi-line laser radar road edge points are extracted from the height difference between the road edge and the ground and the curvature difference between the ground points and the road edge points of each beam. The road edge obstacle information is then determined from the two sets of road edge points.
Step S40: and removing the obstacle point cloud according to the road edge obstacle information.
In this embodiment, after the road edge obstacle information is determined, the obstacle information is removed from the known road edge information.
In the technical scheme provided by this embodiment, the ground point cloud and the non-ground point cloud are distinguished by the height of the corrected point cloud above the ground; the two point clouds are processed by two different laser radars to extract two sets of road edge points; the obstacle information within the road edge information is determined from these road edge points; and finally the obstacle information is eliminated, improving the accuracy of the point cloud map and solving the problem of low point cloud map accuracy.
Referring to fig. 3, in the second embodiment, based on the first embodiment, before the step S10, the method further includes:
step S50: when receiving data sent by an inertial sensor, acquiring inertial sensor data corresponding to a current timestamp, and determining the high-frequency attitude, the angular velocity and the acceleration of the robot according to the inertial sensor data;
step S60: and performing motion compensation on the point cloud acquired by the laser radar according to the high-frequency attitude, the angular velocity and the acceleration to obtain the corrected point cloud.
Optionally, this embodiment provides a method for determining the corrected point cloud. In this embodiment, the inertial sensor mounted on the robot acquires data such as the high-frequency attitude, angular velocity, and acceleration for the current timestamp, and motion compensation (global motion compensation or block motion compensation) is applied to the point cloud acquired by the laser radar according to these data.
In the technical scheme provided by this embodiment, motion-compensating the point cloud acquired by the laser radar with the inertial sensor eliminates the point cloud distortion caused by the robot's motion and improves the accuracy of the point cloud map.
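As a hedged sketch of the motion-compensation idea — assuming constant velocity over one ~0.1 s sweep and a yaw-only rotation, which is a simplification and not the patent's exact method — each point can be rewound to the sweep's reference time:

```python
import numpy as np

def motion_compensate(points, timestamps, velocity, angular_rate, t_ref):
    """Undistort one lidar sweep using constant-velocity IMU data.

    points: (N, 3) raw points; timestamps: (N,) capture time of each point;
    velocity: (3,) linear velocity; angular_rate: scalar yaw rate (rad/s).
    Each point is rewound to the reference time t_ref (sweep start).
    """
    pts = np.asarray(points, dtype=float)
    dt = np.asarray(timestamps, dtype=float) - t_ref
    # remove the translation the sensor accumulated since t_ref
    out = pts - dt[:, None] * np.asarray(velocity, dtype=float)
    # rotate each point back by the accumulated yaw
    yaw = -angular_rate * dt
    c, s = np.cos(yaw), np.sin(yaw)
    x, y = out[:, 0].copy(), out[:, 1].copy()
    out[:, 0] = c * x - s * y
    out[:, 1] = s * x + c * y
    return out
```

A production pipeline would instead integrate the IMU attitude/acceleration per point; the toy only shows the rewind-to-reference-time structure.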
Referring to fig. 4, in the third embodiment, based on the above embodiment, the step S10 includes:
step S11: acquiring a ground height threshold;
step S12: and taking the corrected cloud points smaller than the ground height threshold value in the corrected cloud points as ground point clouds, and taking the corrected cloud points larger than or equal to the ground height threshold value in the corrected cloud points as non-ground point clouds.
Optionally, this embodiment provides a method for determining the ground point cloud and the non-ground point cloud from the corrected point cloud. The ground height threshold may be determined automatically by the server, or stored by an operator as a preset file in the server's database. After the corrected point cloud is obtained, the server uses this threshold to separate the parts closer to and farther from the ground: points below the ground height threshold form the ground point cloud, and points at or above the threshold form the non-ground point cloud. Alternatively, the server may determine only the ground point cloud (points below the threshold) and directly take the remaining points of the corrected point cloud as the non-ground point cloud, so that the ground and non-ground parts of the corrected point cloud can be identified quickly.
In the technical scheme provided by this embodiment, setting a height threshold allows the ground and non-ground parts of the corrected point cloud to be separated quickly, which makes it convenient for the server to subsequently extract road edge points from the corrected point cloud.
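The threshold split above can be sketched in a few lines; the function name and the 0.15 m default threshold are illustrative assumptions:

```python
import numpy as np

def split_ground(cloud, ground_height_thresh=0.15):
    """Partition a corrected point cloud into ground / non-ground by z.

    Points below the threshold are ground; the rest are non-ground,
    matching the 'smaller than' / 'greater than or equal' rule above.
    Returns (ground_cloud, non_ground_cloud).
    """
    cloud = np.asarray(cloud, dtype=float)
    mask = cloud[:, 2] < ground_height_thresh
    return cloud[mask], cloud[~mask]
```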
Referring to fig. 5, in a fourth embodiment, based on the above embodiment, the step S20 includes:
step S21: acquiring the height difference between the ground and a road edge;
step S22: fitting a ground equation to the ground point cloud and the non-ground point cloud of the solid-state blind-spot-filling lidar through a random forest algorithm to obtain the road edge points of the solid-state blind-spot-filling lidar; and computing the road edge curvature from the ground point cloud and the non-ground point cloud of the multi-line lidar, and determining the multi-line lidar road edge points according to the road edge curvature and the height difference.
Optionally, the present embodiment provides a way to extract road edge points. After the ground point cloud and the non-ground point cloud are determined, the server acquires the height difference between the ground and the road edge; this height difference serves as a preset parameter for extracting road edge points. Once the height difference is determined, the solid-state blind-spot-filling lidar fits a ground equation to the ground point cloud and the non-ground point cloud through a random forest algorithm, and the road edge points so obtained are the road edge points of the solid-state blind-spot-filling lidar. The multi-line lidar, by contrast, produces sparse, widely spaced scan lines, and the curvature within each scan line differs sharply between ground points and road edge points; this embodiment therefore extracts road edge points from the point cloud data using the curvature together with the preset height difference, and takes them as the road edge points of the multi-line lidar. Since this extraction method is well established, it is not described in detail in this embodiment.
In the technical solution provided by this embodiment, the ground equation obtained by the solid-state blind-spot-filling lidar with the random forest algorithm determines the solid-state blind-spot-filling lidar road edge points, and the multi-line lidar road edge points are extracted according to the height difference and the curvature at the road edge, so that the server can automatically identify and generate road edge points from the ground point cloud and the non-ground point cloud.
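A minimal sketch of the two extraction paths described above. The patent names a random forest algorithm for fitting the ground equation; as a simpler stand-in, this sketch fits a plane by least squares, and it approximates the per-scan-line curvature by the distance of each point from the mean of its neighbors. All thresholds and function names are illustrative:

```python
import numpy as np

def fit_ground_plane(ground_points):
    """Least-squares fit of z = a*x + b*y + c to the ground point cloud
    (a stand-in for the patent's random-forest fit of the ground equation)."""
    A = np.c_[ground_points[:, 0], ground_points[:, 1], np.ones(len(ground_points))]
    coeffs, *_ = np.linalg.lstsq(A, ground_points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def scanline_curvature(scan_points, k=2):
    """Curvature proxy along one multi-line-lidar scan line: distance between
    a point and the mean of its k neighbors on each side."""
    n = len(scan_points)
    curv = np.zeros(n)
    for i in range(k, n - k):
        neigh = np.vstack([scan_points[i - k:i], scan_points[i + 1:i + 1 + k]])
        curv[i] = np.linalg.norm(scan_points[i] - neigh.mean(axis=0))
    return curv

def edge_points_by_curvature(scan_points, curv, height_diff, curv_thresh):
    """Keep points whose curvature and height step both look like a curb (step S22)."""
    z = scan_points[:, 2]
    mask = (curv > curv_thresh) & (z > height_diff * 0.5)
    return scan_points[mask]
```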
Referring to fig. 6, in a fifth embodiment, based on the above embodiment, the step S30 includes:
step S31: substituting the multi-line lidar road edge points and the solid-state blind-spot-filling lidar road edge points into a preset quadratic function to obtain a curved road edge function;
step S32: and determining the road edge obstacle information according to the curved road edge function.
Optionally, the present embodiment provides a way to determine the obstacle information among the road edge points. The two sets of road edge points are substituted into a preset quadratic function to determine the surface equation of the road edge; the surface equation contains the feature points of the road edge, and the obstacle information at the road edge can be determined from these feature points. For example, a section value is set, and the corner points and plane points within the current frame's section, obtained from the surface equation, are used as the obstacle information parameters.
In the technical solution provided by this embodiment, the multi-line lidar road edge points and the solid-state blind-spot-filling lidar road edge points are substituted into a preset quadratic function to obtain the curved road edge function, and the road edge obstacle information is determined from that function, so that the server can identify the obstacle information among the road edge points.
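To make the "preset quadratic function" step concrete, the sketch below reduces the curved road edge function to a 2D quadratic y = a*x^2 + b*x + c in the ground plane and flags points that deviate from it by more than a section tolerance as obstacle candidates. This is a simplified stand-in for the patent's surface equation, whose exact form is not given:

```python
import numpy as np

def fit_curb_quadratic(multiline_edge_pts, blindspot_edge_pts):
    """Fit a quadratic road-edge function to the merged edge points (step S31).
    The 'curved road edge function' is reduced here to y = a*x^2 + b*x + c in
    the ground plane; the patent's surface form would also carry z."""
    pts = np.vstack([multiline_edge_pts, blindspot_edge_pts])
    a, b, c = np.polyfit(pts[:, 0], pts[:, 1], deg=2)
    return a, b, c

def section_outliers(points, coeffs, section_tol):
    """Points farther than section_tol from the fitted curb within the current
    frame's section are flagged as obstacle candidates (step S32)."""
    a, b, c = coeffs
    residual = np.abs(points[:, 1] - (a * points[:, 0] ** 2 + b * points[:, 0] + c))
    return points[residual > section_tol]
```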
Referring to fig. 7, in the sixth embodiment, based on the above embodiment, the step S40 includes:
step S41: determining an obstacle point cloud according to the road edge obstacle information;
step S42: and removing the obstacle point cloud from the road edge obstacle information through an extended Kalman filtering method.
Optionally, the present embodiment provides a method of eliminating the obstacle point cloud. The obstacle information may include the coordinate parameters of the obstacles; the point cloud data belonging to obstacles is determined from these coordinates, and the obstacle points are then filtered out by EKF filtering. The EKF is the extension of Kalman filtering (KF) to the nonlinear case: the nonlinear system is linearized by a Taylor series expansion, and the signal is then filtered within the standard Kalman filtering framework. Because the EKF is an efficient recursive filter, substituting the road edge information into it for iterative optimization quickly removes the obstacle information from the road edge information, leaving the key frames of the updated, optimized road edge information.
In the technical solution provided by this embodiment, the obstacle point cloud that degrades the precision of the point cloud map is removed by using the EKF to strip the obstacle point cloud from the road edge information; at the same time, dynamic obstacles are prevented from leaving smears in the point cloud map, improving its precision.
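The EKF machinery described above can be sketched generically. The patent does not specify the state or measurement models, so the example below is a textbook predict/update step for arbitrary nonlinear f and h supplied with their Jacobians (the Taylor-linearization the text mentions); none of the names come from the patent:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended-Kalman-filter iteration: linearize the nonlinear models
    with their Jacobians, then run the standard Kalman predict/update."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For instance, with a static scalar state and the nonlinear measurement h(x) = x^2, repeated updates against a measurement of 4.0 drive the estimate toward 2.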
Referring to fig. 8, in a seventh embodiment, based on the above embodiments, the step S40 includes:
step S70: acquiring characteristic points of the road edge, and determining each key frame of the robot according to the characteristic points;
step S80: determining the key frame pose of each key frame through loop detection;
step S90: determining an optimized point cloud map from the key frame poses through the lidar odometry factor, the inertial sensor pre-integration factor, the global navigation satellite system factor, and the loop closure factor, wherein the optimized point cloud map is the point cloud map with the obstacle point cloud removed.
Optionally, the present embodiment provides a way to generate an optimized point cloud map after the obstacle point cloud has been removed. In this embodiment, after the obstacle information at the road edge is removed, the feature points of the road edge are extracted; the feature points may include the corner points and plane points of each road edge point. The corner points and plane points of the current lidar frame are extracted and matched against the corner-point map and plane-point map of the local key frames, and the pose of the current frame is obtained after iterative optimization. The lidar odometry marks the current lidar frame as a key frame whenever the robot has travelled a certain distance or rotated through a certain angle, and turns each key frame into a feature descriptor. Through loop closure detection, all key frames within a certain range of the matching frame (the historical lidar key frame that is closest in distance and farthest in time from the current lidar key frame) are matched to obtain the key frame poses, and the relative pose between two consecutive lidar key frames is pre-integrated from the angular velocity and acceleration. Finally, the lidar odometry factors, the pre-integration factors of the IMU, the GNSS factors, and the loop closure factors are added to a factor graph, and all key frame poses are optimized and updated with GTSAM (Georgia Tech Smoothing and Mapping), an open-source C++ library based on factor graphs, yielding globally consistent poses that are finally stitched together to generate the optimized point cloud map.
In the technical solution provided by this embodiment, the road edge points with the obstacle information removed are extracted and key frame poses are generated, and the factor graph is optimized and updated according to the factors associated with the key frame poses, so that the server can rebuild the point cloud map after the obstacle data has been removed.
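Of the pipeline above, the key frame selection rule ("a certain distance or a certain angle") is easy to make concrete; the factor-graph optimization itself would be delegated to a library such as GTSAM. The thresholds below are illustrative values, not ones stated in the patent:

```python
import numpy as np

def select_keyframes(poses, dist_thresh=1.0, angle_thresh=np.radians(15)):
    """Pick lidar key frames: a frame becomes a key frame once the robot has
    travelled dist_thresh metres or turned angle_thresh radians since the
    last key frame.

    poses: list of (x, y, yaw) tuples, one per lidar frame.
    Returns indices of the selected key frames (frame 0 always included)."""
    keyframes = [0]
    last = poses[0]
    for i, (x, y, yaw) in enumerate(poses[1:], start=1):
        d = np.hypot(x - last[0], y - last[1])
        # Wrap the yaw difference into (-pi, pi] before taking its magnitude.
        dyaw = abs((yaw - last[2] + np.pi) % (2 * np.pi) - np.pi)
        if d >= dist_thresh or dyaw >= angle_thresh:
            keyframes.append(i)
            last = (x, y, yaw)
    return keyframes
```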
In addition, referring to fig. 9, the present embodiment further provides a processing apparatus of an obstacle point cloud, including:
the point cloud extraction module 100 is used for dividing the corrected point cloud sent by the lidar into a ground point cloud and a non-ground point cloud according to ground height;
the road edge point determining module 200 is used for determining multi-line lidar road edge points and solid-state blind-spot-filling lidar road edge points according to the ground point cloud and the non-ground point cloud;
the obstacle information determining module 300 is configured to determine road edge obstacle information according to the multi-line lidar road edge points and the solid-state blind-spot-filling lidar road edge points;
and an obstacle point cloud removing module 400, configured to remove an obstacle point cloud according to the road edge obstacle information.
Optionally, the processing device of the obstacle point cloud may further perform the following steps:
when receiving data sent by an inertial sensor, acquiring inertial sensor data corresponding to a current timestamp, and determining the high-frequency attitude, the angular velocity and the acceleration of the robot according to the inertial sensor data;
and performing motion compensation on the point cloud acquired by the laser radar according to the high-frequency attitude, the angular velocity and the acceleration to obtain the corrected point cloud.
Optionally, the processing device of the obstacle point cloud may further perform the following steps:
acquiring a ground height threshold;
and taking the points of the corrected point cloud that are below the ground height threshold as the ground point cloud, and the points of the corrected point cloud that are at or above the ground height threshold as the non-ground point cloud.
Optionally, the processing device of the obstacle point cloud may further perform the following steps:
acquiring the height difference between the ground and the road edge;
fitting a ground equation to the ground point cloud and the non-ground point cloud of the solid-state blind-spot-filling lidar through a random forest algorithm to obtain the road edge points of the solid-state blind-spot-filling lidar; and computing the road edge curvature from the ground point cloud and the non-ground point cloud of the multi-line lidar, and determining the multi-line lidar road edge points according to the road edge curvature and the height difference.
Optionally, the processing device of the obstacle point cloud may further perform the following steps:
substituting the multi-line lidar road edge points and the solid-state blind-spot-filling lidar road edge points into a preset quadratic function to obtain a curved road edge function;
and determining the road edge obstacle information according to the curved road edge function.
Optionally, the processing device of the obstacle point cloud may further perform the following steps:
determining the obstacle point cloud according to the road edge obstacle information;
and removing the obstacle point cloud from the road edge obstacle information through an extended Kalman filtering method.
Optionally, the processing device of the obstacle point cloud may further perform the following steps:
acquiring characteristic points of the road edge, and determining each key frame of the robot according to the characteristic points;
determining the key frame pose of each key frame through loop detection;
and determining an optimized point cloud map from the key frame poses through the lidar odometry factor, the inertial sensor pre-integration factor, the global navigation satellite system factor, and the loop closure factor, wherein the optimized point cloud map is the point cloud map with the obstacle point cloud removed.
In addition, the present invention also provides an obstacle point cloud processing device, comprising a memory, a processor, and an obstacle point cloud processing program stored on the memory and executable on the processor; when the obstacle point cloud processing program is executed by the processor, the steps of the obstacle point cloud processing method described above are implemented.
Furthermore, the present invention also provides a computer-readable storage medium storing a processing program of an obstacle point cloud, which when executed by a processor implements the steps of the processing method of an obstacle point cloud as described in the above embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above, and includes several instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A processing method of an obstacle point cloud is characterized by comprising the following steps:
determining a ground point cloud and a non-ground point cloud in the corrected point cloud sent by a lidar;
determining multi-line lidar road edge points and solid-state blind-spot-filling lidar road edge points according to the ground point cloud and the non-ground point cloud;
determining road edge obstacle information according to the multi-line lidar road edge points and the solid-state blind-spot-filling lidar road edge points;
and removing the obstacle point cloud according to the road edge obstacle information.
2. The method of processing an obstacle point cloud of claim 1, wherein before the step of determining the ground point cloud and the non-ground point cloud in the corrected point cloud sent by the lidar, the method further comprises:
when receiving data sent by an inertial sensor, acquiring inertial sensor data corresponding to a current timestamp, and determining the high-frequency attitude, the angular velocity and the acceleration of the robot according to the inertial sensor data;
and performing motion compensation on the point cloud acquired by the laser radar according to the high-frequency attitude, the angular velocity and the acceleration to obtain the corrected point cloud.
3. The method of processing an obstacle point cloud of claim 1, wherein the step of determining the ground point cloud and the non-ground point cloud in the corrected point cloud sent by the lidar comprises:
acquiring a ground height threshold;
and taking the points of the corrected point cloud that are below the ground height threshold as the ground point cloud, and the points of the corrected point cloud that are at or above the ground height threshold as the non-ground point cloud.
4. The method of processing an obstacle point cloud according to claim 1, wherein the step of determining multi-line lidar road edge points and solid-state blind-spot-filling lidar road edge points from the ground point cloud and the non-ground point cloud comprises:
acquiring the height difference between the ground and the road edge;
fitting a ground equation to the ground point cloud and the non-ground point cloud of the solid-state blind-spot-filling lidar through a random forest algorithm to obtain the road edge points of the solid-state blind-spot-filling lidar;
and computing the road edge curvature from the ground point cloud and the non-ground point cloud of the multi-line lidar, and determining the multi-line lidar road edge points according to the road edge curvature and the height difference.
5. The method of processing an obstacle point cloud of claim 1, wherein the step of determining road edge obstacle information from the multi-line lidar road edge points and the solid-state blind-spot-filling lidar road edge points comprises:
substituting the multi-line lidar road edge points and the solid-state blind-spot-filling lidar road edge points into a preset quadratic function to obtain a curved road edge function;
and determining the road edge obstacle information according to the curved road edge function.
6. The method of processing an obstacle point cloud of claim 1, wherein the step of removing the obstacle point cloud according to the road edge obstacle information comprises:
determining an obstacle point cloud according to the road edge obstacle information;
and removing the obstacle point cloud from the road edge obstacle information through an extended Kalman filtering method.
7. The method of processing an obstacle point cloud of claim 1, wherein after the step of removing the obstacle point cloud according to the road edge obstacle information, further comprising:
acquiring characteristic points of the road edge, and determining each key frame of the robot according to the characteristic points;
determining the key frame pose of each key frame through loop detection;
and determining an optimized point cloud map from the key frame poses through the lidar odometry factor, the inertial sensor pre-integration factor, the global navigation satellite system factor, and the loop closure factor, wherein the optimized point cloud map is the point cloud map with the obstacle point cloud removed.
8. An obstacle point cloud processing device, comprising:
the point cloud extraction module, used for dividing the corrected point cloud sent by the lidar into a ground point cloud and a non-ground point cloud according to ground height;
the road edge point determining module, used for determining multi-line lidar road edge points and solid-state blind-spot-filling lidar road edge points according to the ground point cloud and the non-ground point cloud;
the obstacle information determining module, used for determining road edge obstacle information according to the multi-line lidar road edge points and the solid-state blind-spot-filling lidar road edge points;
and the obstacle point cloud removing module is used for removing the obstacle point cloud according to the road edge obstacle information.
9. An obstacle point cloud processing apparatus, characterized by comprising: memory, processor and processing program of an obstacle point cloud stored on the memory and executable on the processor, which processing program of an obstacle point cloud when executed by the processor implements the steps of the method of processing an obstacle point cloud as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a processing program of an obstacle point cloud, which when executed by a processor implements the steps of the processing method of an obstacle point cloud according to any one of claims 1 to 7.
CN202111650398.XA 2021-12-29 2021-12-29 Obstacle point cloud processing method, device and equipment and readable storage medium Pending CN114488183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111650398.XA CN114488183A (en) 2021-12-29 2021-12-29 Obstacle point cloud processing method, device and equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111650398.XA CN114488183A (en) 2021-12-29 2021-12-29 Obstacle point cloud processing method, device and equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114488183A true CN114488183A (en) 2022-05-13

Family

ID=81508860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111650398.XA Pending CN114488183A (en) 2021-12-29 2021-12-29 Obstacle point cloud processing method, device and equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114488183A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115273068A (en) * 2022-08-02 2022-11-01 湖南大学无锡智能控制研究院 Laser point cloud dynamic obstacle removing method and device and electronic equipment
CN115273068B (en) * 2022-08-02 2023-05-12 湖南大学无锡智能控制研究院 Laser point cloud dynamic obstacle removing method and device and electronic equipment
WO2024027587A1 (en) * 2022-08-02 2024-02-08 湖南大学无锡智能控制研究院 Dynamic obstacle removing method and apparatus for laser point cloud, and electronic device
CN115855095A (en) * 2022-12-01 2023-03-28 酷黑科技(北京)有限公司 Autonomous navigation method and device and electronic equipment
CN116592888A (en) * 2023-05-08 2023-08-15 五八智能科技(杭州)有限公司 Global positioning method, system, device and medium for patrol robot

Similar Documents

Publication Publication Date Title
CN114488183A (en) Obstacle point cloud processing method, device and equipment and readable storage medium
KR102581263B1 (en) Method, apparatus, computing device and computer-readable storage medium for positioning
CN110178048B (en) Method and system for generating and updating vehicle environment map
CN109324337B (en) Unmanned aerial vehicle route generation and positioning method and device and unmanned aerial vehicle
CN111812658B (en) Position determination method, device, system and computer readable storage medium
CN109270545B (en) Positioning true value verification method, device, equipment and storage medium
KR102628778B1 (en) Method and apparatus for positioning, computing device, computer-readable storage medium and computer program stored in medium
US20150142248A1 (en) Apparatus and method for providing location and heading information of autonomous driving vehicle on road within housing complex
JP4984659B2 (en) Own vehicle position estimation device
CN111986261B (en) Vehicle positioning method and device, electronic equipment and storage medium
US9082008B2 (en) System and methods for feature selection and matching
JP2018124787A (en) Information processing device, data managing device, data managing system, method, and program
CN110889808A (en) Positioning method, device, equipment and storage medium
CN111768489B (en) Indoor navigation map construction method and system
CN114111774B (en) Vehicle positioning method, system, equipment and computer readable storage medium
CN114264301B (en) Vehicle-mounted multi-sensor fusion positioning method, device, chip and terminal
CN112859110B (en) Positioning navigation method based on three-dimensional laser radar
CN110243364B (en) Unmanned aerial vehicle course determining method and device, unmanned aerial vehicle and storage medium
CN112578781B (en) Data processing method, device, chip system and medium
CN112036274A (en) Driving region detection method and device, electronic equipment and storage medium
CN117367412B (en) Tightly-coupled laser inertial navigation odometer integrating bundle set adjustment and map building method
CN114777770A (en) Robot positioning method, device, control terminal and readable storage medium
CN113687336A (en) Radar calibration method and device, electronic equipment and medium
CN117906598B (en) Positioning method and device of unmanned aerial vehicle equipment, computer equipment and storage medium
CN114660589B (en) Method, system and device for positioning underground tunnel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination