CN116434183B - Road static environment description method based on multipoint cloud collaborative fusion - Google Patents

Road static environment description method based on multipoint cloud collaborative fusion

Info

Publication number
CN116434183B
Authority
CN
China
Prior art keywords
point cloud
laser
ground
laser sensor
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310215469.6A
Other languages
Chinese (zh)
Other versions
CN116434183A (en)
Inventor
华炜
张楚润
高海明
马也驰
张顺
沈峥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202310215469.6A
Publication of CN116434183A
Application granted
Publication of CN116434183B

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/87Combinations of systems using electromagnetic waves other than radio waves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a road static environment description method based on multipoint cloud collaborative fusion, which comprises the following steps: collecting road environment point clouds with several multi-line laser sensors; performing time synchronization and spatial alignment on the point clouds; performing ground segmentation collaboratively across the point clouds and extracting the obstacle point cloud; fusing the results of the multi-point-cloud ray tracing models and updating a local probability grid map; and converting the local probability grid map into a binary map from which the obstacle contours are extracted, simplified and segmented. Because the ground is segmented collaboratively across multiple point clouds, it can be segmented more accurately and the obstacle point cloud extracted more reliably. In addition, the use of several multi-line laser sensors gives a small blind area and a wide detection range, so that static obstacles around the unmanned vehicle can be detected over the full 360°, overcoming the large blind area of detection with a single laser sensor.

Description

Road static environment description method based on multipoint cloud collaborative fusion
Technical Field
The invention relates to the field of perception of static environments around unmanned vehicles, in particular to a road static environment description method based on multipoint cloud collaborative fusion.
Background
Research on ground unmanned vehicles has broad application prospects in autonomous driving vehicles, unmanned disinfection vehicles, unmanned meal-delivery vehicles and the like. A ground unmanned vehicle must plan its motion around the obstacles in its surroundings to avoid accidents such as collisions and scrapes. Obstacles such as pedestrians, bicycles, cars and traffic cones have regular appearance, size and other characteristics and can be regarded as regular obstacles. Real roads, however, contain not only regular obstacles but also many irregular obstacles whose shapes and characteristics are difficult to describe uniformly, such as long trailers, forklifts, parcel boxes of various shapes, long barriers in the middle of the road, discarded tires, fallen trees and other abnormal protrusions from the ground.
A deep-learning-based target detection method learns the characteristics of various obstacles from a training set, so that obstacles encountered while the real vehicle is driving can be detected by inference. Such a method recognizes the obstacle classes that appear in the training set well, but performs poorly on classes that appear rarely or not at all in the training set, and may even fail on certain variations of classes that are in the training set, such as a pedestrian squatting on the road or a traffic cone lying on its side. In addition, whenever target detection is needed for some special scene, data of that scene must be collected and labelled and the model retrained on the enlarged training set, which costs considerable labor and time. In practice, in the field of unmanned vehicles, common regular obstacles such as pedestrians, bicycles, cars, traffic cones and common signboards can be detected well by deep-learning-based target detection, but for irregular obstacles in complex environments such methods are largely ineffective. During the operation of an unmanned vehicle, missed detection of an irregular obstacle poses a serious safety hazard, so an efficient and accurate general obstacle detection method is needed as a fallback. A general obstacle detection method is not affected by the shape, size or appearance of the obstacle and can detect targets directly from the data acquired by the unmanned vehicle. In the field of autonomous driving, existing general obstacle detection methods mainly include: multi-sensor fusion of ultrasonic radar, millimeter-wave radar and lidar; drivable-region segmentation and conventional computer vision methods applied to images; and general obstacle detection based on occupancy grid maps.
Although obstacle detection for ground unmanned vehicles has developed greatly, perception methods that accurately describe the road static environment in real time still leave much room for improvement. For static obstacles of various forms that affect the driving of the unmanned vehicle, deep-learning-based target detection often ignores obstacle classes that are not in the training set, causing missed detections. General obstacle detection based on multi-sensor fusion must preprocess (for example, filter) the data of each sensor separately and additionally perform existence reasoning to decide whether an obstacle is really present, and the millimeter-wave radar easily introduces false detections. General obstacle detection based on image drivable-region segmentation has only moderate stability and generalization, is prone to missed and false detections, and estimates obstacle positions inaccurately. The traditional general obstacle detection method based on occupancy grid maps occupies a large amount of memory and has an excessive computational cost, making it difficult to deploy on a real vehicle. General obstacle detection with a single laser sensor suffers from a large blind area and from problems such as the updating and retention of the occupancy state of static rotating obstacles.
Disclosure of Invention
To address the defects of the prior art, the invention provides a road static environment description method based on multipoint cloud collaborative fusion, with high real-time performance and good robustness. The specific technical scheme is as follows:
a road static environment description method based on multipoint cloud collaborative fusion comprises the following steps:
step one: installing and calibrating a plurality of multi-line laser sensors on the body of the unmanned vehicle, and ensuring that the installed multi-line laser sensors complete full coverage of 360-degree environment around the unmanned vehicle;
step two: acquiring point cloud information of the road environment around the unmanned vehicle by using the calibrated multi-line laser sensor in the first step, performing time synchronization and space alignment on the plurality of point cloud information, and filtering by using the self-vehicle point cloud to obtain a current laser scanning point cloud array;
step three: performing ground segmentation on the current laser scanning point cloud array obtained in step two by means of multi-point-cloud collaboration, and converting it into the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array corresponding to the current frame;
step four: combining the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array obtained in the third step with pose information of a vehicle to construct a local probability grid map; then fusing the results of the multi-point cloud ray tracing model to update the local probability grid map;
step five: and (3) according to the local probability grid map obtained in the step four, obtaining a binary map describing the occupation information through binarization, denoising the binary map through morphological operation, and finally extracting, simplifying and segmenting the obstacle outline.
In the second step, after time synchronization and space alignment are performed on the plurality of point cloud information, self-vehicle point cloud filtering and remote point cloud filtering are performed, so that a current laser scanning point cloud array is obtained.
Further, the step one includes:
installing and calibrating a multi-line laser sensor on a vehicle body as a main laser sensor;
and respectively installing and calibrating a multi-line laser sensor at other n positions of the vehicle body as an auxiliary laser sensor.
Further, the second step includes the following sub-steps:
(2.1) acquiring the time stamp of the main laser sensor as the current frame time stamp, and storing the corresponding point cloud into the current laser scanning point cloud array; then, according to the current frame time stamp, searching the point cloud information of all auxiliary laser sensors in turn: if an auxiliary laser sensor has a point cloud whose time difference is smaller than a set threshold t_min, that point cloud is stored into the current laser scanning point cloud array; if not, an empty point cloud containing 0 points is stored into the current laser scanning point cloud array;
and (2.2) converting the point cloud information of the auxiliary laser sensor in the current laser scanning point cloud array to a coordinate system of the main laser sensor, and performing self-vehicle point cloud filtering and remote point cloud filtering on the point cloud information in the current laser scanning point cloud array by utilizing rectangular filtering.
Further, the third step comprises the following sub-steps:
(3.1) respectively taking the positions of the laser sensors as circle centers, constructing a sector grid map, and sequentially projecting each laser point in the current laser scanning point cloud array into a corresponding sector grid; finding out the laser point with the minimum z value in each fan-shaped grid, and extracting the ground by using a region growing method to obtain a ground expression; then, calculating the height difference between all laser points and the ground expression of the area where the laser points are located, and taking the height difference as the initial ground clearance height;
(3.2) for the point cloud scanned by the main laser sensor, sequentially converting coordinates of all laser points and projecting the converted coordinates into a sector grid map corresponding to the auxiliary laser sensor, and then calculating the height difference according to the ground expression of the area; thereby, each laser point scanned by the main laser sensor obtains (n+1) height differences; different weights are given to the height differences, and the final ground clearance height is calculated;
for the point cloud scanned by the auxiliary laser sensor, sequentially converting coordinates of all laser points and projecting the converted coordinates into fan-shaped grid maps corresponding to other auxiliary laser sensors, and calculating a height difference according to a ground expression of the area; thereby, the laser point of each auxiliary laser sensor obtains n height differences; different weights are given to the height differences, and the final ground clearance height is calculated;
(3.3) assigning each laser point a label attribute according to a given ground-clearance threshold d_max, and removing high-altitude points according to a given altitude threshold h_max; dividing each group of obstacle points and ground points into k obstacle point queues and k ground point queues according to the set angular resolution α, and then selecting the nearest obstacle point and the farthest ground point of each queue with a sorting algorithm to obtain the final two-dimensional obstacle point cloud array and two-dimensional ground point cloud array.
Further, the fourth step comprises the following sub-steps:
(4.1) according to the current pose information of the vehicle, calculating the pose information of each laser sensor in a vehicle body coordinate system through coordinate transformation, and converting the point coordinates in the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array into the vehicle body coordinate system; the center of the current vehicle body is taken as an origin, and the center position of the local probability grid map is determined according to a given offset;
(4.2) for each laser sensor, using its position in the vehicle body coordinate system as the starting point and, in turn, the positions of the points of its two-dimensional obstacle point cloud and two-dimensional ground point cloud in the vehicle body coordinate system as end points, updating the probabilities of the unoccupied and occupied areas with the ray tracing model; during the update, if the ray tracing models of several laser sensors perform an occupancy operation on the same grid cell, its probability is increased by the corresponding number of occupancy factors r_occupy; if the ray tracing models of several laser sensors perform a non-occupancy operation on the same grid cell, its probability is decreased by the corresponding number of non-occupancy factors r_free; the local probability grid map fusing the n point cloud ray tracing results is finally obtained.
Further, the fifth step comprises the following sub-steps:
(5.1) converting the local probability grid map into a binary map by using a binarization operation, and removing noise by using an opening and closing operation of image morphology;
and (5.2) acquiring outline information of the occupied area by using an edge extraction method, performing edge smoothing, and then simplifying and dividing an outline polygon to be used as static environment information for describing a road around the unmanned vehicle.
Further, the main laser sensor is arranged at the middle position of the top of the vehicle head, and the number of the auxiliary laser sensors is 3, and the auxiliary laser sensors are respectively arranged at the middle positions of the left side of the vehicle head, the right side of the vehicle head and the right side of the vehicle tail.
Further, in the step (4.1), the given offset amount in front of the vehicle is larger than the given offset amount in rear of the vehicle, so that the static environment information of the front area of the unmanned vehicle is more emphasized.
Further, the installation position of the auxiliary laser sensor is lower than that of the main laser sensor, so that ground information around the unmanned vehicle is easier to acquire, and the blind area range is reduced.
According to the road static environment description method based on multipoint cloud collaborative fusion, ground segmentation is performed collaboratively across the multiple point clouds and the obstacle point cloud is extracted. The results of the multi-point-cloud ray tracing models are then fused to obtain a local probability grid map updated in real time. Finally, a set of polygons describing the road static environment is obtained by image-based processing. Compared with the prior art, the beneficial effects are as follows:
(1) The road static environment description method provided by the invention uses a reasonable and easily understood algorithm, and the number of laser sensors, the calibration method, the obstacle contour extraction, the polygon segmentation and other operations can be changed flexibly according to the specific situation.
(2) Compared with a target detection method based on deep learning, the method provided by the invention has good detection rate on regular obstacles, has obvious advantages on the detection effect of irregular obstacles, and does not need to spend a great deal of manpower and time for data acquisition labeling and model training.
(3) Compared with general obstacle detection methods based on multi-sensor fusion, the method uses only one kind of sensor, namely laser sensors, so separate data processing and obstacle fusion operations for heterogeneous sensors are not required.
(4) Compared with general obstacle detection methods based on image drivable-region segmentation, the method provided by the invention has good stability and generalization, produces fewer missed and false detections, and can estimate the position of an obstacle accurately.
(5) Compared with the traditional general obstacle detection method based on the occupied grid map, the method has the advantages of lower memory occupation and calculation cost, easy deployment in a real-vehicle environment, small time delay and capability of describing the road static environment in real time.
(6) Compared with general obstacle detection using a single laser sensor, the method segments the ground collaboratively across multiple point clouds, so the ground can be segmented more accurately and the obstacle point cloud extracted more reliably. In addition, the use of several multi-line laser sensors gives a small blind area and a wide detection range, so that static obstacles around the unmanned vehicle can be detected over the full 360°, overcoming the large blind area of detection with a single laser sensor.
Drawings
FIG. 1 is a flow chart of a method for describing static road environment in the present invention.
Fig. 2 is a schematic diagram of the relative pose transformation relationship of each laser sensor.
Fig. 3 is an example of a multipoint cloud coverage area and a spatiotemporal alignment effect.
Fig. 4 is a static environment description example obtained by the coordinated fusion of multiple point clouds.
Detailed Description
The following describes specific embodiments of the present invention in detail with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
A flow chart of the road static environment description method based on multipoint cloud collaborative fusion in one embodiment of the invention is shown in Fig. 1. First, ground segmentation is performed collaboratively across the multiple point clouds to extract the obstacle point cloud. Then, the results of the multi-point-cloud ray tracing models are fused to obtain a probability grid map updated in real time. Finally, a set of polygons describing the road static environment is obtained by image-based processing. The method specifically comprises the following steps:
step one: and installing and calibrating a plurality of multi-line laser sensors on the body of the unmanned vehicle, and ensuring that the installed multi-line laser sensors complete full coverage of 360-degree environment around the unmanned vehicle. Fig. 2 shows a schematic diagram of the relative pose transformation relationship of each laser sensor. The method specifically comprises the following substeps:
(1.1) installing and calibrating a multi-line laser sensor on a vehicle body as a main laser sensor;
in the running process of the unmanned vehicle, the influence of the front obstacle information on the planning control is higher than that of the left side, the right side and the rear side. The main laser sensor can be arranged at the middle position of the top of the vehicle head and used for collecting denser and farther-distance point cloud information, and the static environment description range of the unmanned vehicle is ensured to meet the actual driving requirement. The calibration of the main laser sensor mainly refers to the process of obtaining the coordinate conversion relation from the main laser sensor to the integrated navigation. In this embodiment, the calibration of the main laser sensor is performed by using the classical hand-eye calibration method ax=xb. Wherein X is the coordinate transformation matrix from the main laser sensor to the integrated navigation, A is the coordinate transformation of the two movements of the main laser sensor, and B is the coordinate transformation of the two movements of the integrated navigation. And the coordinate conversion relation X from the main laser sensor to the integrated navigation can be solved by using a least square method.
(1.2) installing and calibrating the multi-line laser sensors at other n positions of the vehicle body to serve as auxiliary laser sensors.
With the main laser sensor mounted at the middle of the top of the vehicle head, the need of the unmanned vehicle to observe the area in front of the vehicle is satisfied, but many of its beams toward the left, right and rear are reflected back by the vehicle body, leaving large blind areas on the left, right and rear sides of the unmanned vehicle. Laser sensors can therefore be mounted at the front left, front right and rear right of the vehicle as auxiliary sensors, at a height lower than the main laser sensor, to collect point clouds closer to the unmanned vehicle, reduce the blind area and detect the ground more accurately. Because the auxiliary laser sensors are mounted low and are mainly used to cover the blind areas around the unmanned vehicle, they can have fewer beams than the main laser sensor, which reduces cost. Calibrating the auxiliary laser sensors mainly means obtaining the transformations between the laser sensors. This embodiment uses the Generalized Iterative Closest Point (GICP) point cloud registration method to solve the transformation from each auxiliary laser sensor to the main laser sensor. For each auxiliary laser sensor, with the vehicle stationary, the main laser sensor and that auxiliary laser sensor simultaneously acquire the environment around the unmanned vehicle. A rough transformation determined from the mounting positions and angles of the two sensors is used as the initial value of the calibration. Then, point cloud pairs whose time stamps differ by less than the time threshold t_min are selected, the point cloud of the main laser sensor is taken as the target cloud and that of the auxiliary laser sensor as the matching cloud, and the coordinate transformation from the auxiliary laser sensor to the main laser sensor is solved with GICP, combined with the pose information of the vehicle.
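Assuming the Open3D library (which provides a GICP implementation) is available, this registration step could be sketched as follows; the correspondence distance, iteration count and function name are illustrative assumptions rather than values specified by the patent.

```python
import numpy as np
import open3d as o3d

def calibrate_aux_to_main(aux_points, main_points, T_init, max_corr_dist=1.0):
    """Estimate the auxiliary-to-main extrinsic transform with Generalized ICP (GICP).

    aux_points, main_points: (N, 3) arrays captured while the vehicle is stationary.
    T_init: 4x4 initial guess derived from the nominal mounting positions and angles.
    """
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(aux_points))   # matching cloud
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(main_points))  # target cloud

    result = o3d.pipelines.registration.registration_generalized_icp(
        source, target, max_corr_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationForGeneralizedICP(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))
    return result.transformation  # 4x4 transform: auxiliary frame -> main frame
```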
Step two: using the multi-line laser sensors calibrated in step one, acquire point clouds of the road environment around the unmanned vehicle, perform time synchronization and spatial alignment on the point clouds, and filter out the ego-vehicle points to obtain the current laser scanning point cloud array. Fig. 3 illustrates the multi-point-cloud coverage area and the spatio-temporal alignment effect: it shows the point cloud of the main laser sensor, the point clouds of the left, right and rear blind-area-compensation laser sensors, and the fused point cloud obtained after time synchronization and spatial alignment of these 4 multi-line laser sensors. As can be seen from Fig. 3, the main laser sensor has large blind areas on the left, right and rear sides of the vehicle body, and the several blind-area-compensation laser sensors are installed to cover these blind areas.
The method specifically comprises the following substeps:
(2.1) The time stamp of the main laser sensor is acquired as the current frame time stamp, and the corresponding point cloud is stored into the current laser scanning point cloud array. Then, according to the current frame time stamp, the point cloud information of all auxiliary laser sensors is searched in turn: if an auxiliary laser sensor has a point cloud whose time difference is smaller than the set threshold t_min, that point cloud is stored into the current laser scanning point cloud array; if not, an empty point cloud containing 0 points is stored instead, which prevents the (n+1) multi-line laser sensors from forming a strongly coupled relationship.
(2.2) The point clouds of the auxiliary laser sensors in the current laser scanning point cloud array are transformed into the coordinate system of the main laser sensor, and rectangular filtering is applied to remove the ego-vehicle points from the point clouds in the array. Removing the ego-vehicle points prevents points scanned on the vehicle itself from being falsely detected as obstacles and blocking the unmanned vehicle from moving. On this basis, far-range points can also be removed, which effectively reduces the number of points in subsequent processing.
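A simplified NumPy sketch of the timestamp matching of sub-step (2.1) and the rectangular / far-range filtering of sub-step (2.2) is given below; the threshold t_min, the ego-vehicle box and the range limit are placeholder values, not values specified by the patent.

```python
import numpy as np

def build_scan_array(main_cloud, main_stamp, aux_clouds, aux_stamps, t_min=0.05):
    """Collect the current laser scanning point cloud array (sub-step 2.1, simplified)."""
    scan_array = [main_cloud]
    for cloud, stamp in zip(aux_clouds, aux_stamps):
        if stamp is not None and abs(stamp - main_stamp) < t_min:
            scan_array.append(cloud)
        else:
            scan_array.append(np.empty((0, 3)))   # empty cloud: avoid strong sensor coupling
    return scan_array

def box_filter(points, ego_box=(-2.5, 2.5, -1.2, 1.2), far_range=60.0):
    """Remove ego-vehicle points (inside a rectangle) and far-range points (sub-step 2.2)."""
    x, y = points[:, 0], points[:, 1]
    inside_ego = (x > ego_box[0]) & (x < ego_box[1]) & (y > ego_box[2]) & (y < ego_box[3])
    too_far = np.hypot(x, y) > far_range
    return points[~inside_ego & ~too_far]
```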
Step three: performing ground segmentation on the current laser scanning point cloud array obtained in step two by utilizing the cooperation of the multiple point clouds, and converting it into the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array corresponding to the current frame. The method specifically comprises the following substeps:
and (3.1) uniformly dividing the positions of the laser sensors into p sectors around the central axis by 360 degrees uniformly and uniformly by taking the positions of the laser sensors as circle centers, and dividing each sector into q annular areas along the radial direction according to a given resolution, thereby constructing a sector grid map. The sector grid map accords with the point cloud generation characteristic of the laser sensor, and the grid closer to the center of the circle is divided more finely, so that obstacle information closer to the unmanned vehicle can be conveniently obtained in a fine granularity mode. Then, each laser point in the current laser scanning point cloud array is projected into a corresponding sector grid in sequence. For multiple laser points falling on the same fan grid, the laser point striking the ground should have the smallest height value. Thus, the laser spot with the smallest z-value (z-axis perpendicular to the ground) in each sector grid is found, and then the successive grids in a given direction are successively scannedThe grid extracts the ground using the region growing method, resulting in the ground expression f=ax+b. Where f is the ground height of the grid area, x is the distance of the laser points, and a and b are the expression coefficients. The method of projecting the laser points to the fan-shaped grids and selecting only one point with the minimum z value for each fan-shaped grid can greatly reduce the time complexity of ground fitting and reduce the calculation cost and the memory consumption. Moreover, the method is less affected by the rising number of point clouds, and the final calculation cost is mainly related to the resolution of the sector grid map. Finally, calculating the height difference between all laser points and the ground expression of the grid area where the laser points are positioned as an initial ground clearance h 0
(3.2) For the point cloud scanned by the main laser sensor, the coordinates of all laser points are transformed in turn and projected into the sector grid maps corresponding to the auxiliary laser sensors, and the height difference is then computed from the ground expression of the corresponding grid region. Each laser point scanned by the main laser sensor thus obtains (n+1) height differences. These height differences are given different weights, and the final ground clearance h is computed with the formula h = a_0·h_0 + a_1·h_1 + … + a_n·h_n, where a_0 to a_n are the weights and h_0 to h_n are the height differences;
for the point cloud scanned by each auxiliary laser sensor, the coordinates of all laser points are transformed in turn and projected into the sector grid maps corresponding to the other auxiliary laser sensors, and the height difference is computed from the ground expression of the corresponding grid region. Each laser point of an auxiliary laser sensor thus obtains n height differences. These height differences are given different weights, and the final ground clearance h is computed with the formula h = a_0·h_0 + a_1·h_1 + … + a_{n-1}·h_{n-1}, where a_0 to a_{n-1} are the weights and h_0 to h_{n-1} are the height differences.
(3.3) Each laser point is assigned a label attribute according to a given ground-clearance threshold d_max, and high-altitude points are removed according to a given altitude threshold h_max. When the final ground clearance h of a laser point is greater than d_max it is given an obstacle point label; otherwise it is given a ground point label. Each group of obstacle points and ground points is divided into k obstacle point queues and k ground point queues according to the set angular resolution α, and a sorting algorithm then selects the nearest obstacle point and the farthest ground point of each queue, giving the final two-dimensional obstacle point cloud array and two-dimensional ground point cloud array. This further reduces the number of points to be processed and improves the efficiency of subsequent operations.
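The weighted fusion of the height differences, the d_max / h_max labelling and the per-azimuth-bin thinning might be sketched as follows; the weights, thresholds and angular resolution are placeholder assumptions.

```python
import numpy as np

def label_and_thin(points, height_diffs, weights, d_max=0.3, h_max=3.5, alpha_deg=1.0):
    """Fuse per-sensor height differences, label points, and thin them per azimuth bin.

    points:       (N, 3) laser points in a common frame
    height_diffs: (N, m) clearances of each point w.r.t. the m sector-grid ground models
    weights:      (m,)   fusion weights a_0 ... a_{m-1}
    """
    h = height_diffs @ np.asarray(weights)        # final ground clearance per point
    valid = h <= h_max                            # discard high-altitude points
    pts, h = points[valid], h[valid]

    is_obstacle = h > d_max                       # obstacle vs. ground label
    rng = np.hypot(pts[:, 0], pts[:, 1])
    k = int(360.0 / alpha_deg)
    az = (np.degrees(np.arctan2(pts[:, 1], pts[:, 0])) + 360.0) % 360.0
    az_bin = (az / alpha_deg).astype(int) % k

    obstacle_2d = np.full((k, 2), np.nan)         # nearest obstacle point per bin (x, y)
    ground_2d = np.full((k, 2), np.nan)           # farthest ground point per bin (x, y)
    best_obs = np.full(k, np.inf)
    best_gnd = np.full(k, -np.inf)
    for p, r, b, obs in zip(pts, rng, az_bin, is_obstacle):
        if obs and r < best_obs[b]:
            best_obs[b], obstacle_2d[b] = r, p[:2]
        elif not obs and r > best_gnd[b]:
            best_gnd[b], ground_2d[b] = r, p[:2]
    return obstacle_2d, ground_2d
```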
Step four: combining the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array obtained in the third step with pose information of a vehicle to construct a local probability grid map, and then fusing the results of the multi-point cloud ray tracing model to update the local probability grid map. The method specifically comprises the following substeps:
(4.1) For an unmanned vehicle driving in a wide environment, maintaining a global probability grid map is prohibitively expensive. The method of the invention therefore uses a local probability grid map and only describes the static environment within a certain range around the unmanned vehicle. First, according to the current pose information of the vehicle, the pose of each laser sensor in the vehicle body coordinate system is computed by coordinate transformation, and the point coordinates of the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array are transformed into the vehicle body coordinate system. With the center of the current vehicle body as the origin, the center position of the local probability grid map is determined according to a given offset. Since the unmanned vehicle is more concerned with obstacles in front of it, the given offset toward the front of the vehicle is larger than the given offset toward the rear.
(4.2) For each laser sensor, its position in the vehicle body coordinate system is used as the starting point and the positions of the points of its two-dimensional obstacle point cloud and two-dimensional ground point cloud in the vehicle body coordinate system are used in turn as end points, and the probabilities of the unoccupied and occupied areas are updated with the ray tracing model. At the probability updating stage of the local probability grid map, the map is translated according to the change of its center position between the previous moment and the current moment, while the overlapping area is preserved. When a point of the two-dimensional obstacle point cloud array is the end point, the non-occupancy factor r_free is subtracted in turn from the probability of every grid cell the ray passes through from the starting point to the end point (excluding the cell containing the end point), and the occupancy factor r_occupy is added to the probability of the cell containing the end point. When a point of the two-dimensional ground point cloud array is the end point, the non-occupancy factor r_free is subtracted from the probability of every grid cell the ray passes through from the starting point to the end point. During the update, if the ray tracing models of several laser sensors perform an occupancy operation on the same grid cell, its probability is increased by the corresponding number of occupancy factors r_occupy; if the ray tracing models of several laser sensors perform a non-occupancy operation on the same grid cell, its probability is decreased by the corresponding number of non-occupancy factors r_free. The local probability grid map fusing the n point cloud ray tracing results is finally obtained.
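A minimal sketch of this ray-tracing update is shown below. It uses a simple Bresenham line traversal and additive occupancy/free factors as described; the grid resolution and factor values are illustrative assumptions, and calling the function once per sensor accumulates the per-sensor factors on shared cells, which realizes the fusion described above.

```python
import numpy as np

def bresenham(r0, c0, r1, c1):
    """Integer grid cells on the segment from (r0, c0) to (r1, c1), end cell included."""
    cells, dr, dc = [], abs(r1 - r0), abs(c1 - c0)
    sr, sc = (1 if r1 >= r0 else -1), (1 if c1 >= c0 else -1)
    err, r, c = dr - dc, r0, c0
    while True:
        cells.append((r, c))
        if (r, c) == (r1, c1):
            break
        e2 = 2 * err
        if e2 > -dc:
            err -= dc; r += sr
        if e2 < dr:
            err += dr; c += sc
    return cells

def update_grid(grid, origin_xy, end_points_xy, is_obstacle, cell=0.1,
                r_occupy=0.2, r_free=0.05):
    """Additive probability update of a local grid map centred on the vehicle body origin.

    Assumes the sensor origin lies inside the local map; call once per laser sensor.
    """
    h, w = grid.shape
    to_cell = lambda p: (int(round(p[1] / cell)) + h // 2, int(round(p[0] / cell)) + w // 2)
    r0, c0 = to_cell(origin_xy)
    for p, obs in zip(end_points_xy, is_obstacle):
        r1, c1 = to_cell(p)
        if not (0 <= r1 < h and 0 <= c1 < w):
            continue
        ray = bresenham(r0, c0, r1, c1)
        for r, c in ray[:-1]:                       # free space along the ray
            grid[r, c] = max(grid[r, c] - r_free, 0.0)
        if obs:                                     # obstacle end point occupies its cell
            grid[r1, c1] = min(grid[r1, c1] + r_occupy, 1.0)
        else:                                       # ground end point is also free space
            grid[r1, c1] = max(grid[r1, c1] - r_free, 0.0)
    return grid
```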
Step five: according to the local probability grid map obtained in step four, a binary map describing the occupancy information is obtained by binarization, the binary map is denoised by morphological operations, and finally the obstacle contours are extracted, simplified and segmented. Fig. 4 illustrates an example of the static environment description obtained by multipoint cloud collaborative fusion, where (a) shows the point cloud of the current frame, (b) shows the detection result of the method of the invention, and (c) is the image frame corresponding to the current frame. Step five specifically comprises the following substeps:
(5.1) The probability grid map represents the probability that an obstacle occupies each location on the map and plays a vital role in the method of the invention. However, the probability grid map may contain noise, so it is converted into a binary map by a binarization operation to remove noise effectively and improve the usability of the map. The binarization can use a threshold-based approach: the pixel value of a grid cell whose obstacle occupancy probability is higher than a threshold p_max is set to 1, and otherwise to 0. After binarization, morphological operations such as opening and closing are applied to the binary map to further remove noise and improve image quality. The closing operation fills gaps and holes in the original grid map, making the map more continuous and complete; the opening operation removes small noise points in the original grid map and prevents false obstacle detections;
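With OpenCV, the binarization and morphological clean-up of this sub-step might look as follows (the threshold p_max and the kernel size are illustrative assumptions):

```python
import cv2
import numpy as np

def binarize_and_denoise(prob_grid, p_max=0.6, kernel_size=3):
    """Convert the local probability grid map into a denoised binary occupancy image."""
    occupancy = (prob_grid > p_max).astype(np.uint8) * 255         # 255 = occupied, 0 = free
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(occupancy, cv2.MORPH_CLOSE, kernel)  # fill small gaps and holes
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)      # remove isolated noise points
    return opened
```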
and (5.2) acquiring outline information of the occupied area by using a Canny edge detection method, performing edge smoothing, and then simplifying and dividing an outline polygon to be used as static environment information for describing a road around the unmanned vehicle.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced with equivalents; such modifications and substitutions do not depart from the spirit of the technical solutions according to the embodiments of the present invention.

Claims (9)

1. The road static environment description method based on the multipoint cloud collaborative fusion is characterized by comprising the following steps of:
step one: installing and calibrating a plurality of multi-line laser sensors on the body of the unmanned vehicle, and ensuring that the installed multi-line laser sensors complete full coverage of 360-degree environment around the unmanned vehicle;
step two: acquiring point cloud information of the road environment around the unmanned vehicle by using the calibrated multi-line laser sensor in the first step, performing time synchronization and space alignment on the plurality of point cloud information, and filtering by using the self-vehicle point cloud to obtain a current laser scanning point cloud array;
step three: performing ground segmentation on the current laser scanning point cloud array obtained in the second step by means of multi-point-cloud collaboration, and converting it into the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array corresponding to the current frame;
step four: combining the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array obtained in the third step with pose information of a vehicle to construct a local probability grid map; then fusing the results of the multi-point cloud ray tracing model to update the local probability grid map;
the fourth step comprises the following sub-steps:
(4.1) according to the current pose information of the vehicle, calculating the pose information of each laser sensor in a vehicle body coordinate system through coordinate transformation, and converting the point coordinates in the two-dimensional obstacle point cloud array and the two-dimensional ground point cloud array into the vehicle body coordinate system; the center of the current vehicle body is taken as an origin, and the center position of the local probability grid map is determined according to a given offset;
(4.2) for each laser sensor, taking its position in the vehicle body coordinate system as the starting point and, in turn, the positions of the points of its two-dimensional obstacle point cloud and two-dimensional ground point cloud in the vehicle body coordinate system as end points, and updating the probabilities of the unoccupied and occupied areas with the ray tracing model; during the update, if the ray tracing models of several laser sensors perform an occupancy operation on the same grid cell, its probability is increased by the corresponding number of occupancy factors; if the ray tracing models of several laser sensors perform a non-occupancy operation on the same grid cell, its probability is decreased by the corresponding number of non-occupancy factors; finally obtaining the local probability grid map fusing the n point cloud ray tracing results;
step five: and (3) according to the local probability grid map obtained in the step four, obtaining a binary map describing the occupation information through binarization, denoising the binary map through morphological operation, and finally extracting, simplifying and segmenting the obstacle outline.
2. The method for describing the road static environment based on the coordinated fusion of the multiple point clouds according to claim 1, wherein in the second step, after time synchronization and space alignment are performed on the multiple point clouds, the self-vehicle point cloud filtering and the remote point cloud filtering are performed, so that the current laser scanning point cloud array is obtained.
3. The method for describing the static road environment based on the coordinated multi-point cloud fusion according to claim 1, wherein the first step comprises:
installing and calibrating a multi-line laser sensor on a vehicle body as a main laser sensor;
and respectively installing and calibrating a multi-line laser sensor at other n positions of the vehicle body as an auxiliary laser sensor.
4. The method for describing the road static environment based on the coordinated multi-point cloud fusion according to claim 3, wherein the second step comprises the following sub-steps:
(2.1) acquiring the time stamp of the main laser sensor as the current frame time stamp, and storing the corresponding point cloud into the current laser scanning point cloud array; then, according to the current frame time stamp, searching the point cloud information of all auxiliary laser sensors in turn, and if an auxiliary laser sensor has a point cloud whose time difference is smaller than the set threshold, storing that point cloud into the current laser scanning point cloud array; if not, storing an empty point cloud containing 0 points into the current laser scanning point cloud array;
and (2.2) converting the point cloud information of the auxiliary laser sensor in the current laser scanning point cloud array to a coordinate system of the main laser sensor, and performing self-vehicle point cloud filtering and remote point cloud filtering on the point cloud information in the current laser scanning point cloud array by utilizing rectangular filtering.
5. The method for describing the static road environment based on the coordinated multi-point cloud fusion according to claim 4, wherein the third step comprises the following sub-steps:
(3.1) respectively taking the positions of the laser sensors as circle centers, constructing a sector grid map, and sequentially projecting each laser point in the current laser scanning point cloud array into a corresponding sector grid; finding out the laser point with the minimum z value in each fan-shaped grid, and extracting the ground by using a region growing method to obtain a ground expression; then, calculating the height difference between all laser points and the ground expression of the area where the laser points are located, and taking the height difference as the initial ground clearance height;
(3.2) for the point cloud scanned by the main laser sensor, sequentially converting coordinates of all laser points and projecting the converted coordinates into a sector grid map corresponding to the auxiliary laser sensor, and then calculating the height difference according to the ground expression of the area; thereby, each laser point scanned by the main laser sensor obtains (n+1) height differences; different weights are given to the height differences, and the final ground clearance height is calculated;
for the point cloud scanned by the auxiliary laser sensor, sequentially converting coordinates of all laser points and projecting the converted coordinates into fan-shaped grid maps corresponding to other auxiliary laser sensors, and calculating a height difference according to a ground expression of the area; thereby, the laser point of each auxiliary laser sensor obtains n height differences; different weights are given to the height differences, and the final ground clearance height is calculated;
(3.3) assigning each laser point a label attribute according to a given ground-clearance threshold d_max, and removing high-altitude points according to a given altitude threshold h_max; dividing each group of obstacle points and ground points into k obstacle point queues and k ground point queues according to the set angular resolution α, and then selecting the nearest obstacle point and the farthest ground point of each queue with a sorting algorithm to obtain the final two-dimensional obstacle point cloud array and two-dimensional ground point cloud array.
6. The method for describing the static road environment based on the coordinated multi-point cloud fusion according to claim 1, wherein the fifth step comprises the following sub-steps:
(5.1) converting the local probability grid map into a binary map by using a binarization operation, and removing noise by using an opening and closing operation of image morphology;
and (5.2) acquiring outline information of the occupied area by using an edge extraction method, performing edge smoothing, and then simplifying and dividing an outline polygon to be used as static environment information for describing a road around the unmanned vehicle.
7. The road static environment description method based on multipoint cloud collaborative fusion according to claim 3, wherein the main laser sensor is arranged at the middle position of the top of the vehicle head, and there are 3 auxiliary laser sensors, which are respectively arranged at the middle positions of the left side of the vehicle head, the right side of the vehicle head and the right side of the vehicle tail.
8. The method for describing the static environment of the road based on the coordinated multi-point cloud fusion according to claim 1, wherein in the step (4.1), the given offset in front of the vehicle is larger than the given offset in back of the vehicle, so that the static environment information of the front area of the unmanned vehicle is more emphasized.
9. The method for describing the road static environment based on the coordinated multi-point cloud fusion according to claim 7, wherein the installation position of the auxiliary laser sensor is lower than that of the main laser sensor, so that the ground information around the unmanned vehicle is easier to obtain, and the blind area range is reduced.
CN202310215469.6A 2023-03-08 2023-03-08 Road static environment description method based on multipoint cloud collaborative fusion Active CN116434183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310215469.6A CN116434183B (en) 2023-03-08 2023-03-08 Road static environment description method based on multipoint cloud collaborative fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310215469.6A CN116434183B (en) 2023-03-08 2023-03-08 Road static environment description method based on multipoint cloud collaborative fusion

Publications (2)

Publication Number Publication Date
CN116434183A (en) 2023-07-14
CN116434183B (en) 2023-11-14

Family

ID=87082298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310215469.6A Active CN116434183B (en) 2023-03-08 2023-03-08 Road static environment description method based on multipoint cloud collaborative fusion

Country Status (1)

Country Link
CN (1) CN116434183B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109443369A (en) * 2018-08-20 2019-03-08 北京主线科技有限公司 The method for constructing sound state grating map using laser radar and visual sensor
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud
CN112581612A (en) * 2020-11-17 2021-03-30 上汽大众汽车有限公司 Vehicle-mounted grid map generation method and system based on fusion of laser radar and look-around camera
CN115236673A (en) * 2022-06-15 2022-10-25 北京踏歌智行科技有限公司 Multi-radar fusion sensing system and method for large vehicle
CN115330969A (en) * 2022-10-12 2022-11-11 之江实验室 Local static environment vectorization description method for ground unmanned vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109781119B (en) * 2017-11-15 2020-01-21 百度在线网络技术(北京)有限公司 Laser point cloud positioning method and system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CVR-LSE: Compact Vectorization Representation of Local Static Environments for Unmanned Ground Vehicles;Haiming Gao 等;arXiv;第1-13页 *
NEGL: Lightweight and Efficient Neighborhood Encoding-Based Global Localization for Unmanned Ground Vehicles;Haiming Gao 等;《IEEE Transactions on Vehicular Technology》;第72卷(第6期);第7111-7122页 *
基于三维激光雷达的道路边界提取和障碍物检测算法;王灿 等;《模式识别与人工智能》;第33卷(第04期);第353-362页 *
基于四线激光雷达的无人车障碍物检测算法;王海 等;《中国机械工程》;第29卷(第15期);第1884-1889页 *

Also Published As

Publication number Publication date
CN116434183A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
Zhang et al. Vehicle tracking and speed estimation from roadside lidar
CN108445480B (en) Mobile platform self-adaptive extended target tracking system and method based on laser radar
CN107045629B (en) Multi-lane line detection method
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
CN109948661B (en) 3D vehicle detection method based on multi-sensor fusion
CN113379805B (en) Multi-information resource fusion processing method for traffic nodes
Zai et al. 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts
CN114842438B (en) Terrain detection method, system and readable storage medium for automatic driving automobile
Sohn et al. Data fusion of high-resolution satellite imagery and LiDAR data for automatic building extraction
Sohn et al. Using a binary space partitioning tree for reconstructing polyhedral building models from airborne lidar data
CN106199558A (en) Barrier method for quick
CN108828621A (en) Obstacle detection and road surface partitioning algorithm based on three-dimensional laser radar
CN112801022A (en) Method for rapidly detecting and updating road boundary of unmanned mine card operation area
CN113009453B (en) Mine road edge detection and mapping method and device
CN112464812A (en) Vehicle-based sunken obstacle detection method
CN111640323A (en) Road condition information acquisition method
CN114488190A (en) Laser radar 3D point cloud ground detection method
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN114782729A (en) Real-time target detection method based on laser radar and vision fusion
CN115113206B (en) Pedestrian and obstacle detection method for assisting driving of underground rail car
CN115327572A (en) Method for detecting obstacle in front of vehicle
CN114821526A (en) Obstacle three-dimensional frame detection method based on 4D millimeter wave radar point cloud
CN115330969A (en) Local static environment vectorization description method for ground unmanned vehicle
CN118411507A (en) Semantic map construction method and system for scene with dynamic target
CN114740493A (en) Road edge detection method based on multi-line laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant