WO2023232165A1 - Obstacle detection method and system based on multi-radar data fusion - Google Patents

Obstacle detection method and system based on multi-radar data fusion - Download PDF

Info

Publication number
WO2023232165A1
WO2023232165A1, PCT/CN2023/109851, CN2023109851W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
lidar
target
obj
feature
Prior art date
Application number
PCT/CN2023/109851
Other languages
English (en)
French (fr)
Inventor
谢国涛
曹昌
秦晓辉
徐彪
秦兆博
王晓伟
秦洪懋
边有钢
胡满江
丁荣军
Original Assignee
湖南大学无锡智能控制研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 湖南大学无锡智能控制研究院
Publication of WO2023232165A1 publication Critical patent/WO2023232165A1/zh

Links

Classifications

    • G01S 13/931: Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S 13/865: Combination of radar systems with lidar systems
    • G01S 17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S 7/36: Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
    • G01S 7/41: Radar systems using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S 7/4802: Lidar systems using analysis of echo signal for target characterisation
    • G01S 7/495: Counter-measures or counter-counter-measures using electronic or electro-optical means
    • G06V 10/762: Image or video recognition using pattern recognition or machine learning, using clustering
    • G06V 10/803: Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction or classification level
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the present invention relates to the technical field of radar data fusion, and in particular to a method and system for obstacle detection in a dust scene using multi-radar data fusion.
  • In mining transportation scenarios, mining dump trucks have large load capacity and good stability, so they have become important transportation equipment in mines. Such trucks were traditionally driven by drivers with rich driving experience, but harsh working environments, remote locations, and demanding work requirements have made drivers increasingly difficult to recruit. Using autonomous driving of mining dump trucks to reduce the dependence of mine transportation on manual labor is therefore of great significance for future mine production and transportation. Accurate and stable detection of obstacles on the road is essential to autonomous driving of mining dump trucks: it allows the truck to plan safe driving paths in advance and avoid collisions with obstacles and the resulting safety incidents.
  • obstacle detection in current mining environments still faces many challenges, such as unstructured road environments, sensor noise caused by dust, and low brightness in night driving environments.
  • The unstructured road surface is rugged, making it difficult to extract road surface information directly, which hinders the extraction of obstacles; dust easily causes sensors to generate noise data, whose features can be extracted incorrectly and lead the obstacle detection algorithm to false detections; low brightness at night degrades camera image quality, making effective feature data difficult to extract and causing image-based obstacle detection algorithms to miss detections. Obstacle detection systems for the mining environment are therefore gradually moving toward multi-sensor fusion: by fusing data from multiple sensors, the false detections and missed detections that a single sensor may produce in challenging mining scenarios can be overcome, enabling accurate and stable detection of obstacles in the mining environment.
  • the purpose of the present invention is to provide a multi-radar data fusion obstacle detection method and system in a dust scene, which can accurately and stably detect obstacle targets in a dust scene.
  • the present invention provides a multi-radar data fusion obstacle detection method, which includes:
  • Step S1 obtain multi-radar data
  • Step S2 Convert the lidar point cloud in the multi-radar data into a matrixed point cloud depth map
  • Step S3 Calculate the point cloud slope features on the point cloud depth map, obtain non-ground feature point cloud clusters based on a connected domain search, take point cloud clusters whose number of feature points in a single connected domain exceeds a threshold as obstacle point cloud clusters, and calculate the ground height, connectivity and surface dispersion of the non-ground feature point cloud clusters;
  • Step S4 Convert the coordinates of the millimeter-wave radar obstacle detection data in the multi-radar data to the lidar coordinate system, and calculate the associated characteristic value of a single lidar target based on the distance and/or intersection-over-union ratio between the lidar target and the millimeter-wave radar target;
  • Step S5 Calculate the probability that the point cloud clustering is a dust point cloud clustering, and obtain the point cloud clustering positions of non-dust obstacles.
  • the point cloud slope feature α in step S3 is obtained from the angle between the xy plane of the lidar coordinate system and the straight line formed by vertically adjacent obstacle feature points p1 and p2 in the point cloud depth map, as shown in equation (3), where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l12 is the distance between p1 and p2 on the xy plane.
  • the connected domain search in step S3 includes searching in the up-down and left-right directions of the point cloud depth map; the Euclidean geometric distance threshold l2_thresh between p1 and p2 is calculated by equation (5), and the connected domain clustering judgment conditions between p1 and p2 are set as equations (6) and (7), where k is the distance threshold coefficient, the angle is that formed by the lines connecting p1 and p2 to the geometric center of the lidar, max_d is the distance from the lidar origin to whichever of p1 and p2 lies farther from the geometric center of the lidar, b is a preset fixed offset coefficient, and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the point cloud depth map.
  • in step S4, the method for calculating the associated characteristic value of a single lidar target based on the distance and/or intersection-over-union ratio between the lidar target and the millimeter-wave radar target includes:
  • Step S41 determine whether the lidar target and the millimeter wave radar target are related, if so, proceed to step S42;
  • Step S42 use equation (16) or equation (17) to calculate the associated characteristic value of a single lidar target, where single_relevancy and obj_relevancy both represent the associated characteristic value of a single lidar target, rect_A represents the target rectangular frame of the lidar, rect_B represents the target rectangular frame of the millimeter-wave radar, ∩ represents the intersection, ∪ represents the union, rect_C represents the minimum rectangular frame that can enclose rect_A and rect_B, num_obj_rele is the number of millimeter-wave radar targets successfully associated with the current lidar target, single_relevancy_n represents the nth associated characteristic value, and max_single_relevancy represents the maximum associated characteristic value of the current lidar target.
  • step S41 specifically includes:
  • Step S411 determine whether the lidar target and the millimeter-wave radar target overlap; if so, the two can be directly considered successfully associated; if not, proceed to step S412;
  • Step S412 determine whether the distance between the millimeter-wave radar target and the lidar target is less than the association distance threshold connect_distance_thresh; if so, the millimeter-wave radar target and the lidar target are considered successfully associated.
  • k1, k2, k3 and k4 are the scaling coefficients of the different features, obj_height, obj_ratio and obj_discrete are the ground height, connectivity and surface dispersion of the non-ground feature point cloud cluster obtained in step S3, and obj_relevancy is the associated characteristic value of a single lidar target.
  • the present invention also provides a multi-radar data fusion obstacle detection system, which includes:
  • a radar data acquisition unit used to acquire multiple radar data
  • a point cloud depth map conversion unit configured to convert the lidar point cloud in the multi-radar data into a matrixed point cloud depth map
  • a point cloud feature calculation unit, used to calculate the point cloud slope features on the point cloud depth map, obtain non-ground feature point cloud clusters based on a connected domain search, take point cloud clusters whose number of feature points in a single connected domain exceeds a threshold as obstacle point cloud clusters, and calculate the ground height, connectivity and surface dispersion of the non-ground feature point cloud clusters;
  • an associated feature calculation unit, used to convert the coordinates of the millimeter-wave radar obstacle detection data in the multi-radar data to the lidar coordinate system, and to calculate the associated characteristic value of a single lidar target according to the distance and/or intersection-over-union ratio between the lidar target and the millimeter-wave radar target;
  • a probability calculation unit is used to calculate the probability that the point cloud cluster is a dust point cloud cluster, and to obtain the point cloud cluster positions of non-dust obstacles.
  • the point cloud slope feature α in the point cloud feature calculation unit is obtained from the angle between the xy plane of the lidar coordinate system and the straight line formed by vertically adjacent obstacle feature points p1 and p2 in the point cloud depth map, as shown in equation (3), where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l12 is the distance between p1 and p2 on the xy plane.
  • the connected domain search in the point cloud feature calculation unit includes searching in the up-down and left-right directions of the point cloud depth map; the Euclidean geometric distance threshold l2_thresh between p1 and p2 is calculated by equation (5), and the connected domain clustering judgment conditions between p1 and p2 are set as equations (6) and (7), where k is the distance threshold coefficient, the angle is that formed by the lines connecting p1 and p2 to the geometric center of the lidar, max_d is the distance from the lidar origin to whichever of p1 and p2 lies farther from the geometric center of the lidar, b is a preset fixed offset coefficient, and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the point cloud depth map.
  • dust_probability = k1·obj_height + k2·obj_ratio + k3·obj_discrete + k4·obj_relevancy, (19)
  • k1, k2, k3 and k4 are the scaling coefficients of the different features, obj_height, obj_ratio and obj_discrete are the ground height, connectivity and surface dispersion of the non-ground feature point cloud cluster obtained in step S3, and obj_relevancy is the associated characteristic value of a single lidar target.
  • This invention does not rely on deep learning, so it does not require additional annotation of data and additional computing resources, and is less difficult to deploy and apply;
  • the sensors are lidar and millimeter-wave radar, which do not need to provide additional light illumination and can work normally at night to meet the night transportation needs of mines;
  • Figure 1 is a schematic diagram of sensor installation according to an embodiment of the present invention.
  • Figure 2 is a flow chart of an obstacle detection method provided by an embodiment of the present invention.
  • Figure 3 is a schematic diagram of point cloud slope calculation provided by an embodiment of the present invention.
  • Figure 4 is a schematic diagram of setting the Euclidean spatial geometric distance threshold provided by an embodiment of the present invention.
  • Figure 5 is a schematic diagram of target association judgment provided by an embodiment of the present invention.
  • Traditional obstacle detection methods based on a single sensor often face serious problems in the mining environment. Cameras are underexposed in weak light and image poorly, so camera-based obstacle detection performs badly at night, has a high missed-detection rate, and struggles to meet the needs of the low-light night transportation environment in mines. Lidar has a short wavelength and is easily affected by suspended matter such as dust, rain and snow, producing noise data, so lidar-based obstacle detection readily produces false detections in mine dust environments. Millimeter-wave radar data is low-dimensional and carries little information, making stable obstacle detection based on millimeter-wave radar alone difficult.
  • the present invention therefore fuses lidar and millimeter-wave radar data, combining the richness of lidar data, the immunity of millimeter-wave radar to dust, and the immunity of both to lighting conditions, to achieve accurate and stable detection of obstacles in mining transportation scenarios.
  • Step S1 obtain multi-radar data.
  • Multi-radar data includes lidar point clouds obtained by lidar and millimeter-wave radar obstacle detection data obtained by millimeter-wave radar.
  • the installation method of lidar and millimeter wave radar is shown in Figure 1.
  • the direction pointed by the arrow is the forward direction of the vehicle.
  • The lidar and the millimeter-wave radar are both installed on the front of the vehicle, with the lidar at the top and the millimeter-wave radar at the bottom.
  • the angle difference between the two in the forward direction is small, and the detection area overlap between the two is high.
  • the millimeter-wave radar is installed at a height of more than 30 cm and less than 1 m above the ground, and the lidar is installed within 1 m to 2 m above the millimeter-wave radar, which is conducive to the fusion of sensor information.
  • Step S2 Convert the lidar point cloud in the multi-radar data into a matrixed point cloud depth map.
  • the lidar point cloud in the multi-radar data is converted into a matrixed point cloud depth map using the projection model shown in the following equations (1) and (2):
  • θ0 and φ0 are respectively the horizontal and vertical starting angles of the point cloud matrix, (x, y, z) are the three-dimensional coordinates of a laser point in the lidar coordinate system, whose origin and orientation are determined by the lidar scanning center and direction, and Δθ and Δφ are respectively the horizontal and vertical angular resolutions of the point cloud matrix, which can be set by reference to the angular resolution of the lidar scan.
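The exact form of equations (1) and (2) is not reproduced in this extract; as a sketch, the following Python assumes the common spherical projection (azimuth angle to column, elevation angle to row), with `theta0`/`phi0` as the starting angles and `d_theta`/`d_phi` as the angular resolutions described above. The index conventions are assumptions.

```python
import math

def project_to_depth_map(points, theta0, phi0, d_theta, d_phi, rows, cols):
    """Project 3-D lidar points into a matrixed depth map.

    Assumes the common spherical projection implied by Eqs. (1)-(2):
    azimuth angle -> column, elevation angle -> row. The cell value is
    the point's range from the lidar origin.
    """
    depth = [[0.0] * cols for _ in range(rows)]
    for (x, y, z) in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0:
            continue
        theta = math.atan2(y, x)      # horizontal (azimuth) angle
        phi = math.asin(z / r)        # vertical (elevation) angle
        col = int((theta - theta0) / d_theta)
        row = int((phi - phi0) / d_phi)
        if 0 <= row < rows and 0 <= col < cols:
            depth[row][col] = r       # keep range as the "depth" value
    return depth
```

A 360° scan with 1° resolution, for example, would use `d_theta = math.radians(1)` and `cols = 360`.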
  • Step S3 Calculate the point cloud slope features on the point cloud depth map, obtain non-ground feature point cloud clusters based on a connected domain search, and take non-ground feature point cloud clusters whose number of feature points in a single connected domain exceeds the point cloud quantity threshold point_num_obj_thresh as obstacle point cloud clusters; then calculate the ground height, connectivity and surface dispersion of the non-ground feature point cloud clusters. The threshold point_num_obj_thresh is set according to the minimum obstacle size to be detected.
  • the point cloud slope feature α in step S3 is obtained from the angle between the xy plane of the lidar coordinate system and the straight line formed by vertically adjacent points p1 and p2 in the point cloud depth map, as shown in equation (3), where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l12 is the distance between p1 and p2 on the xy plane, as shown in equation (4).
  • the obstacle feature points of an obstacle point cloud cluster are obtained by setting a threshold α0 based on the maximum slope angle of the road surface and practical debugging experience: if the slope feature of a point is greater than α0, the point is considered an obstacle feature point.
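The slope feature of equation (3) and the threshold test against α0 can be sketched directly from the definitions above; `l12` here is the xy-plane distance of equation (4).

```python
import math

def slope_feature(p1, p2):
    """Point cloud slope feature alpha (Eq. 3): the angle between the line
    through vertically adjacent points p1, p2 and the lidar xy-plane."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    l12 = math.hypot(x1 - x2, y1 - y2)   # Eq. (4): distance on the xy-plane
    return math.atan2(abs(z1 - z2), l12)

def is_obstacle_point(p1, p2, alpha0):
    """A point is treated as an obstacle feature point when the slope
    feature exceeds alpha0 (set from the maximum road slope angle)."""
    return slope_feature(p1, p2) > alpha0
```

A vertical wall gives a slope near 90°, while two points on flat ground give a slope near 0°, which is why the threshold separates obstacles from the road surface.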
  • the connected domain search in step S3 includes searching in the up-down and left-right directions of the point cloud depth map; the Euclidean geometric distance threshold l2_thresh between p1 and p2 is calculated by equation (5), and the connected domain clustering judgment conditions between p1 and p2 are set as equations (6) and (7), where k is the distance threshold coefficient, usually set between 1 and 2 according to actual debugging; the angle is that formed by the lines connecting p1 and p2 to the geometric center of the lidar; max_d is the distance from the lidar origin to whichever of p1 and p2 lies farther from it; b is a preset fixed offset coefficient, usually set slightly larger than the lidar measurement accuracy; and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the point cloud depth map.
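The exact form of equations (5)-(7) is not reproduced in this extract. The sketch below assumes a common adaptive rule consistent with the quantities named above: `l2_thresh = k * max_d * sin(beta) + b`, plus depth-map adjacency as the clustering condition. Both the threshold formula and the adjacency test are assumptions.

```python
import math

def l2_threshold(p1, p2, beta, k=1.5, b=0.05):
    """Adaptive Euclidean distance threshold between p1 and p2 (sketch of Eq. 5).

    beta: angle between the two lidar rays to p1 and p2.
    max_d: range of whichever point lies farther from the lidar origin.
    k: distance threshold coefficient (typically 1-2).
    b: fixed offset, set slightly larger than the lidar measurement accuracy.
    """
    max_d = max(math.dist(p1, (0.0, 0.0, 0.0)), math.dist(p2, (0.0, 0.0, 0.0)))
    return k * max_d * math.sin(beta) + b

def same_connected_domain(p1, p2, cell1, cell2, beta, k=1.5, b=0.05):
    """Clustering judgment (assumed form of Eqs. 6-7): the points must be
    adjacent in the depth map and closer than the adaptive threshold."""
    row1, col1 = cell1
    row2, col2 = cell2
    adjacent = abs(row1 - row2) <= 1 and abs(col1 - col2) <= 1
    return adjacent and math.dist(p1, p2) < l2_threshold(p1, p2, beta, k, b)
```

The adaptive term grows with range, reflecting that neighboring scan points on the same object sit farther apart the farther the object is from the sensor.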
  • the ground height of a point cloud cluster in step S3 is the distance from the lowest point of the cluster to the fitted ground plane. The random sample consensus (RANSAC) algorithm is combined with the least squares method to perform ground plane fitting. RANSAC takes three points at a time to determine a plane equation, then finds all points whose distance to that plane is less than a threshold as the model's inliers; the number of iterations num_iter is set according to the required model accuracy, sampling and model evaluation are repeated num_iter times, and the plane equation with the largest number of inliers is selected as the consensus model.
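The RANSAC procedure just described can be sketched as follows (the final least-squares refit over the inliers, which the text combines with RANSAC, is omitted for brevity; the plane parameterization is an assumption):

```python
import random

def fit_ground_plane(points, num_iter=100, dist_thresh=0.1, seed=0):
    """RANSAC-style ground plane fitting: sample three points, form a
    plane, count inliers within dist_thresh, repeat num_iter times, and
    keep the plane with the most inliers.

    Returns (a, b, c, d) with a*x + b*y + c*z + d = 0 and unit normal.
    """
    rng = random.Random(seed)
    best_plane, best_count = None, -1
    for _ in range(num_iter):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal = cross product of two edge vectors.
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        if norm < 1e-9:               # degenerate (collinear) sample
            continue
        a, b, c = n[0] / norm, n[1] / norm, n[2] / norm
        d = -(a * p1[0] + b * p1[1] + c * p1[2])
        count = sum(1 for p in points
                    if abs(a * p[0] + b * p[1] + c * p[2] + d) < dist_thresh)
        if count > best_count:
            best_plane, best_count = (a, b, c, d), count
    return best_plane
```

The cluster's ground height then follows as the point-to-plane distance of the cluster's lowest point against the returned plane.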
  • the feature point cloud connectivity obj_ratio is the ratio of the number of non-clustered feature points near the point cloud cluster, point_num_no_obj, to the number of feature points of the obstacle itself, point_num_obj, as shown in equation (12). It represents the proportion of feature points near the current cluster that cannot be correctly connected by the search, and thus characterizes the proportion of noise points near the cluster. Point clouds in a rectangular area around the cluster are first collected; Euclidean clustering is then used to obtain the points belonging to the same cluster as the current one; finally, points already included in other point cloud clusters are removed, and the remaining points are the non-clustered feature points near the cluster.
  • step S3 the surface dispersion is used to characterize the degree of continuity and smoothness of the point cloud clustering surface.
  • x left , x mid , and x right are the values of three adjacent points in the x-axis direction on the point cloud depth map.
  • the point cloud clustering surface dispersion obj_discrete can be expressed as the following formula (15):
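Equation (15) is not reproduced in this extract. As a sketch, the following assumes the dispersion is built from the discrete second difference of horizontally adjacent depth values x_left, x_mid, x_right: a smooth, continuous surface gives small second differences, while scattered dust returns give large ones. The mean absolute second difference used here is an assumption.

```python
def surface_dispersion(x_values):
    """Sketch of the surface dispersion obj_discrete (assumed form of Eq. 15).

    x_values: depth-map x-axis values of consecutive horizontally adjacent
    points in one cluster row. For each interior point the second
    difference |x_left + x_right - 2 * x_mid| is taken, and the mean over
    the row is returned; larger values mean a rougher, more scattered
    surface.
    """
    if len(x_values) < 3:
        return 0.0
    diffs = [abs(x_values[i - 1] + x_values[i + 1] - 2.0 * x_values[i])
             for i in range(1, len(x_values) - 1)]
    return sum(diffs) / len(diffs)
```

A linearly varying surface (e.g. a wall seen at an angle) scores 0, while alternating near/far returns typical of dust score high.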
  • Step S4 Convert the coordinates of the millimeter-wave radar obstacle detection data in the multi-radar data to the lidar coordinate system, and calculate the associated characteristic value of a single lidar target based on the distance and/or intersection-over-union ratio between the lidar target and the millimeter-wave radar target.
  • the correlation feature calculation in step S4 includes two parts, namely target-level correlation judgment and correlation feature value calculation.
  • the target-level association judgment uses the one-to-many association algorithm between lidar targets and millimeter-wave radar targets shown in Figure 5.
  • the association strategies include overlap judgment and distance judgment: if the lidar target overlaps the millimeter-wave radar target, the two are directly considered successfully associated; if the two targets do not overlap, an association distance threshold connect_distance_thresh is set, and if the distance between the millimeter-wave radar target and the lidar target is less than this threshold, the two are considered successfully associated.
  • the association distance threshold is usually set to two to three times the size of the minimum obstacle to be detected.
  • the associated characteristic value single_relevancy of a single lidar target is calculated from the intersection-over-union ratio between the lidar target and the millimeter-wave radar target, as shown in the following equation (16):
  • rect_A represents the target rectangular frame of the lidar, rect_B represents the target rectangular frame of the millimeter-wave radar, ∩ represents the intersection, ∪ represents the union, and rect_C represents the smallest rectangular frame that can enclose rect_A and rect_B.
  • a lidar target may have multiple millimeter-wave radar targets associated with it, so the associated characteristic value obj_relevancy of a single lidar target can be uniformly expressed as the following formula (17):
  • num_obj_rele is the number of millimeter-wave radar targets successfully associated with the current lidar target, and single_relevancy_n represents the nth associated characteristic value. For a lidar target with no successfully associated millimeter-wave radar target, the associated characteristic value is 0.
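The exact form of equations (16) and (17) is not reproduced in this extract. Since rect_C (the smallest rectangle enclosing both frames) appears alongside the intersection-over-union, the sketch below assumes a GIoU-style value, IoU minus area(C outside A∪B)/area(C); the mean used for the one-to-many aggregation in `obj_relevancy` is also an assumption.

```python
def iou_and_giou(rect_a, rect_b):
    """Sketch of the per-pair association value of Eq. (16).

    Rectangles are axis-aligned tuples (x_min, y_min, x_max, y_max).
    Computes IoU - (area_C - area_union) / area_C, where rect_C is the
    smallest rectangle enclosing both inputs (a GIoU-style value).
    """
    ax0, ay0, ax1, ay1 = rect_a
    bx0, by0, bx1, by1 = rect_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    union = area_a + area_b - inter
    # rect_C: smallest axis-aligned rectangle enclosing both rects.
    area_c = (max(ax1, bx1) - min(ax0, bx0)) * (max(ay1, by1) - min(ay0, by0))
    return inter / union - (area_c - union) / area_c

def obj_relevancy(single_values):
    """Sketch of Eq. (17): aggregate the per-association values of the
    num_obj_rele associated targets (mean here is an assumption).
    No association -> relevancy 0, as the text states."""
    return sum(single_values) / len(single_values) if single_values else 0.0
```

Perfectly coincident frames score 1, while disjoint frames score negative, so a lidar cluster with no strong millimeter-wave support (a typical dust cluster) ends up with a low relevancy.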
  • Step S5 Calculate the probability that the point cloud clustering is a dust point cloud clustering, and obtain the point cloud clustering positions of non-dust obstacles.
  • k1, k2, k3 and k4 are the scaling coefficients of the different features, set according to their importance in dust judgment so that the final probability value is scaled to around 1; obj_height, obj_ratio and obj_discrete are the ground height, connectivity and surface dispersion of the non-ground feature point cloud cluster obtained in step S3, and obj_relevancy is the associated characteristic value of a single lidar target.
  • the linear function in equation (19) can also be replaced by a quadratic or cubic function to achieve unification of the features.
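The linear fusion of equation (19) and the filtering rule described later (clusters with dust_probability below 1 are kept as real obstacles) can be sketched as follows; the example coefficient values in the test are illustrative only.

```python
def dust_probability(obj_height, obj_ratio, obj_discrete, obj_relevancy,
                     k1, k2, k3, k4):
    """Eq. (19): linear fusion of the four cluster features. k1-k4 weight
    each feature's importance for dust judgment and scale the result to
    around 1; the text notes a quadratic or cubic form may replace the
    linear one."""
    return (k1 * obj_height + k2 * obj_ratio
            + k3 * obj_discrete + k4 * obj_relevancy)

def is_real_obstacle(prob, thresh=1.0):
    """Per the description, clusters whose dust_probability is below the
    threshold (1 in the text) are output as real obstacle clusters."""
    return prob < thresh
```
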
  • Embodiments of the present invention also provide a multi-radar data fusion obstacle detection system, which includes a radar data acquisition unit, a point cloud depth map conversion unit, a point cloud feature calculation unit, an associated feature calculation unit and a probability calculation unit, wherein:
  • the radar data acquisition unit is used to acquire multiple radar data.
  • the point cloud depth map conversion unit is used to convert the lidar point cloud in the multi-radar data into a matrixed point cloud depth map.
  • the point cloud feature calculation unit is used to calculate the point cloud slope features on the point cloud depth map, obtain non-ground feature point cloud clusters based on a connected domain search, take point cloud clusters whose number of feature points in a single connected domain exceeds a threshold as obstacle point cloud clusters, and calculate the ground height, connectivity and surface dispersion of the non-ground feature point cloud clusters.
  • the associated feature calculation unit is used to convert the coordinates of the millimeter-wave radar obstacle detection data in the multi-radar data to the lidar coordinate system, and to calculate the associated characteristic value of a single lidar target based on the distance and/or intersection-over-union ratio between the lidar target and the millimeter-wave radar target.
  • the probability calculation unit is used to calculate the probability that the point cloud cluster is a dust point cloud cluster, and obtain the point cloud cluster positions of non-dust obstacles.
  • the point cloud slope feature in the point cloud feature calculation unit is obtained from the angle α between the straight line formed by vertically adjacent points p1 and p2 in the point cloud depth map and the xy plane of the lidar coordinate system, as shown in equation (3), where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l12 is the distance between p1 and p2 on the xy plane.
  • the connected domain search in the point cloud feature calculation unit includes searching in both the longitudinal and transverse directions of the point cloud depth map.
  • the Euclidean spatial geometric distance threshold l2_thresh between p1 and p2 is given by equation (5), and the connected-domain clustering conditions between two obstacle feature points by equations (6) and (7), where:
  • k is the distance threshold coefficient;
  • β is the angle formed by the lines connecting p1 and p2 to the lidar origin;
  • max_d is the distance between the farther of p1 and p2 and the lidar origin;
  • b is a preset fixed offset coefficient;
  • (row1, col1) and (row2, col2) are the coordinates of p1 and p2 in the point cloud depth map, respectively.
  • dust_probability = k1×obj_height + k2×obj_ratio + k3×obj_discrete + k4×obj_relevancy, #(19)
  • k1, k2, k3 and k4 are the scaling coefficients of the different features; obj_height, obj_ratio and obj_discrete are the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters obtained in step S3; obj_relevancy is the association feature value of a single lidar target.
  • in step S7, point cloud clusters with dust_probability less than 1 are output as real obstacle point cloud clusters.
  • the present invention combines the complementary characteristics of lidar and millimeter-wave radar and proposes an obstacle detection method and system for dust scenes based on multi-radar data fusion; through feature calculation and fusion of sensor echo characteristics it achieves stable and accurate obstacle detection in dust scenes, and it also works normally at night, which is of great significance for unmanned driving in mining scenarios as well as for autonomous driving in rainy, snowy and dusty urban environments.
  • besides mining dump trucks, the present invention is also applicable to other carrier equipment or engineering vehicles such as trucks and excavators; besides mines, it can also be used in other environments such as cities and villages, where it likewise handles dust, rain, snow and similar scenarios.


Abstract

The present invention discloses a multi-radar data fusion obstacle detection method and system, comprising: step S1, acquiring multi-radar data; step S2, converting the lidar point cloud in the multi-radar data into a matrixed point cloud depth map; step S3, computing point cloud slope features on the depth map, obtaining non-ground feature point cloud clusters by connected-domain search, taking the clusters whose feature point count within a single connected domain exceeds a threshold as obstacle point cloud clusters, and computing the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters; step S4, transforming the coordinates of the millimeter-wave radar obstacle detections in the multi-radar data into the lidar coordinate system, and computing the association feature value of a single lidar target from the distance and/or intersection-over-union between lidar targets and millimeter-wave radar targets; step S5, computing the probability that a point cloud cluster is a dust cluster, and obtaining the cluster positions of non-dust obstacles.

Description

Obstacle Detection Method and System Based on Multi-Radar Data Fusion — Technical Field
The present invention relates to the technical field of radar data fusion, and in particular to a method and system for obstacle detection in dust scenes based on multi-radar data fusion.
Background Art
In mine transport scenarios, mining dump trucks offer large payloads and good stability, which has made them a key piece of transport equipment. They have traditionally been driven by experienced truck drivers, but harsh working environments, remote locations and demanding job requirements make such drivers increasingly hard to recruit. Reducing the dependence of mine transport on human labour by automating mining dump trucks is therefore of great significance for the future development of mine production and transport. Accurate and stable detection of obstacles on the road is essential to such automation: it allows a truck to plan a safe path in advance and avoid collisions and the resulting accidents.
Obstacle detection in mine environments nevertheless still faces many challenges, such as unstructured road environments, sensor noise caused by airborne dust, and low ambient brightness when driving at night. Unstructured, uneven road surfaces make it difficult to extract ground information directly, hindering obstacle extraction; dust easily produces noisy sensor data whose features are mistakenly extracted, causing false detections; low brightness at night degrades camera image quality, making it hard to extract useful features and causing missed detections in image-based algorithms. Obstacle detection systems for mines are therefore moving towards multi-sensor fusion: by fusing data from several sensors, the false and missed detections that a single sensor may suffer in challenging mine scenes can be overcome, achieving accurate and stable obstacle detection.
Summary of the Invention
The object of the present invention is to provide a method and system for obstacle detection in dust scenes based on multi-radar data fusion, capable of accurately and stably detecting obstacle targets in dust scenes.
To this end, the present invention provides a multi-radar data fusion obstacle detection method, comprising:
Step S1, acquiring multi-radar data;
Step S2, converting the lidar point cloud in the multi-radar data into a matrixed point cloud depth map;
Step S3, computing point cloud slope features on the depth map, obtaining non-ground feature point cloud clusters by connected-domain search, taking the clusters whose feature point count within a single connected domain exceeds a threshold as obstacle point cloud clusters, and computing the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters;
Step S4, transforming the coordinates of the millimeter-wave radar obstacle detections in the multi-radar data into the lidar coordinate system, and computing the association feature value of a single lidar target from the distance and/or intersection-over-union between lidar targets and millimeter-wave radar targets;
Step S5, computing the probability that a point cloud cluster is a dust cluster, and obtaining the cluster positions of non-dust obstacles.
Further, the point cloud slope feature α in step S3 is obtained from the angle between the straight line formed by two vertically adjacent obstacle feature points p1, p2 in the depth map and the xy plane of the lidar coordinate system, as in equation (3):
α = arctan(|z1 − z2| / l1,2), #(3)
where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l1,2 is the distance between p1 and p2 on the xy plane.
Further, the connected-domain search in step S3 includes searches in both the vertical and horizontal directions of the depth map; the Euclidean spatial geometric distance threshold l2_thresh between p1 and p2 is given by equation (5), and the connected-domain clustering conditions between p1 and p2 by equations (6) and (7), where k is the distance threshold coefficient, β is the angle formed by the lines connecting p1 and p2 to the geometric center of the lidar, max_d is the distance between the farther of p1 and p2 and the lidar origin, b is a preset fixed offset coefficient, and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the depth map.
Further, the method of computing the association feature value of a single lidar target from the distance and/or intersection-over-union between lidar targets and millimeter-wave radar targets in step S4 comprises:
Step S41, judging whether a lidar target and a millimeter-wave radar target are associated, and if so, proceeding to step S42;
Step S42, computing the association feature value of the single lidar target with equation (16) or (17),
where single_relevancy and obj_relevancy both denote the association feature value of a single lidar target, rectA is the lidar target rectangle, rectB the millimeter-wave radar target rectangle, ∩ denotes intersection, ∪ denotes union, rectC is the smallest rectangle enclosing rectA and rectB, num_obj_rele is the number of millimeter-wave radar targets successfully associated with the current lidar target, single_relevancy_n is the n-th association feature value, and max_single_relevancy is the maximum association feature value of the current lidar target.
Further, step S41 specifically comprises:
Step S411, judging whether the lidar target and the millimeter-wave radar target overlap; if so, the two are directly regarded as associated; if not, proceeding to step S412;
Step S412, judging whether the distance between the millimeter-wave radar target and the lidar target is smaller than the association distance threshold connect_distance_thresh; if so, the two targets are regarded as successfully associated.
Further, in step S5, the probability dust_probability that a point cloud cluster belongs to a dust cluster is computed with equation (19):
dust_probability = k1×obj_height + k2×obj_ratio + k3×obj_discrete + k4×obj_relevancy, #(19)
where k1, k2, k3 and k4 are the scaling coefficients of the different features; obj_height, obj_ratio and obj_discrete are the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters obtained in step S3; obj_relevancy is the association feature value of a single lidar target.
The present invention also provides a multi-radar data fusion obstacle detection system, comprising:
a radar data acquisition unit for acquiring multi-radar data;
a point cloud depth map conversion unit for converting the lidar point cloud in the multi-radar data into a matrixed point cloud depth map;
a point cloud feature calculation unit for computing point cloud slope features on the depth map, obtaining non-ground feature point cloud clusters by connected-domain search, taking the clusters whose feature point count within a single connected domain exceeds a threshold as obstacle point cloud clusters, and computing the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters;
an association feature calculation unit for transforming the coordinates of the millimeter-wave radar obstacle detections into the lidar coordinate system and computing the association feature value of a single lidar target from the distance and/or intersection-over-union between lidar targets and millimeter-wave radar targets;
a probability calculation unit for computing the probability that a point cloud cluster is a dust cluster and obtaining the cluster positions of non-dust obstacles.
Further, the point cloud slope feature α in the point cloud feature calculation unit is obtained from the angle between the straight line formed by two vertically adjacent obstacle feature points p1, p2 in the depth map and the xy plane of the lidar coordinate system, as in equation (3), where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l1,2 is their distance on the xy plane.
Further, the connected-domain search in the point cloud feature calculation unit includes searches in both the vertical and horizontal directions of the depth map; the Euclidean spatial geometric distance threshold l2_thresh between p1 and p2 is given by equation (5), and the connected-domain clustering conditions between p1 and p2 by equations (6) and (7), where k is the distance threshold coefficient, β is the angle formed by the lines connecting p1 and p2 to the geometric center of the lidar, max_d is the distance between the farther of p1 and p2 and the lidar origin, b is a preset fixed offset coefficient, and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the depth map.
Further, the probability calculation unit computes the probability dust_probability that a point cloud cluster belongs to a dust cluster with equation (19):
dust_probability = k1×obj_height + k2×obj_ratio + k3×obj_discrete + k4×obj_relevancy, #(19)
where k1, k2, k3 and k4 are the scaling coefficients of the different features; obj_height, obj_ratio and obj_discrete are the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters obtained in step S3; obj_relevancy is the association feature value of a single lidar target.
By adopting the above technical solution, the present invention has the following advantages:
1. It does not rely on deep learning, so no extra data annotation or additional computing resources are needed, and deployment and application are easy;
2. The sensors are lidar and millimeter-wave radar, which need no extra illumination and work normally at night, meeting the needs of night-time mine transport;
3. Multi-sensor data fusion effectively reduces false detections caused by dust, rain and snow, so obstacle detection remains highly stable and accurate in dusty mine scenes.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the sensor installation according to an embodiment of the present invention.
Fig. 2 is a flow chart of the obstacle detection method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the point cloud slope calculation according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of setting the Euclidean spatial geometric distance threshold according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of target association judgment according to an embodiment of the present invention.
Detailed Description
In the drawings, identical or similar reference numerals denote identical or similar elements or elements with identical or similar functions. The embodiments of the present invention are described in detail below with reference to the drawings.
In the description of the present invention, terms such as "center", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" indicate orientations or positional relations based on the drawings; they merely ease description and simplify the text, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and are therefore not to be understood as limiting the scope of protection of the present invention.
Where no conflict arises, the technical features of the various embodiments and implementations may be combined with one another and are not limited to the embodiment or implementation in which they appear.
The present invention is further described below with reference to the drawings and specific embodiments; it should be noted that only one optimized technical solution is used to elaborate the technical solution and design principle of the present invention, but the scope of protection is not limited thereto.
The following terms are used herein; their meanings are explained for ease of understanding. Those skilled in the art will appreciate that the terms may also have other names, and any other name should be regarded as consistent with the terms listed herein as long as its meaning is not departed from.
Traditional obstacle detection based on a single sensor often has serious problems in mine environments. Cameras are under-exposed in low light and image poorly, so camera-based detection performs badly at night with a high miss rate and cannot satisfy night-time transport in mines. Lidar has a short wavelength and is easily affected by suspended matter such as dust, rain and snow, producing noisy data, so lidar-based detection is prone to false positives in dusty mines. Millimeter-wave radar data are low-dimensional and carry little information, so stable detection based on millimeter-wave radar alone is difficult. The present invention therefore fuses lidar and millimeter-wave radar data, combining the richness of lidar data, the immunity of millimeter-wave radar to dust, and the immunity of both to lighting conditions, to achieve accurate and stable obstacle detection in mine transport scenes.
The multi-radar data fusion obstacle detection method provided by an embodiment of the present invention comprises:
Step S1, acquiring multi-radar data.
The multi-radar data comprise a lidar point cloud acquired by a lidar and obstacle detection data acquired by a millimeter-wave radar.
In this embodiment, the lidar and the millimeter-wave radar are mounted as shown in Fig. 1: the arrow points in the forward driving direction, both sensors are mounted at the front of the vehicle, the lidar above and the millimeter-wave radar below, with a small angular difference in the forward direction so that their detection areas largely overlap. For example, the millimeter-wave radar is mounted between 30 cm and 1 m above the ground, and the lidar 1 m to 2 m above the millimeter-wave radar, which facilitates the fusion of sensor information.
Step S2, converting the lidar point cloud in the multi-radar data into a matrixed point cloud depth map.
In this embodiment, the lidar point cloud is converted into a matrixed depth map with the projection model of equations (1) and (2), where the two computed values are the horizontal and vertical coordinates of the projected point on the depth map, θ0 and φ0 are the horizontal and vertical start angles of the matrixing, (x, y, z) are the three-dimensional coordinates of the laser point in the lidar coordinate system defined by the scan center and direction of the lidar, and Δθ, Δφ are the horizontal and vertical angular resolutions of the matrixing, which may be chosen with reference to the scan angular resolution of the lidar.
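The matrixing of equations (1) and (2) maps each laser point to a (row, col) cell from its horizontal and vertical angles. Since the formula images are not reproduced in the text, the sketch below is a minimal Python illustration under assumed conventions (atan2-based angles, nearest-cell quantization, keeping the nearest return per cell); the function and parameter names are hypothetical.

```python
import math

def project_to_depth_map(points, theta0, phi0, d_theta, d_phi):
    """Project lidar points (x, y, z) onto a matrixed depth-map grid.

    theta0/phi0 are the horizontal/vertical start angles and d_theta/d_phi
    the angular resolutions; the exact conventions are assumptions, not the
    patent's reproduced formulas.
    """
    cells = {}
    for (x, y, z) in points:
        theta = math.atan2(y, x)                   # horizontal angle
        phi = math.atan2(z, math.hypot(x, y))      # vertical angle
        col = round((theta - theta0) / d_theta)    # depth-map column
        row = round((phi - phi0) / d_phi)          # depth-map row
        r = math.sqrt(x * x + y * y + z * z)       # range stored in the cell
        # keep the nearest return when two points fall in the same cell
        if (row, col) not in cells or r < cells[(row, col)][0]:
            cells[(row, col)] = (r, (x, y, z))
    return cells
```

With a horizontal start angle of −π and 0.01 rad resolution, a point one metre straight ahead lands near the middle column of the map.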
Step S3, computing point cloud slope features on the depth map, obtaining non-ground feature point cloud clusters by connected-domain search, taking the non-ground clusters whose feature point count within a single connected domain exceeds the point count threshold point_num_obj_thresh as obstacle point cloud clusters, and computing the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters.
The threshold point_num_obj_thresh is set according to the size of the smallest obstacle to be detected.
In one embodiment, as shown in Fig. 3, the slope feature α in step S3 is obtained from the angle between the straight line formed by two vertically adjacent points p1, p2 in the depth map and the xy plane of the lidar coordinate system, as in equation (3):
α = arctan(|z1 − z2| / l1,2), #(3)
where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l1,2 is their distance on the xy plane, as in equation (4):
l1,2 = √((x1 − x2)² + (y1 − y2)²). #(4)
The obstacle feature points of an obstacle point cloud cluster are then obtained as follows: a threshold α0 is set from the maximum slope angle of the road surface and practical tuning experience; if α exceeds α0, the point is regarded as an obstacle feature point.
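Equations (3) and (4) can be checked with a short sketch (Python; the threshold value alpha0 = 30° is an illustrative assumption, since the text only says it is tuned from the road's maximum slope angle):

```python
import math

def slope_angle(p1, p2):
    """Angle in degrees between the line p1-p2 and the lidar xy plane, eq. (3)."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    l12 = math.hypot(x2 - x1, y2 - y1)   # distance on the xy plane, eq. (4)
    return math.degrees(math.atan2(abs(z2 - z1), l12))

def is_obstacle_point(p1, p2, alpha0_deg=30.0):
    """A point is an obstacle feature point when the slope exceeds alpha0."""
    return slope_angle(p1, p2) > alpha0_deg
```

A vertical pair of returns gives 90°, a flat pair 0°; the road-slope threshold separates the two regimes.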
In one embodiment, the connected-domain search in step S3 includes searches in both the vertical and horizontal directions of the depth map; the Euclidean spatial geometric distance threshold l2_thresh between p1 and p2 is given by equation (5), and the connected-domain clustering conditions between p1 and p2 by equations (6) and (7), where k is the distance threshold coefficient, usually tuned within the range 1 to 2, β is the angle formed by the lines connecting p1 and p2 to the geometric center of the lidar, max_d is the distance between the farther of p1 and p2 and the geometric center of the lidar, b is a preset fixed offset coefficient, usually set slightly larger than the lidar measurement accuracy, and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the depth map.
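The formulas (5)–(7) are not reproduced in the text, but the quantities named — coefficient k, angle β between the rays to p1 and p2, the farther range max_d, and offset b — suggest an adaptive threshold that grows with range. The sketch below assumes the common form l2_thresh = k·max_d·sin β + b; it is an illustration under that assumption, not the patent's exact equation.

```python
import math

def l2_threshold(p1, p2, k=1.5, b=0.05):
    """Assumed adaptive Euclidean distance threshold between two depth-map
    neighbours: k * max_d * sin(beta) + b, where beta is the angle between
    the rays from the lidar origin to p1 and p2, and max_d the farther range."""
    d1 = math.sqrt(sum(c * c for c in p1))
    d2 = math.sqrt(sum(c * c for c in p2))
    dot = sum(a * c for a, c in zip(p1, p2))
    beta = math.acos(max(-1.0, min(1.0, dot / (d1 * d2))))
    max_d = max(d1, d2)
    return k * max_d * math.sin(beta) + b

def same_component(p1, p2, k=1.5, b=0.05):
    """Connectivity test: actual gap below the adaptive threshold."""
    return math.dist(p1, p2) < l2_threshold(p1, p2, k, b)
```

Two returns 10 cm apart at 10 m range stay connected, while a 10 m radial gap along the same ray does not.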
In one embodiment, the height above ground of a cluster in step S3 is the distance from the lowest point of the cluster to the ground plane; in this method, the ground plane is fitted by combining random sample consensus (RANSAC) with least squares. RANSAC repeatedly takes three points to determine a plane equation, counts as inliers of that iteration all points whose distance to the plane is below a threshold, repeats the sampling and model determination num_iter times according to the required model accuracy, and finally keeps the plane equation with the most inliers as the consensus model.
Let A, B and C be the x, y and z coefficients of the ground plane sought by least squares, so that in the lidar coordinate system the plane can be written as Ax + By + Cz + 1 = 0. The error between a point and the model is then f(x, y, z) = Ax + By + Cz + 1, and the sum of squared errors over the point cloud is S = Σᵢ (Axᵢ + Byᵢ + Czᵢ + 1)², where (xᵢ, yᵢ, zᵢ) are the coordinates of the i-th point in the lidar coordinate system.
By the properties of derivatives, minimizing S reduces to solving equation (8):
∂S/∂A = ∂S/∂B = ∂S/∂C = 0. #(8)
Substituting the inlier set of the final RANSAC model yields equation (9):
Σ xᵢ(Axᵢ + Byᵢ + Czᵢ + 1) = 0, Σ yᵢ(Axᵢ + Byᵢ + Czᵢ + 1) = 0, Σ zᵢ(Axᵢ + Byᵢ + Czᵢ + 1) = 0, #(9)
which simplifies to the normal equations (10):
[Σxᵢ² Σxᵢyᵢ Σxᵢzᵢ; Σxᵢyᵢ Σyᵢ² Σyᵢzᵢ; Σxᵢzᵢ Σyᵢzᵢ Σzᵢ²] · [A; B; C] = −[Σxᵢ; Σyᵢ; Σzᵢ]. #(10)
The point with the smallest z value in the cluster is then found and substituted into equation (11) to obtain the height above ground:
obj_height = |Ax + By + Cz + 1| / √(A² + B² + C²). #(11)
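The least-squares step of equations (8)–(11) can be written out directly: accumulate the normal-equation sums over the RANSAC inliers, solve the 3×3 system for A, B, C, then evaluate the point-to-plane distance at the cluster's lowest point. A self-contained Python sketch (the RANSAC inlier selection is omitted; Cramer's rule is used for the small solve):

```python
def fit_plane_lsq(points):
    """Least-squares fit of A*x + B*y + C*z + 1 = 0 via normal equations (10)."""
    sxx = sxy = sxz = syy = syz = szz = sx = sy = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sxz += x * z
        syy += y * y; syz += y * z; szz += z * z
        sx += x; sy += y; sz += z
    m = [[sxx, sxy, sxz], [sxy, syy, syz], [sxz, syz, szz]]
    rhs = [-sx, -sy, -sz]
    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det3(m)
    coeffs = []
    for i in range(3):          # Cramer's rule, one column at a time
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = rhs[r]
        coeffs.append(det3(mi) / d)
    return coeffs               # [A, B, C]

def height_above_ground(point, plane):
    """Point-to-plane distance, equation (11)."""
    a, b, c = plane
    x, y, z = point
    return abs(a * x + b * y + c * z + 1.0) / (a * a + b * b + c * c) ** 0.5
```

Points on the plane z = −1 recover A = B = 0, C = 1, and a point half a metre above the origin is 1.5 m above that ground plane.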
The feature point cloud connectivity obj_ratio in step S3 is the ratio of the number point_num_no_obj of non-cluster feature points near a cluster to the number point_num_obj of feature points of the obstacle itself, as in equation (12); it expresses the fraction of feature points near the current cluster that cannot be correctly reached by the connected search, i.e. the proportion of noise points near the cluster.
The points near a cluster are searched by combining a depth-map search with Euclidean clustering: the points inside the rectangle spanned by the minimum and maximum depth-map coordinates of the cluster are collected, Euclidean clustering then selects those belonging to the same cluster as the current one, and points already contained in other clusters are removed; the remainder are the non-cluster feature points near the cluster. The connectivity is then computed as:
obj_ratio = point_num_no_obj / point_num_obj, #(12)
where point_num_no_obj is the number of non-cluster feature points near the cluster and point_num_obj the number of feature points of the obstacle itself.
The surface dispersion in step S3 expresses how continuous and smooth the cluster surface is. The dispersion point_discrete of a single point of a cluster is computed with equation (13):
point_discrete = huber(|2×x_mid − x_left − x_right|), #(13)
where x_left, x_mid and x_right are the x-axis values of three adjacent points on the depth map. The cluster surface dispersion obj_discrete can then be expressed as equation (15).
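Equation (13) can be sketched in a few lines; the Huber parameter delta and the aggregation of equation (15) as a mean over the cluster are assumptions here, since those details are not reproduced in the text:

```python
def huber(r, delta=0.1):
    """Huber function: quadratic near zero, linear in the tails (assumed delta)."""
    return 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)

def point_discrete(x_left, x_mid, x_right):
    """Per-point surface dispersion, equation (13): a second-difference
    magnitude passed through the Huber function."""
    return huber(abs(2.0 * x_mid - x_left - x_right))

def obj_discrete(x_values):
    """Cluster surface dispersion: taken here as the mean of the per-point
    values; the exact aggregation of equation (15) is not reproduced."""
    vals = [point_discrete(x_values[i - 1], x_values[i], x_values[i + 1])
            for i in range(1, len(x_values) - 1)]
    return sum(vals) / len(vals)
```

A perfectly flat surface scores 0; an outlier point raises the dispersion, which is the signature of loose dust.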
Step S4, transforming the coordinates of the millimeter-wave radar obstacle detections in the multi-radar data into the lidar coordinate system, and computing the association feature value of a single lidar target from the distance and/or intersection-over-union (IoU).
The association feature calculation in step S4 comprises two parts: target-level association judgment and association feature value calculation. Target-level association uses a one-to-many association between lidar targets and millimeter-wave radar targets as illustrated in the figure, with two strategies: overlap judgment and distance judgment. If a lidar target and a millimeter-wave radar target overlap, they are directly regarded as associated; otherwise an association distance threshold connect_distance_thresh is set, and if the distance between the millimeter-wave radar target and the lidar target is smaller than this threshold, the two are regarded as successfully associated. The threshold is usually set to two to three times the size of the smallest obstacle to be detected.
For a successfully associated lidar/millimeter-wave radar pair, the association feature value single_relevancy of the single lidar target is computed from the intersection-over-union between the two targets as in equation (16), where rectA is the lidar target rectangle, rectB the millimeter-wave radar target rectangle, ∩ denotes intersection, ∪ denotes union, and rectC is the smallest rectangle enclosing rectA and rectB.
In one embodiment, a lidar target may have several millimeter-wave radar targets associated with it, so the association feature value obj_relevancy of a single lidar target can be expressed uniformly as equation (17), where num_obj_rele is the number of millimeter-wave radar targets successfully associated with the current lidar target and single_relevancy_n is the n-th association feature value. A lidar target with no successfully associated millimeter-wave radar target has an association feature value of 0.
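Equation (16) is not reproduced in the text; because it involves the intersection, the union and the enclosing box rectC, a generalized-IoU style score is a natural reading, sketched below under that assumption (rectangles are axis-aligned (x1, y1, x2, y2) tuples; names are illustrative):

```python
def rect_area(r):
    x1, y1, x2, y2 = r
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def intersection(a, b):
    x1 = max(a[0], b[0]); y1 = max(a[1], b[1])
    x2 = min(a[2], b[2]); y2 = min(a[3], b[3])
    return (x1, y1, x2, y2) if x2 > x1 and y2 > y1 else None

def enclosing(a, b):
    """Smallest rectangle rectC enclosing rectA and rectB."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def single_relevancy(rect_a, rect_b):
    """Assumed generalized-IoU association score for one lidar/radar pair:
    IoU minus the fraction of rectC not covered by the union."""
    inter = intersection(rect_a, rect_b)
    inter_area = rect_area(inter) if inter else 0.0
    union_area = rect_area(rect_a) + rect_area(rect_b) - inter_area
    c_area = rect_area(enclosing(rect_a, rect_b))
    return inter_area / union_area - (c_area - union_area) / c_area
```

Identical boxes score 1; disjoint boxes score negatively, falling as the gap widens, which keeps the score informative even when the two sensors' boxes do not overlap.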
Step S5, computing the probability that a point cloud cluster is a dust cluster, and obtaining the cluster positions of non-dust obstacles.
In one implementation, the probability dust_probability that a cluster belongs to a dust cluster is computed with equation (19):
dust_probability = k1×obj_height + k2×obj_ratio + k3×obj_discrete + k4×obj_relevancy, #(19)
where k1, k2, k3 and k4 are the scaling coefficients of the different features, set according to the importance of each feature in dust judgment so that the final probability value is scaled to around 1; obj_height, obj_ratio and obj_discrete are the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters obtained in step S3; obj_relevancy is the association feature value of a single lidar target.
It should be noted that the linear function in equation (19) may also be replaced by a quadratic or cubic function to unify the features.
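Equation (19) is a plain weighted sum, which a few lines make concrete; the coefficient values below are illustrative placeholders only — the patent sets k1..k4 by each feature's importance so that the result is scaled to around 1:

```python
def dust_probability(obj_height, obj_ratio, obj_discrete, obj_relevancy,
                     k=(0.3, 0.3, 0.3, -0.5)):
    """Weighted feature sum of equation (19). The default coefficients are
    hypothetical; a negative k4 reflects that strong radar association
    argues against the cluster being dust."""
    k1, k2, k3, k4 = k
    return (k1 * obj_height + k2 * obj_ratio
            + k3 * obj_discrete + k4 * obj_relevancy)

def is_real_obstacle(prob):
    """Clusters with dust_probability below 1 are kept as real obstacles."""
    return prob < 1.0
```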
An embodiment of the present invention also provides a multi-radar data fusion obstacle detection system comprising a radar data acquisition unit, a point cloud depth map conversion unit, a point cloud feature calculation unit, an association feature calculation unit and a probability calculation unit, wherein:
the radar data acquisition unit acquires multi-radar data;
the point cloud depth map conversion unit converts the lidar point cloud in the multi-radar data into a matrixed point cloud depth map;
the point cloud feature calculation unit computes point cloud slope features on the depth map, obtains non-ground feature point cloud clusters by connected-domain search, takes the clusters whose feature point count within a single connected domain exceeds a threshold as obstacle point cloud clusters, and computes the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters;
the association feature calculation unit transforms the coordinates of the millimeter-wave radar obstacle detections into the lidar coordinate system and computes the association feature value of a single lidar target from the distance and/or intersection-over-union;
the probability calculation unit computes the probability that a point cloud cluster is a dust cluster and obtains the cluster positions of non-dust obstacles.
The slope feature in the point cloud feature calculation unit is obtained from the angle α between the straight line formed by two vertically adjacent points p1, p2 in the depth map and the xy plane of the lidar coordinate system, as in equation (3), where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l1,2 is their distance on the xy plane.
The connected-domain search in the point cloud feature calculation unit includes searches in the vertical and horizontal directions of the depth map; the Euclidean spatial geometric distance threshold l2_thresh between p1 and p2 is given by equation (5), and the connected-domain clustering conditions between two obstacle feature points by equations (6) and (7), where k is the distance threshold coefficient, β is the angle formed by the lines connecting p1 and p2 to the lidar origin, max_d is the distance between the farther of p1 and p2 and the lidar origin, b is a preset fixed offset coefficient, and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the depth map.
The probability calculation unit computes the probability dust_probability that a cluster belongs to a dust cluster with equation (19):
dust_probability = k1×obj_height + k2×obj_ratio + k3×obj_discrete + k4×obj_relevancy, #(19)
where k1, k2, k3 and k4 are the scaling coefficients of the different features; obj_height, obj_ratio and obj_discrete are the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters obtained in step S3; obj_relevancy is the association feature value of a single lidar target.
In step S7, the clusters with dust_probability smaller than 1 are output as real obstacle point cloud clusters.
Existing inventions either rely on a single sensor and lack stability, cannot cope effectively with dust, or cannot work in night-time environments. The present invention therefore fuses the different characteristics of lidar and millimeter-wave radar and proposes an obstacle detection method and system for dust scenes based on multi-radar data fusion; through feature calculation and fusion of sensor echo characteristics it achieves stable and accurate obstacle detection in dust scenes and also works normally at night, which is of great significance for unmanned driving in mining scenarios as well as for autonomous driving in rainy, snowy and dusty urban environments.
Besides mining dump trucks, the present invention is also applicable to other carrier equipment or engineering vehicles such as trucks and excavators; besides mines, it can also be applied in other environments such as cities and villages, where it likewise handles dust, rain, snow and similar scenarios.
Finally, it should be pointed out that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Those of ordinary skill in the art will understand that the technical solutions recorded in the foregoing embodiments may be modified, or some of their technical features equivalently replaced; such modifications or replacements do not make the essence of the corresponding solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

  1. A multi-radar data fusion obstacle detection method, characterized by comprising:
    step S1, acquiring multi-radar data;
    step S2, converting the lidar point cloud in the multi-radar data into a matrixed point cloud depth map;
    step S3, computing point cloud slope features on the depth map, obtaining non-ground feature point cloud clusters by connected-domain search, taking the clusters whose feature point count within a single connected domain exceeds a threshold as obstacle point cloud clusters, and computing the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters;
    step S4, transforming the coordinates of the millimeter-wave radar obstacle detections in the multi-radar data into the lidar coordinate system, and computing the association feature value of a single lidar target from the distance and/or intersection-over-union between lidar targets and millimeter-wave radar targets;
    step S5, computing the probability that a point cloud cluster is a dust cluster, and obtaining the cluster positions of non-dust obstacles;
    wherein computing the association feature value in step S4 comprises:
    step S41, judging whether a lidar target and a millimeter-wave radar target are associated, and if so, proceeding to step S42;
    step S42, computing the association feature value of the single lidar target with equation (16) or (17),
    where single_relevancy and obj_relevancy both denote the association feature value of a single lidar target, rectA is the lidar target rectangle, rectB the millimeter-wave radar target rectangle, ∩ denotes intersection, ∪ denotes union, rectC is the smallest rectangle enclosing rectA and rectB, num_obj_rele is the number of millimeter-wave radar targets successfully associated with the current lidar target, single_relevancy_n is the n-th association feature value, and max_single_relevancy is the maximum association feature value of the current lidar target.
  2. The multi-radar data fusion obstacle detection method of claim 1, characterized in that the point cloud slope feature α in step S3 is obtained from the angle between the straight line formed by two vertically adjacent obstacle feature points p1, p2 in the depth map and the xy plane of the lidar coordinate system, as in equation (3), where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l1,2 is their distance on the xy plane.
  3. The multi-radar data fusion obstacle detection method of claim 1, characterized in that the connected-domain search in step S3 includes searches in the vertical and horizontal directions of the depth map; the Euclidean spatial geometric distance threshold l2_thresh between p1 and p2 is given by equation (5), and the connected-domain clustering conditions between p1 and p2 by equations (6) and (7), where k is the distance threshold coefficient, β is the angle formed by the lines connecting p1 and p2 to the geometric center of the lidar, max_d is the distance between the farther of p1 and p2 and the lidar origin, b is a preset fixed offset coefficient, and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the depth map.
  4. The multi-radar data fusion obstacle detection method of claim 1, characterized in that step S41 specifically comprises:
    step S411, judging whether the lidar target and the millimeter-wave radar target overlap; if so, the two are directly regarded as associated; if not, proceeding to step S412;
    step S412, judging whether the distance between the millimeter-wave radar target and the lidar target is smaller than the association distance threshold connect_distance_thresh; if so, the two targets are regarded as successfully associated.
  5. The multi-radar data fusion obstacle detection method of claim 1, characterized in that in step S5 the probability dust_probability that a point cloud cluster belongs to a dust cluster is computed with equation (19):
    dust_probability = k1×obj_height + k2×obj_ratio + k3×obj_discrete + k4×obj_relevancy, #(19)
    where k1, k2, k3 and k4 are the scaling coefficients of the different features; obj_height, obj_ratio and obj_discrete are the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters obtained in step S3; obj_relevancy is the association feature value of a single lidar target.
  6. A multi-radar data fusion obstacle detection system, characterized by comprising:
    a radar data acquisition unit for acquiring multi-radar data;
    a point cloud depth map conversion unit for converting the lidar point cloud in the multi-radar data into a matrixed point cloud depth map;
    a point cloud feature calculation unit for computing point cloud slope features on the depth map, obtaining non-ground feature point cloud clusters by connected-domain search, taking the clusters whose feature point count within a single connected domain exceeds a threshold as obstacle point cloud clusters, and computing the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters;
    an association feature calculation unit for transforming the coordinates of the millimeter-wave radar obstacle detections into the lidar coordinate system and computing the association feature value of a single lidar target from the distance and/or intersection-over-union between lidar targets and millimeter-wave radar targets;
    a probability calculation unit for computing the probability that a point cloud cluster is a dust cluster and obtaining the cluster positions of non-dust obstacles;
    wherein the association feature calculation unit computes the association feature value as follows: it judges whether a lidar target and a millimeter-wave radar target are associated, and if so, computes the association feature value of the single lidar target with equation (16) or (17),
    where single_relevancy and obj_relevancy both denote the association feature value of a single lidar target, rectA is the lidar target rectangle, rectB the millimeter-wave radar target rectangle, ∩ denotes intersection, ∪ denotes union, rectC is the smallest rectangle enclosing rectA and rectB, num_obj_rele is the number of millimeter-wave radar targets successfully associated with the current lidar target, single_relevancy_n is the n-th association feature value, and max_single_relevancy is the maximum association feature value of the current lidar target.
  7. The multi-radar data fusion obstacle detection system of claim 6, characterized in that the point cloud slope feature α in the point cloud feature calculation unit is obtained from the angle between the straight line formed by two vertically adjacent obstacle feature points p1, p2 in the depth map and the xy plane of the lidar coordinate system, as in equation (3), where (x1, y1, z1) and (x2, y2, z2) are the coordinates of p1 and p2 in the lidar coordinate system, and l1,2 is their distance on the xy plane.
  8. The multi-radar data fusion obstacle detection system of claim 6, characterized in that the connected-domain search in the point cloud feature calculation unit includes searches in the vertical and horizontal directions of the depth map; the Euclidean spatial geometric distance threshold l2_thresh between p1 and p2 is given by equation (5), and the connected-domain clustering conditions between p1 and p2 by equations (6) and (7), where k is the distance threshold coefficient, β is the angle formed by the lines connecting p1 and p2 to the geometric center of the lidar, max_d is the distance between the farther of p1 and p2 and the lidar origin, b is a preset fixed offset coefficient, and (row1, col1), (row2, col2) are the coordinates of p1 and p2 in the depth map.
  9. The multi-radar data fusion obstacle detection system of claim 6, characterized in that the probability calculation unit computes the probability dust_probability that a point cloud cluster belongs to a dust cluster with equation (19):
    dust_probability = k1×obj_height + k2×obj_ratio + k3×obj_discrete + k4×obj_relevancy, #(19)
    where k1, k2, k3 and k4 are the scaling coefficients of the different features; obj_height, obj_ratio and obj_discrete are the height above ground, connectivity and surface dispersion of the non-ground feature point cloud clusters obtained by the point cloud feature calculation unit; obj_relevancy is the association feature value of a single lidar target.
PCT/CN2023/109851 2022-06-01 2023-07-28 Obstacle detection method and system based on multi-radar data fusion WO2023232165A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210623207.9 2022-06-01
CN202210623207.9A CN114994684B (zh) 2022-06-01 2022-06-01 Obstacle detection method and system in dust scenes based on multi-radar data fusion

Publications (1)

Publication Number Publication Date
WO2023232165A1 true WO2023232165A1 (zh) 2023-12-07

Family

ID=83031757

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/109851 WO2023232165A1 (zh) 2022-06-01 2023-07-28 多雷达数据融合的障碍物检测方法与系统

Country Status (2)

Country Link
CN (1) CN114994684B (zh)
WO (1) WO2023232165A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117789198A (zh) * 2024-02-28 2024-03-29 上海几何伙伴智能驾驶有限公司 Method for implementing point cloud degradation detection based on 4D millimeter-wave imaging radar
CN117872354A (zh) * 2024-03-11 2024-04-12 陕西欧卡电子智能科技有限公司 Multi-millimeter-wave-radar point cloud fusion method, apparatus, device and medium
CN117872310A (zh) * 2024-03-08 2024-04-12 陕西欧卡电子智能科技有限公司 Radar-based water surface target tracking method, apparatus, device and medium
CN118032605A (zh) * 2024-04-11 2024-05-14 北京路凯智行科技有限公司 Method and system for detecting dust on mine road surfaces

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114994684B (zh) * 2022-06-01 2023-05-12 湖南大学无锡智能控制研究院 Obstacle detection method and system in dust scenes based on multi-radar data fusion
CN115453570A (zh) * 2022-09-13 2022-12-09 北京踏歌智行科技有限公司 Multi-feature-fusion dust filtering method for mining areas
CN116071550B (zh) * 2023-02-09 2023-10-20 安徽海博智能科技有限责任公司 Lidar dust point cloud filtering method
CN115877373B (zh) * 2023-02-20 2023-04-28 上海几何伙伴智能驾驶有限公司 Method for designing point cloud radar clustering parameters by combining lidar information
CN116125466B (zh) * 2023-03-02 2023-07-04 武汉理工大学 Method, apparatus and electronic device for detecting hidden threatening articles carried by ship personnel

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111537994A (zh) * 2020-03-24 2020-08-14 江苏徐工工程机械研究院有限公司 Obstacle detection method for unmanned mining trucks
CN111880196A (zh) * 2020-06-29 2020-11-03 安徽海博智能科技有限责任公司 Anti-interference method and system for unmanned mining vehicles, and computer device
CN112083441A (zh) * 2020-09-10 2020-12-15 湖南大学 Obstacle detection method and system based on deep fusion of lidar and millimeter-wave radar
CN113296120A (zh) * 2021-05-24 2021-08-24 福建盛海智能科技有限公司 Obstacle detection method and terminal
CN114994684A (zh) * 2022-06-01 2022-09-02 湖南大学无锡智能控制研究院 Obstacle detection method and system in dust scenes based on multi-radar data fusion

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3525000B1 (en) * 2018-02-09 2021-07-21 Bayerische Motoren Werke Aktiengesellschaft Methods and apparatuses for object detection in a scene based on lidar data and radar data of the scene
CN109271944B (zh) * 2018-09-27 2021-03-12 百度在线网络技术(北京)有限公司 Obstacle detection method and apparatus, electronic device, vehicle and storage medium
CN110244322B (zh) * 2019-06-28 2023-04-18 东南大学 Environment perception system and method for road construction robots based on multi-source sensors
CN113192091B (zh) * 2021-05-11 2021-10-22 紫清智行科技(北京)有限公司 Long-range target perception method based on lidar-camera fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111537994A (zh) * 2020-03-24 2020-08-14 江苏徐工工程机械研究院有限公司 Obstacle detection method for unmanned mining trucks
CN111880196A (zh) * 2020-06-29 2020-11-03 安徽海博智能科技有限责任公司 Anti-interference method and system for unmanned mining vehicles, and computer device
CN112083441A (zh) * 2020-09-10 2020-12-15 湖南大学 Obstacle detection method and system based on deep fusion of lidar and millimeter-wave radar
CN113296120A (zh) * 2021-05-24 2021-08-24 福建盛海智能科技有限公司 Obstacle detection method and terminal
CN114994684A (zh) * 2022-06-01 2022-09-02 湖南大学无锡智能控制研究院 Obstacle detection method and system in dust scenes based on multi-radar data fusion

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117789198A (zh) * 2024-02-28 2024-03-29 上海几何伙伴智能驾驶有限公司 Method for implementing point cloud degradation detection based on 4D millimeter-wave imaging radar
CN117789198B (zh) * 2024-02-28 2024-05-14 上海几何伙伴智能驾驶有限公司 Method for implementing point cloud degradation detection based on 4D millimeter-wave imaging radar
CN117872310A (zh) * 2024-03-08 2024-04-12 陕西欧卡电子智能科技有限公司 Radar-based water surface target tracking method, apparatus, device and medium
CN117872354A (zh) * 2024-03-11 2024-04-12 陕西欧卡电子智能科技有限公司 Multi-millimeter-wave-radar point cloud fusion method, apparatus, device and medium
CN117872354B (zh) * 2024-03-11 2024-05-31 陕西欧卡电子智能科技有限公司 Multi-millimeter-wave-radar point cloud fusion method, apparatus, device and medium
CN118032605A (zh) * 2024-04-11 2024-05-14 北京路凯智行科技有限公司 Method and system for detecting dust on mine road surfaces

Also Published As

Publication number Publication date
CN114994684A (zh) 2022-09-02
CN114994684B (zh) 2023-05-12

Similar Documents

Publication Publication Date Title
WO2023232165A1 (zh) Obstacle detection method and system based on multi-radar data fusion
CN110119698B (zh) 用于确定对象状态的方法、装置、设备和存储介质
Qian et al. Rf-lio: Removal-first tightly-coupled lidar inertial odometry in high dynamic environments
CN108868268A (zh) 基于点到面距离和互相关熵配准的无人车位姿估计方法
CN111413983A (zh) 一种无人驾驶车辆的环境感知方法及控制端
Kuramoto et al. Mono-camera based 3D object tracking strategy for autonomous vehicles
CN111735445A (zh) 融合单目视觉与imu的煤矿巷道巡检机器人系统及导航方法
CN113850102A (zh) 基于毫米波雷达辅助的车载视觉检测方法及系统
WO2023283987A1 (zh) 无人系统的传感器安全性检测方法、设备及存储介质
Heng Automatic targetless extrinsic calibration of multiple 3D LiDARs and radars
Bai et al. Stereovision based obstacle detection approach for mobile robot navigation
Cao et al. Accurate localization of autonomous vehicles based on pattern matching and graph-based optimization in urban environments
TW202020734A (zh) 載具、載具定位系統及載具定位方法
Cheng et al. Underwater localization and mapping based on multi-beam forward looking sonar
CN115423958A (zh) 一种基于视觉三维重建的矿区可行驶区域边界更新方法
Wu et al. Environment perception technology for intelligent robots in complex environments: A Review
TWI680898B (zh) 近距離障礙物之光達偵測裝置及其方法
Tian et al. Vision-based mapping of lane semantics and topology for intelligent vehicles
CN113759385A (zh) 一种激光雷达和相机融合测距方法及系统
Linfeng et al. One estimation method of road slope and vehicle distance
Liu et al. Vehicle detection and tracking with 2d laser range finders
Zou et al. Active pedestrian detection for excavator robots based on multi-sensor fusion
Li et al. A new visual sensing system for motion state estimation of lateral localization of intelligent vehicles
Jin et al. A Hybrid Model for Object Detection Based on Feature-Level Camera-Radar Fusion in Autonomous Driving
Chen et al. Research on localization method of driverless car based on fusion of GNSS and laser SLAM

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23815341

Country of ref document: EP

Kind code of ref document: A1