CN116977970A - Road drivable area detection method based on fusion of laser radar and millimeter wave radar - Google Patents


Info

Publication number
CN116977970A
Authority
CN
China
Prior art keywords
point cloud
laser radar
gray
road
fusion
Prior art date
Legal status
Pending
Application number
CN202311017989.2A
Other languages
Chinese (zh)
Inventor
蒋建春
王章琦
曾素华
苏云龙
余浩
孙裕琛
夏云俊
Current Assignee
Chongqing University of Posts and Telecommunications
Original Assignee
Chongqing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN202311017989.2A
Publication of CN116977970A
Legal status: Pending


Classifications

    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06N 3/045: Combinations of networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/048: Activation functions
    • G06V 10/143: Sensing or illuminating at different wavelengths
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/806: Fusion of extracted features (combining data from various sources at the sensor, preprocessing, feature extraction or classification level)
    • G06V 10/82: Image or video recognition or understanding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a road drivable area detection method based on the fusion of a laser radar (lidar) and a millimeter-wave radar, and belongs to the fields of vehicle-road cooperation and intelligent traffic. The method processes the lidar point cloud data with an adaptive DBSCAN clustering algorithm, improving the intra-class consistency and inter-class separation of the clustering results. An electronic fence for the drivable road area is constructed with a local adaptive threshold segmentation method based on the Otsu method, avoiding the poor segmentation of a traditional fixed global threshold, which cannot account for the characteristics of each point cloud image. Sector encoding converts the point cloud coordinate system into a polar coordinate system to express the position and direction information of the points, and the choice of radii handles the differing sparsity of points at different distances. Finally, the two sensors are fused in a pseudo-image fusion scheme, with an attention mechanism performing the fusion and dynamically adjusting the weights to improve environmental adaptability.

Description

Road drivable area detection method based on fusion of laser radar and millimeter wave radar
Technical Field
The invention belongs to the fields of vehicle-road cooperation and intelligent traffic, and relates to a road drivable area detection method based on the fusion of a laser radar and a millimeter-wave radar.
Background
In recent years, the rapid development of automobile-related fields such as new-energy technology and computer communication technology has pushed traditional automobiles toward intelligence, electrification, and networking. As living standards rise and expectations of vehicle performance keep increasing, people pay ever more attention to driving safety and smoothness, and perception of the road environment is one of the keys to guaranteeing both.
Road environment perception is a key link in the mature development of vehicle-road cooperation technology. The environment perception layer mainly uses various sensors to provide vehicles with traffic environment information, including lane markings, signal lights, and signs, as well as obstacle contours, positions, and the relative distances between vehicles and obstacles. It is the first challenge of vehicle-road cooperation: it supplies the basis for global path planning, driving behavior decisions, and motion planning of intelligent vehicles, and, combined with the low-level actuators, realizes control of the vehicle. Perception hardware modeled on biological senses, whether camera or radar, inevitably produces blind zones when mounted on the vehicle; however intelligent the system, it can make fast and accurate decisions only within its field of view. Situations that are hard for a human driver to avoid, such as a pedestrian darting out from occlusion (the "ghost probe" phenomenon), are equally hard for single-vehicle intelligence. In severe weather and lane-changing scenarios, a single vehicle's insufficient perception of the environment easily leads to traffic accidents.
A roadside camera can handle some object recognition, classification, and detection functions well, but because it lacks depth information it cannot accurately determine the position and distance of a vehicle or obstacle, and thus cannot establish an accurate road drivable area. A fusion strategy of lidar and millimeter-wave radar is therefore used when establishing the roadside drivable area: the data of the two sensors complement each other and improve detection accuracy. The lidar provides high-precision point cloud data for detecting the shape and position of objects, while the millimeter-wave radar provides high-precision speed and distance measurements for detecting their motion state. Combining the two detects obstacles, pedestrians, and so on more accurately and improves the robustness of the roadside drivable area in different environments; in rain and snow, for example, the lidar may be disturbed and lose precision while the millimeter-wave radar adapts better.
Road environment perception, as a general-purpose basic technology, plays an important role in autonomous driving, vehicle-road cooperation, and related directions. However, vehicle-mounted perception of the road drivable area has so far remained the mainstream solution. In the future, autonomous vehicles will need to perceive and acquire the road drivable area in real time, and the limitations of single-vehicle perception mean the drivable area cannot be acquired accurately, so the vehicle cannot effectively generate a local path. In addition, in bad weather and lane-changing scenarios, a single vehicle's insufficient perception of the environment easily leads to traffic accidents.
Therefore, to overcome the insufficiency of single-vehicle perception, meet the needs of vehicle-road cooperation, provide road environment perception services free from environmental constraints, guarantee travel safety, and advance cooperative development, a road drivable area detection technology based on the fusion of lidar and millimeter-wave radar is needed, providing accurate drivable area information for intelligent connected-traffic scenarios and thereby enabling vehicle-road cooperation and subsequent unmanned driving.
Disclosure of Invention
In view of the above, the present invention aims to provide a road drivable area detection method based on the fusion of a laser radar and a millimeter-wave radar, which improves the recognition rate of the road drivable area and the adaptability to different weather environments by fusing real-time data of the two sensors.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a road drivable area detection method based on fusion of a laser radar and a millimeter wave radar comprises the following steps:
s1, clustering all points according to the condition that the single frame data of laser point clouds at the road side are too much, increasing the system burden, judging whether the same target is affiliated to the problem that the same target is difficult to adapt to different scene areas or not by using the space density of the point clouds in the widely applied clustering, and designing a road area point cloud extraction and self-adaptive clustering method to extract and cluster the point cloud data of the laser radar;
then, a lane line and a road surface in the point cloud data are segmented by adopting an improved local self-adaptive threshold segmentation method based on a discipline method, so that an electronic fence is constructed and obtained, and the method comprises the following steps: firstly, dividing each scanning line in a ground point cloud picture into neighborhood blocks with equal size, then traversing all the neighborhood blocks, and finally determining the optimal threshold value of each neighborhood block by a discipline method according to the reflection intensity value distribution of different neighborhood blocks so as to construct the electronic fence of the road area. Therefore, the relation between the points and the surrounding can be improved, the threshold selection accuracy is improved, more accurate road boundaries are obtained, and then a reasonable electronic fence is constructed.
S2. A single-size voxel cannot adapt to the uneven point density of the point cloud at different distances, consumes excessive computing and memory resources, reduces the real-time performance of network detection, and does not fuse well with the velocities of objects recognized by the millimeter-wave radar. A sector-based point cloud and millimeter-wave encoding method is therefore designed, replacing pillar encoding with sector encoding.
S3. Roadside equipment has limited computing capacity; with target-level fusion, bad weather distorts the lidar point cloud and hence the detection, and the differences between sensor data cannot be accommodated. An attention fusion model based on the PointPillars feature layer is therefore designed: drawing on the attention mechanism, the importance of different features in the current scene is computed, and the features are weighted and fused according to that importance to obtain the fusion features; the final fusion result is obtained by borrowing the idea of the residual module. Detection relies mainly on the lidar, with the millimeter-wave radar as auxiliary, and the fusion weights are adjusted dynamically, improving the environmental adaptability of the model.
Further, in step S1, the extraction and clustering proceed as follows: first, the raw lidar point cloud data are filtered to extract the effective point cloud, and the region of interest is selected through an adaptive DBSCAN clustering algorithm; the parameters of the clustering algorithm are corrected by a sigmoid function. The corrected parameter is the growth radius of the cluster, with the correction relation

$E'_{ps} = E_{ps} \times f(r_i)$

where $E'_{ps}$ represents the corrected radius parameter and $E_{ps}$ the initial radius parameter, and the correction coefficient is the sigmoid

$f(r_i) = \dfrac{k_r}{1 + e^{-b_r (r_i - r_0)}} + \varepsilon_r$

where $k_r$, $b_r$, $\varepsilon_r$ and $r_0$ are all model parameters of the algorithm, the optimal $f(r_i)$ being obtained by enumerating different parameter values; $r_i$ represents the seed point distance used when searching for points of the same cluster.
Further, in step S1, the improved local adaptive threshold segmentation method based on the Otsu method is specifically: classify the point cloud data obtained after extraction and clustering by scan line, and convert the reflection intensity data within each scan line to gray values; compute the global mean of the gray values over all scan lines, find the gray values greater than this global mean, and take their average to obtain the intra-class secondary gray mean. The intra-class secondary gray mean serves as the initial threshold defining a threshold selection interval; within this interval a threshold th divides the image into a foreground image and a background image, and the inter-class variance is computed from the probabilities and mean gray values of the foreground and background images together with the global gray mean of the gray values greater than the intra-class secondary gray mean. The threshold th corresponding to the maximum inter-class variance is marked as the optimal threshold; data with gray values greater than the optimal threshold are marked as lane line point cloud data, and data with gray values smaller than the optimal threshold as road surface point cloud data.
The inter-class variance is calculated as

$\sigma^2 = P_1(\mu_1 - \mu_G)^2 + P_2(\mu_2 - \mu_G)^2$

where $\sigma^2$ represents the inter-class variance, $P_1$ the probability of the background image, $P_2$ the probability of the foreground image, $\mu_1$ the mean gray value of the background image, $\mu_2$ the mean gray value of the foreground image, and $\mu_G$ the global gray mean of the gray values greater than the intra-class secondary gray mean.
Further, in step S2, sector encoding is applied to the millimeter-wave radar point cloud data and to the lidar point cloud data processed in step S1, as follows. Sector encoding is done in a polar coordinate system: a ring-shaped detection area is set in the point cloud data, divided into several sector-ring regions, and each sector-ring region is further divided into several grids along the radial direction. The coordinates of the points in the point cloud data are converted to polar coordinates, and each point falling inside a grid interval is assigned to that grid, generating polar coordinate pillars in three-dimensional space. The inner arc length of a grid equals its radial length, where the radial length is the length of the grid's straight edge along the radial direction and the inner arc length is the length of the grid's shorter arc edge. After the polar coordinate pillars are generated in three-dimensional space, the point cloud pillars are converted into a pseudo image and the feature map is extracted.
Further, in step S3, after the feature maps of the lidar and the millimeter-wave radar are input into the feature-layer attention fusion model, each is convolved and normalized, and attention fusion is then performed:

$Q = W_q X_1, \qquad V = W_v X_1, \qquad K = W_k X_r$

where $W_q$ and $W_v$ represent weight matrices learned by convolution from the lidar feature map $X_1$, and $W_k$ represents the weight matrix learned by convolution from the millimeter-wave radar feature map $X_r$; Q and V are the lidar matrices and K is the millimeter-wave radar matrix. The lidar matrix V is matrix-multiplied with the millimeter-wave radar matrix K, and the product is matrix-multiplied with the lidar matrix Q to obtain the fusion result O. The fusion result O is multiplied by a proportionality coefficient and added to the original lidar feature map $X_1$ to obtain the final fusion result.
The invention has the following beneficial effects. By fusing the lidar and millimeter-wave radar data, the advantages of both can be combined to detect the drivable area more accurately, and the robustness of roadside drivable area perception in different environments is improved. For the fusion of the two data, the invention designs a pseudo-image fusion scheme and uses an attention mechanism to dynamically adjust the fusion weights, improving the environmental adaptability of the recognition model.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the specification.
Drawings
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in detail below in preferred embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a frame construction diagram of the method of the present invention;
FIG. 2 is a schematic diagram of a straight-through filtering process;
FIG. 3 is a schematic diagram of the overall structure of the local adaptive threshold segmentation algorithm based on the Otsu method;
FIG. 4 is a schematic diagram of sector-based point cloud and millimeter wave encoding;
FIG. 5 is a schematic diagram of the attention fusion model based on the PointPillars feature layer.
Detailed Description
The following description discloses embodiments of the present invention by way of specific examples, from which those skilled in the art will readily appreciate other advantages and effects of the invention. The invention may also be practiced or applied in other, different embodiments, and the details herein may be modified or varied without departing from the spirit and scope of the invention. It should be noted that the illustrations provided in the following embodiments merely illustrate the basic idea of the invention, and that the embodiments and their features may be combined with one another where no conflict arises.
Referring to figs. 1 to 5, a road drivable area detection method based on the fusion of a laser radar and a millimeter-wave radar is provided. The method first processes the collected data of the lidar and the millimeter-wave radar, and then fuses the two radar data with an attention fusion model based on the PointPillars feature layer to detect the roadside drivable area.
As shown in fig. 2, for the collected lidar data, the method filters out points beyond the corresponding height or radius according to the characteristics of roadside point clouds and further selects a region of interest (ROI). The effective point cloud $P_{pth}$ obtained by radius filtering and height filtering must satisfy

$P_{pth} = \{(x_i, y_i, z_i) \mid \sqrt{x_i^2 + y_i^2} \le r_{th},\ z_i \le h_{th},\ x_1 \le x_i \le x_2,\ y_1 \le y_i \le y_2\} \qquad (1)$

where $(x_i, y_i, z_i)$ is an element of the effective point cloud $P_{pth}$, $r_{th}$ is the radius threshold, $h_{th}$ is the height threshold, and $x_1$, $x_2$, $y_1$, $y_2$ are the coordinate ranges of the road area in the x and y directions, the x-axis pointing along the road. Point cloud data collected at the roadside has a blind zone at close range, and distant points can hardly provide enough features for target detection, so points within a short range and beyond a long range matter little for clustering and detection; the mid-range region is the core area for clustering and target detection, and the parameters of the clustering algorithm must be adaptively corrected. A sigmoid is chosen as the correction coefficient of the search radius reachable by the density during cluster growth:

$f(r_i) = \dfrac{k_r}{1 + e^{-b_r (r_i - r_0)}} + \varepsilon_r \qquad (2)$

where $k_r$, $b_r$, $\varepsilon_r$ and $r_0$ are model parameters whose optimal values are obtained by enumerating different settings; $f(r_i)$ corrects the growth radius of the cluster.
The sigmoid changes little at small and large ranges and rises quickly in the middle range, which suits dividing point cloud clusters belonging to the same target, and handles simply and effectively the characteristics of roadside-mounted lidar data: a close-range blind zone, no effective features at long range, and sparsity that grows with distance. The correction formula for the cluster growth radius is
$E'_{ps} = E_{ps} \times f(r_i) \qquad (3)$

where $E'_{ps}$ is the corrected radius parameter, $E_{ps}$ the initial radius parameter, and $r_i$ the seed point distance when searching for points of the same cluster. In this way the clusters of points belonging to the same object are obtained adaptively, while points just outside the search radius are not lost.
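For illustration, the following Python sketch combines the pass-through filtering of formula (1) with a DBSCAN variant whose growth radius is corrected by $E'_{ps} = E_{ps} \times f(r_i)$. It is a minimal sketch, not the patented implementation: the sigmoid follows the reconstructed form of formula (2), and all parameter values ($r_{th}$, $h_{th}$, $k_r$, $b_r$, $\varepsilon_r$, $r_0$, $E_{ps}$, min_pts) are illustrative placeholders.

```python
import numpy as np

def roi_filter(points, r_th, h_th, x_range, y_range):
    """Radius + height + rectangular pass-through filtering, formula (1)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (np.hypot(x, y) <= r_th) & (z <= h_th)   # height convention assumed
    mask &= (x >= x_range[0]) & (x <= x_range[1])
    mask &= (y >= y_range[0]) & (y <= y_range[1])
    return points[mask]

def sigmoid_gain(r, k_r=1.5, b_r=0.15, eps_r=0.5, r0=30.0):
    """f(r_i): sigmoid correction of the growth radius; placeholder parameters."""
    return k_r / (1.0 + np.exp(-b_r * (r - r0))) + eps_r

def adaptive_dbscan(points, e_ps=0.5, min_pts=5):
    """DBSCAN region growing with per-seed corrected radius E_ps * f(r_i).
    Returns one label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    seed_range = np.hypot(points[:, 0], points[:, 1])   # r_i of each point
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        d = np.linalg.norm(points - points[i], axis=1)
        neighbors = np.where(d <= e_ps * sigmoid_gain(seed_range[i]))[0]
        if len(neighbors) < min_pts:
            continue                                     # not a core point
        labels[i] = cluster
        queue = list(neighbors)
        while queue:                                     # grow the cluster
            j = queue.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            d_j = np.linalg.norm(points - points[j], axis=1)
            nb_j = np.where(d_j <= e_ps * sigmoid_gain(seed_range[j]))[0]
            if len(nb_j) >= min_pts:
                queue.extend(nb_j)
        cluster += 1
    return labels
```

The brute-force neighbor search is O(N^2) and would be replaced by a KD-tree in practice; the point is that the reachable radius grows with the seed point's range, matching the density profile of roadside point clouds.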
After the extraction and clustering of the roadside point cloud are completed, it is segmented by the improved local adaptive threshold segmentation method based on the Otsu method to construct the electronic fence. Fig. 3 shows the overall structure of this method, which segments through adaptive threshold selection on gray values. In the road surface point cloud extraction and road edge data filtering steps, the resulting road surface point cloud data are classified by scan line. On this basis, the reflection intensity data within each scan line are converted to gray values by
$grayvalue_j = \dfrac{intensity_j}{max\_intensity_i} \times 255 \qquad (4)$

where $grayvalue_j$ is the converted gray value of point j in scan line i, $intensity_j$ is the reflection intensity of that point, and $max\_intensity_i$ is the maximum reflection intensity in line i.
The global mean ave of the gray values over all scan lines is

$ave = \dfrac{\sum_{i=1}^{n} \sum_{j=1}^{clustersize_i} grayvalue_j}{\sum_{i=1}^{n} clustersize_i} \qquad (5)$

where $clustersize_i$ is the number of points in scan line i and n is the number of scan lines. Formula (5) gives the global mean of the gray-converted reflection intensities of all scan lines. All gray values larger than the global mean are found and their mean ave1 is computed, and the probability of each gray level is

$p_k = \dfrac{num_k}{num}$

where $num_k$ is the number of points at gray level k and $num$ is the number of all points in the point cloud data.
The intensity of lane line data is markedly higher than that of road surface data, and after road edge data filtering, the point carrying the maximum intensity in each scan line is essentially lane line point cloud data. Starting the threshold search from 0 would cause a large amount of useless computation and stretch the processing time endlessly. Therefore the intra-class secondary gray mean ave1 is used as the initial threshold and the threshold selection interval is narrowed to [ave1, 255]. A threshold th divides all pixels in the image into two classes: a foreground image with gray values greater than th and a background image with gray values less than th. After setting the initial threshold, the probabilities $P_1$, $P_2$ of the background and foreground images, the mean gray values $v_1$, $v_2$ of the background and foreground images, and the global gray mean $v_G$ over gray values greater than ave1 are computed.
Finally, the inter-class variance is calculated:

$\sigma^2 = P_1(v_1 - v_G)^2 + P_2(v_2 - v_G)^2$
All inter-class variances are compared to find the maximum and its corresponding threshold th; the threshold with the maximum inter-class variance is the optimal threshold. The gray values of the data in each scan line are then screened: data with gray values greater than the optimal threshold are marked as candidate lane line point cloud data, and data with gray values smaller than the optimal threshold as road surface point cloud data, yielding the electronic fence.
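A minimal Python sketch of this per-scan-line Otsu-style threshold search follows. It assumes the gray conversion of formula (4), the global statistics ave, ave1 and $v_G$ as defined above, and the narrowed search interval [ave1, 255]; the function names, the list-of-arrays input format, and the handling of gray values exactly equal to th are illustrative assumptions.

```python
import numpy as np

def segment_scan_line(gray, ave1, v_g):
    """Search th in [ave1, 255] maximizing
    sigma^2 = P1*(v1 - vG)^2 + P2*(v2 - vG)^2 for one scan line."""
    best_th, best_var = int(ave1), -1.0
    for th in range(int(ave1), 256):
        fg, bg = gray[gray > th], gray[gray <= th]   # boundary convention assumed
        if fg.size == 0 or bg.size == 0:
            continue
        p1, p2 = bg.size / gray.size, fg.size / gray.size
        var = p1 * (bg.mean() - v_g) ** 2 + p2 * (fg.mean() - v_g) ** 2
        if var > best_var:
            best_var, best_th = var, th
    return gray > best_th          # True = candidate lane-line point

def split_lane_and_road(scan_lines):
    """scan_lines: list of per-line reflection-intensity arrays."""
    grays = [np.round(s / s.max() * 255).astype(int) for s in scan_lines]  # Eq. (4)
    all_gray = np.concatenate(grays)
    ave = all_gray.mean()                       # global gray mean, Eq. (5)
    ave1 = all_gray[all_gray > ave].mean()      # intra-class secondary gray mean
    v_g = all_gray[all_gray > ave1].mean()      # global mean above ave1
    return [segment_scan_line(g, ave1, v_g) for g in grays]
```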
To address the differing sparsity of points at different distances, a sector voxel encoding module is designed: converting to polar coordinates through sector encoding expresses the position and direction information of the points, and the choice of radii resolves the differing sparsity at different distances.
Sector-based point cloud and millimeter-wave encoding is applied to the lidar and millimeter-wave radar data. First, the inner radius $R_{in}$ and outer radius $R_{out}$ are set according to the installation position of the roadside lidar, the angle θ of each sector is determined, and the ring detection range is divided into m (m = 2π/θ) sector-ring regions by the angle θ. As shown in fig. 4, each sector ring is divided into n grids along the radial direction, keeping the radial length $r_i$ of each grid equal to its inner arc length $l_i$. The first inner arc length $l_1$ is

$l_1 = \theta R_{in}$

and the i-th inner arc length is

$l_i = \theta R_{in}(1 + \theta)^{i-1}$
the point cloud space can be divided into grids of (m, n) by sector coding, wherein m represents the number of sectors and n represents the number of sectors per cell. Because sector encoding is done in a polar coordinate system, the planar polar coordinates of the points need to be calculated before assigning voxels to the point cloud:
and converting the points in the Cartesian coordinate system into a polar coordinate system, dividing the point cloud data points falling in a polar coordinate interval into polar coordinate grids after converting the point cloud data of the polar coordinates, generating polar coordinate cylinder elements in a three-dimensional space, and determining the polar coordinate cylinder where the points are located according to coordinates (rho, theta). After the point cloud columns are distributed, the point cloud columns are converted into pseudo images by using the same pointpilar method, and point cloud target detection is carried out by using the same feature extraction method and target detection method.
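The sketch below illustrates this sector encoding under the constraint $r_i = l_i$, using the cell boundaries $\rho_i = R_{in}(1+\theta)^i$ implied by the arc-length formulas above; the values of $R_{in}$, θ, and the number of radial cells are illustrative assumptions.

```python
import numpy as np

def sector_encode(points, r_in=2.0, theta=np.deg2rad(2.0), n_cells=32):
    """Assign each point to a (sector, radial cell) polar pillar.
    Radial boundaries follow rho_i = R_in * (1 + theta)**i, so each cell's
    radial length equals its inner arc length: l_i = theta*R_in*(1+theta)**(i-1)."""
    m = int(round(2 * np.pi / theta))                  # number of sectors
    bounds = r_in * (1.0 + theta) ** np.arange(n_cells + 1)  # rho_0 .. rho_n
    rho = np.hypot(points[:, 0], points[:, 1])
    ang = np.arctan2(points[:, 1], points[:, 0]) % (2 * np.pi)
    sector = np.minimum((ang / theta).astype(int), m - 1)
    cell = np.searchsorted(bounds, rho, side='right') - 1
    valid = (cell >= 0) & (cell < n_cells)             # inside [R_in, R_out)
    return sector[valid], cell[valid], points[valid]
```

With θ fixed, the cells grow with range, so distant sparse points share cells while nearby dense points are split finely, which is the stated motivation for sector encoding.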
As shown in fig. 5, the feature maps of the lidar and millimeter-wave radar are fused using the attention fusion model based on the PointPillars feature layer. The two feature maps are first brought to the same dimensions and passed through 1×1 convolution and normalization before fusion. The attention fusion of the lidar and millimeter-wave radar is expressed as

$Q = W_q X_1, \qquad V = W_v X_1, \qquad K = W_k X_r$

where $W_q$ and $W_v$ are weight matrices learned by 1×1 convolution from the lidar feature map $X_1$, and $W_k$ is the weight matrix learned by 1×1 convolution from the millimeter-wave radar feature map $X_r$, giving the lidar matrices Q, V and the millimeter-wave radar matrix K. The lidar matrix V is matrix-multiplied with the millimeter-wave radar matrix K, and the product is matrix-multiplied with the lidar matrix Q to obtain the fusion result O; this changes the numerical distribution of the lidar matrix and helps it attend to positive-sample targets for attention correction. The attention-corrected lidar matrix is summed with the originally input lidar feature map:
$y = X_1 + \lambda O \qquad (17)$
where λ is a proportionality coefficient. Borrowing the idea of the residual module, the fusion result O is multiplied by λ and added to the lidar feature map to obtain the final output y. The initial value of λ is set to 0 and the weight coefficient grows through training; physically, the attention mechanism has no influence at the start, and its influence on the output gradually increases as training proceeds.
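The following NumPy sketch mimics this fusion step, treating each feature map as an (N, C) matrix of flattened spatial positions so that the 1×1 convolutions become weight-matrix products. The (V K^T) Q multiplication order, the scaling factor, and all shapes are assumptions read off the text above rather than confirmed details of the patent.

```python
import numpy as np

def attention_fusion(x_l, x_r, w_q, w_v, w_k, lam=0.0):
    """Feature-layer attention fusion sketch.
    x_l: lidar features, shape (N, C); x_r: millimeter-wave features, (N, C);
    w_q, w_v, w_k: (C, C) weights standing in for the 1x1 convolutions."""
    q = x_l @ w_q                        # lidar query matrix Q
    v = x_l @ w_v                        # lidar value matrix V
    k = x_r @ w_k                        # millimeter-wave radar matrix K
    attn = (v @ k.T) / np.sqrt(x_l.shape[1])  # V*K^T, scaled for stability (assumption)
    o = attn @ q                         # fusion result O
    return x_l + lam * o                 # y = X_1 + lambda*O, Eq. (17); lam starts at 0

# toy usage with random features and weights
rng = np.random.default_rng(0)
N, C = 64, 16
y = attention_fusion(rng.normal(size=(N, C)), rng.normal(size=(N, C)),
                     rng.normal(size=(C, C)), rng.normal(size=(C, C)),
                     rng.normal(size=(C, C)), lam=0.1)
```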
Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the present invention, which is intended to be covered by the claims of the present invention.

Claims (5)

1. A road drivable area detection method based on the fusion of a laser radar and a millimeter-wave radar, characterized by comprising the following steps:
s1, extracting and clustering point cloud data of a laser radar, and then dividing lane lines and pavements in the point cloud data by adopting an improved local self-adaptive threshold segmentation method based on a law method, so as to construct and obtain an electronic fence;
wherein the extraction and clustering are performed as follows: first, the raw laser radar point cloud data are filtered to extract effective point cloud data, and the region of interest is selected through an adaptive DBSCAN clustering algorithm; the parameters of the clustering algorithm are corrected by a sigmoid function;
wherein the improved local adaptive threshold segmentation method based on the Otsu method specifically comprises: classifying the point cloud data obtained after extraction and clustering by scan line, converting the reflection intensity data in each scan line to gray values, calculating the global mean of the gray values over all scan lines, finding the gray values greater than the global mean and averaging them to obtain the intra-class secondary gray mean; taking the intra-class secondary gray mean as the initial threshold to define a threshold selection interval, within which a threshold th divides the image into a foreground image and a background image, and calculating the inter-class variance from the probabilities and mean gray values of the foreground and background images together with the global gray mean of the gray values greater than the intra-class secondary gray mean; and marking the threshold th corresponding to the maximum inter-class variance as the optimal threshold, marking data with gray values greater than the optimal threshold as lane line point cloud data, and marking data with gray values smaller than the optimal threshold as road surface point cloud data;
s2, respectively carrying out sector coding on millimeter wave Lei Dadian cloud data and the laser radar point cloud data processed in the step S1, dividing a space into grids through the sector coding, generating polar coordinate cylinder elements in a three-dimensional space, then converting the point cloud cylinders into pseudo images and extracting feature images;
and S3, performing weighted fusion of the feature maps of the laser radar and the millimeter-wave radar with an attention fusion model based on the PointPillars feature layer, multiplying the fusion result by a proportionality coefficient, and adding it to the laser radar feature map to obtain the detection result of the road drivable area.
2. The road drivable area detection method as set forth in claim 1, characterized in that: in step S1, the parameter corrected by the sigmoid function is the growth radius of the cluster, with the correction relation

$E'_{ps} = E_{ps} \times f(r_i)$

where $E'_{ps}$ represents the corrected radius parameter and $E_{ps}$ the initial radius parameter, and

$f(r_i) = \dfrac{k_r}{1 + e^{-b_r (r_i - r_0)}} + \varepsilon_r$

where $k_r$, $b_r$, $\varepsilon_r$ and $r_0$ are all model parameters of the algorithm, the optimal $f(r_i)$ being obtained by enumerating different parameter values; $r_i$ represents the seed point distance for searching points of the same cluster.
3. The road drivable area detection method as set forth in claim 1, characterized in that: in step S1, in the improved local adaptive threshold segmentation method based on the Otsu method, the inter-class variance is calculated as

$\sigma^2 = P_1(\mu_1 - \mu_G)^2 + P_2(\mu_2 - \mu_G)^2$

where $\sigma^2$ represents the inter-class variance, $P_1$ the probability of the background image, $P_2$ the probability of the foreground image, $\mu_1$ the mean gray value of the background image, $\mu_2$ the mean gray value of the foreground image, and $\mu_G$ the global gray mean of the gray values greater than the intra-class secondary gray mean.
4. The road drivable area detection method as set forth in claim 1, characterized in that: in step S2, the sector encoding is completed in a polar coordinate system and comprises the steps of: setting a ring detection area in the point cloud data, dividing the ring detection area into several sector-ring regions, and dividing each sector-ring region into several grids along the radial direction; converting the coordinates of the points in the point cloud data to polar coordinates and assigning each point falling inside a grid interval to that grid, thereby generating polar coordinate pillars in three-dimensional space; the radial length of each grid being equal to its inner arc length.
5. The road drivable area detection method as set forth in claim 1, characterized in that: in step S3, after the feature maps of the laser radar and the millimeter-wave radar are input into the feature-layer attention fusion model, each is convolved and normalized, and attention fusion is then performed:

$Q = W_q X_1, \qquad V = W_v X_1, \qquad K = W_k X_r$

where $W_q$ and $W_v$ represent weight matrices learned by convolution from the laser radar feature map $X_1$, and $W_k$ represents the weight matrix learned by convolution from the millimeter-wave radar feature map $X_r$; Q and V are laser radar matrices and K is the millimeter-wave radar matrix; the laser radar matrix V is matrix-multiplied with the millimeter-wave radar matrix K, and the result is matrix-multiplied with the laser radar matrix Q to obtain the fusion result O.
CN202311017989.2A 2023-08-14 2023-08-14 Road drivable area detection method based on fusion of laser radar and millimeter wave radar Pending CN116977970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311017989.2A CN116977970A (en) 2023-08-14 2023-08-14 Road drivable area detection method based on fusion of laser radar and millimeter wave radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311017989.2A CN116977970A (en) 2023-08-14 2023-08-14 Road drivable area detection method based on fusion of laser radar and millimeter wave radar

Publications (1)

Publication Number Publication Date
CN116977970A 2023-10-31

Family

ID=88481413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311017989.2A Pending CN116977970A (en) 2023-08-14 2023-08-14 Road drivable area detection method based on fusion of laser radar and millimeter wave radar

Country Status (1)

Country Link
CN (1) CN116977970A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291845A (en) * 2023-11-27 2023-12-26 成都理工大学 Point cloud ground filtering method, system, electronic equipment and storage medium
CN117291845B (en) * 2023-11-27 2024-03-19 成都理工大学 Point cloud ground filtering method, system, electronic equipment and storage medium


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination