CN117553807B - Automatic driving navigation method and system based on laser radar


Info

Publication number
CN117553807B
Authority
CN
China
Prior art keywords: features, representing, point cloud, local, output
Prior art date
Legal status
Active
Application number
CN202410046437.2A
Other languages
Chinese (zh)
Other versions
CN117553807A
Inventor
周彦 (Zhou Yan)
刘经纬 (Liu Jingwei)
Current Assignee
Xiangtan University
Original Assignee
Xiangtan University
Priority date
Filing date
Publication date
Application filed by Xiangtan University
Priority to CN202410046437.2A
Publication of CN117553807A
Application granted
Publication of CN117553807B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/16 Image acquisition using multiple overlapping images; Image stitching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Processing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application provides an automatic driving navigation method and system based on a laser radar, belonging to the technical field of navigation. Voxel features and point features are fused to obtain richer semantic features, where local feature extraction is performed based on a K-nearest-neighbor algorithm and an attention mechanism, so the point features can be processed directly, the 3D topological structure does not need to be changed, and information loss is avoided; during local feature extraction, an attention mechanism is employed to obtain enhanced point features. In addition, a three-dimensional attention mechanism is introduced in the upsampling process, and features of different layers are selected automatically through the attention mechanism, so that features with richer information are obtained, network performance is improved, and segmentation precision is improved. In this way, navigation accuracy can be improved.

Description

Automatic driving navigation method and system based on laser radar
Technical Field
The application belongs to the technical field of navigation, and particularly relates to an automatic driving navigation method and system based on a laser radar.
Background
In the field of automatic driving, the laser radar is a common positioning and navigation system. The laser radar scans spatial points on targets to form a laser point cloud; each point carries the three-dimensional coordinates of a spatial point and the laser reflection intensity, and after the point cloud data are processed, accurate three-dimensional structural information of the target is obtained. Compared with a traditional camera, the laser point cloud, as one of the representation forms of three-dimensional data, can better express complex scenes and the geometric shapes of objects, and has unique advantages in expressing spatial and topological relations between objects.
In the related art, the laser point cloud is generally processed as follows: the unordered laser point cloud is divided into a series of voxels each occupying a certain space, the voxels are then fed into a three-dimensional convolutional neural network for progressive voxel-level feature learning, and finally all points in each voxel grid are assigned the same semantic label as the voxel. However, an outdoor laser radar point cloud is unordered and of inconsistent density. The traditional three-dimensional voxelization method treats the point cloud as uniformly distributed and partitions it with uniform cubes, ignoring the density inconsistency of outdoor point clouds, so geometric information is inevitably lost during voxelization, the segmentation precision of the laser point cloud is low, and navigation accuracy is affected.
Therefore, it is necessary to provide an automatic driving navigation method and system based on laser radar to solve the above-mentioned problems in the background art.
Disclosure of Invention
The application provides an automatic driving navigation method and system based on a laser radar, which can improve the navigation accuracy.
In order to solve the technical problems, the technical scheme of the application is as follows:
an automatic driving navigation method based on a laser radar comprises the following steps:
s1: acquiring a laser point cloud;
s2: redistributing the laser point cloud by adopting cylindrical segmentation to obtain cylindrical features; carrying out feature extraction on the laser point cloud by adopting a multi-layer perceptron to obtain point features; fusing the cylindrical features and the point features to obtain voxel features;
s3: performing an asymmetric three-dimensional convolution operation on the voxel features, and sequentially performing multiple downsamplings and multiple upsamplings, wherein each upsampling process is supervised by a three-dimensional attention mechanism;
s4: three local feature extractions are performed based on the K-nearest-neighbor algorithm and the attention mechanism, wherein:
the input features of the first local feature extraction are the point features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain first fusion features;
the input features of the second local feature extraction are the first fusion features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain second fusion features;
the input features of the third local feature extraction are the second fusion features, and its output features are fused with the output features of the last upsampling by skip connection to obtain final fusion features;
s5: carrying out semantic segmentation on the final fusion features, and planning the optimal navigation route according to the semantic segmentation result.
Preferably, in step S4, the process of extracting local features based on the K-nearest-neighbor algorithm and the attention mechanism specifically includes the following steps:
S21: for any center point cloud i, a fixed number k of adjacent points is collected based on the K-nearest-neighbor algorithm, and position encoding is performed for each adjacent point, where the encoded position $r_j$ of the j-th adjacent point (j = 1, 2, ..., k) is expressed as:

$r_j = \mathrm{MLP}\left( p_i \oplus p_j \oplus d_{ij} \right)$

where $\mathrm{MLP}(\cdot)$ denotes a multi-layer perceptron; $p_i$ denotes the coordinates of the center point cloud i; $p_j$ denotes the coordinates of the j-th adjacent point; $\oplus$ denotes the splicing operation; $d_{ij}$ denotes the Euclidean distance between the center point cloud i and the j-th adjacent point;
S22: the point cloud feature $f_j$ of the j-th adjacent point and the encoded position $r_j$ are spliced to obtain the enhanced feature $\hat{f}_j$, expressed as:

$\hat{f}_j = f_j \oplus r_j$

S23: the k adjacent points are traversed, and the combination of the k enhanced features is taken as the local feature $F_i$ of the center point cloud i, expressed as:

$F_i = \{ \hat{f}_1, \hat{f}_2, \cdots, \hat{f}_k \}$

S24: a weight is calculated for each spatial position of the local feature $F_i$, and important local features are learned automatically based on the attention mechanism to obtain the finally output local feature $T_i$, expressed as:

$T_i = \mathrm{softmax}(s_i) \odot F_i$

where $s_i$ denotes the weights of the spatial positions; $\mathrm{softmax}(\cdot)$ denotes the softmax operation; $\odot$ denotes element-wise multiplication.
Preferably, in step S3, the three-dimensional attention mechanism calculates weights for the high-level features and the low-level features and weights and sums them with the input features to generate the final output feature F, expressed as:

$F = \alpha \odot \mathrm{Up}(F_h) + (1 - \alpha) \odot F_l$

$\alpha = M(X) = \sigma\big( \mathrm{BN}( \mathrm{Conv}_{3 \times 3 \times 3}(X) ) \big), \quad X = \mathrm{Up}(F_h) \oplus F_l$

where $\alpha$ denotes the attention weight; $\mathrm{Up}(\cdot)$ denotes upsampling; $\sigma(\cdot)$ denotes the sigmoid activation function; $\mathrm{Conv}_{3 \times 3 \times 3}(\cdot)$ denotes a 3×3×3 convolution; $\mathrm{BN}(\cdot)$ denotes normalization; $F_h$ denotes the high-level features; $F_l$ denotes the low-level features; $M(\cdot)$ denotes the computation procedure of the three-dimensional attention mechanism; $X$ denotes the input of the three-dimensional attention mechanism; $\oplus$ denotes the splicing operation.
Preferably, in step S3, downsampling is performed four times and upsampling is performed three times.
The application also provides an automatic driving navigation system based on laser radar, comprising:
an acquisition module, used for acquiring a laser point cloud;
a voxel feature extraction module, used for redistributing the laser point cloud by adopting cylindrical segmentation to obtain cylindrical features, carrying out feature extraction on the laser point cloud by adopting a multi-layer perceptron to obtain point features, and fusing the cylindrical features and the point features to obtain voxel features;
a three-dimensional asymmetric convolution network, used for performing an asymmetric three-dimensional convolution operation on the voxel features and sequentially performing multiple downsamplings and multiple upsamplings, wherein each upsampling process is supervised by a three-dimensional attention mechanism;
a local feature extraction module, used for performing three local feature extractions based on the K-nearest-neighbor algorithm, wherein:
the input features of the first local feature extraction are the point features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain first fusion features;
the input features of the second local feature extraction are the first fusion features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain second fusion features;
the input features of the third local feature extraction are the second fusion features, and its output features are fused with the output features of the last upsampling by skip connection to obtain final fusion features;
a route planning module, used for carrying out semantic segmentation on the final fusion features and planning the optimal navigation route according to the semantic segmentation result.
The beneficial effects of this application lie in:
(1) The invention provides a 3D attention feature fusion block, which uses an attention mechanism to determine the weights of high-level features and low-level features and weights and sums semantic features of different layers to obtain richer semantic features, thereby improving segmentation precision;
(2) Because the point cloud in a driving scene is sparse and of inconsistent density, the 3D topological structure is inevitably changed during voxel partition and geometric information is lost. To address this problem, the invention provides a local feature aggregation module that operates directly on the input raw point cloud data without losing any information. Meanwhile, a simple K-nearest-neighbor algorithm is adopted to obtain the coordinates of the K nearest adjacent points while ensuring efficiency, the features are aggregated, and point features with richer information are finally generated through a strong attention mechanism.
Drawings
FIG. 1 is a flow chart of an autonomous driving navigation method based on laser radar provided in the present application;
FIG. 2 shows a network architecture diagram of a local feature extraction module;
fig. 3 shows a network architecture diagram of a three-dimensional attention mechanism.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Referring to fig. 1-3, the present application provides an automatic driving navigation method based on laser radar, which includes the following steps:
s1: a laser point cloud is acquired.
S2: the laser point cloud is redistributed by adopting cylindrical segmentation to obtain cylindrical features; feature extraction is carried out on the laser point cloud by adopting a multi-layer perceptron to obtain point features; and the cylindrical features and the point features are fused to obtain voxel features.
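To make step S2 concrete, the following is a minimal PyTorch sketch of cylindrical re-partitioning plus point-feature extraction and fusion. The grid resolution, value ranges, feature widths and the scatter-max fusion are assumptions for illustration only; the patent states merely that cylindrical features and per-point multi-layer-perceptron features are fused into voxel features.

```python
import math

import torch
import torch.nn as nn


class PointVoxelFusion(nn.Module):
    """Sketch of step S2: per-point MLP features scattered onto a cylindrical grid."""

    def __init__(self, in_dim=4, feat_dim=64, grid=(48, 36, 16),
                 rho_max=50.0, z_range=(-4.0, 2.0)):
        # A coarse toy grid; real systems use much finer grids (e.g. 480x360x32).
        super().__init__()
        self.grid, self.rho_max, self.z_range = grid, rho_max, z_range
        # Multi-layer perceptron for per-point feature extraction.
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))

    def cylindrical_index(self, xyz):
        # Re-distribute points on a (rho, phi, z) grid instead of uniform cubes,
        # so distant sparse regions fall into proportionally larger cells.
        rho = xyz[:, :2].norm(dim=1).clamp(max=self.rho_max - 1e-4)
        phi = torch.atan2(xyz[:, 1], xyz[:, 0])                    # [-pi, pi]
        z = xyz[:, 2].clamp(self.z_range[0], self.z_range[1] - 1e-4)
        i = (rho / self.rho_max * self.grid[0]).long()
        j = ((phi + math.pi) / (2 * math.pi) * self.grid[1]).long()
        j = j.clamp(max=self.grid[1] - 1)
        k = ((z - self.z_range[0]) / (self.z_range[1] - self.z_range[0])
             * self.grid[2]).long()
        return (i * self.grid[1] + j) * self.grid[2] + k           # flat cell id

    def forward(self, pts):
        # pts: (N, 4) tensor of x, y, z, intensity.
        point_feat = self.point_mlp(pts)                           # point features
        cell = self.cylindrical_index(pts[:, :3])                  # cylindrical cells
        n_cells = self.grid[0] * self.grid[1] * self.grid[2]
        # Fusion (assumed): max-pool the point features landing in the same cell.
        voxel_feat = torch.zeros(n_cells, point_feat.size(1)).scatter_reduce(
            0, cell.unsqueeze(1).expand_as(point_feat), point_feat,
            reduce="amax", include_self=False)
        return point_feat, voxel_feat.view(*self.grid, -1)


pts = torch.randn(1024, 4)                     # toy cloud: x, y, z, intensity
point_feat, voxel_feat = PointVoxelFusion()(pts)
print(point_feat.shape, voxel_feat.shape)      # (1024, 64), (48, 36, 16, 64)
```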
S3: an asymmetric three-dimensional convolution operation is carried out on the voxel features, and multiple downsamplings and multiple upsamplings are executed sequentially, wherein each upsampling process is supervised by a three-dimensional attention mechanism.
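The patent does not spell out how the asymmetric three-dimensional convolution is built, so the block below is only one plausible reading under stated assumptions: a residual block whose 3×3×3 kernel is decomposed into 3×1×3 and 1×3×3 kernels, a common asymmetric design that strengthens the horizontal responses that dominate driving scenes.

```python
import torch
import torch.nn as nn


class AsymmetricConv3D(nn.Module):
    """One plausible asymmetric 3D convolution block (kernel split is an assumption)."""

    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv3d(channels, channels, (3, 1, 3), padding=(1, 0, 1)),
            nn.BatchNorm3d(channels), nn.ReLU(),
            nn.Conv3d(channels, channels, (1, 3, 3), padding=(0, 1, 1)),
            nn.BatchNorm3d(channels))

    def forward(self, x):
        # Residual connection (assumed) keeps the block easy to stack
        # in the downsampling/upsampling path of step S3.
        return torch.relu(x + self.branch(x))


x = torch.randn(1, 32, 8, 32, 32)              # (B, C, D, H, W) voxel features
print(AsymmetricConv3D(32)(x).shape)           # torch.Size([1, 32, 8, 32, 32])
```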
In step S3, the three-dimensional attention mechanism calculates weights for the high-level features and the low-level features and weights and sums them with the input features to generate the final output feature F, expressed as:

$F = \alpha \odot \mathrm{Up}(F_h) + (1 - \alpha) \odot F_l$

$\alpha = M(X) = \sigma\big( \mathrm{BN}( \mathrm{Conv}_{3 \times 3 \times 3}(X) ) \big), \quad X = \mathrm{Up}(F_h) \oplus F_l$

where $\alpha$ denotes the attention weight; $\mathrm{Up}(\cdot)$ denotes upsampling; $\sigma(\cdot)$ denotes the sigmoid activation function; $\mathrm{Conv}_{3 \times 3 \times 3}(\cdot)$ denotes a 3×3×3 convolution; $\mathrm{BN}(\cdot)$ denotes normalization; $F_h$ denotes the high-level features; $F_l$ denotes the low-level features; $M(\cdot)$ denotes the computation procedure of the three-dimensional attention mechanism; $X$ denotes the input of the three-dimensional attention mechanism; $\oplus$ denotes the splicing operation.
For the semantic segmentation of an automatic driving scene, some parts (such as roads and buildings) need high-level semantic features, while others (such as pedestrians and traffic signs) need more detailed features; a high-level feature map contains more semantic information, and a low-level feature map shows more detail. The application therefore selects features of different levels automatically through a three-dimensional attention mechanism: instead of using simple feature-map splicing, the high-level semantic features are upsampled to keep the same channel number as the low-level semantic features, so that semantic information of different levels can be fused effectively and features with richer information are obtained, improving model performance and segmentation precision. The network architecture of the three-dimensional attention feature fusion block (3DAFFB) is shown in fig. 3.
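The following is a minimal sketch of the three-dimensional attention feature fusion block as reconstructed from the formulas above; the channel sizes, the trilinear interpolation mode and the exact placement of normalization are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion3D(nn.Module):
    """Sketch of the 3DAFFB: F = alpha * Up(F_h) + (1 - alpha) * F_l."""

    def __init__(self, channels):
        super().__init__()
        # M(X): 3x3x3 convolution followed by normalization; sigmoid applied below.
        self.attn = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels))

    def forward(self, high, low):
        # high: (B, C, D/2, H/2, W/2) high-level features; low: (B, C, D, H, W).
        up = F.interpolate(high, size=low.shape[2:], mode="trilinear",
                           align_corners=False)      # Up(F_h)
        x = torch.cat([up, low], dim=1)              # X = Up(F_h) spliced with F_l
        alpha = torch.sigmoid(self.attn(x))          # alpha = sigma(BN(Conv(X)))
        return alpha * up + (1 - alpha) * low        # weighted sum -> F


high = torch.randn(1, 32, 4, 16, 16)
low = torch.randn(1, 32, 8, 32, 32)
print(AttentionFusion3D(32)(high, low).shape)        # torch.Size([1, 32, 8, 32, 32])
```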
The number of times of downsampling is four, and the number of times of upsampling is three.
S4: three local feature extractions are performed based on the K-nearest-neighbor algorithm and the attention mechanism, wherein:
the input features of the first local feature extraction are the point features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain first fusion features;
the input features of the second local feature extraction are the first fusion features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain second fusion features;
the input features of the third local feature extraction are the second fusion features, and its output features are fused with the output features of the last upsampling by skip connection to obtain final fusion features.
because of the density inconsistency characteristic of the outdoor point cloud, geometric information is inevitably lost in the voxelization process, local feature extraction is directly carried out on the original unordered 3D point cloud, no information loss exists, a complex local structure is automatically reserved by gradually increasing the receiving field of each point, and enhanced point features are obtained. Finally, the point features and the voxel features are fused, so that features with richer semantic information can be obtained, and the performance of the network is improved.
The process of extracting local features based on the K-nearest-neighbor algorithm and the attention mechanism specifically includes the following steps:
S21: for any center point cloud i, a fixed number k of adjacent points is collected based on the K-nearest-neighbor algorithm, and position encoding is performed for each adjacent point, where the encoded position $r_j$ of the j-th adjacent point (j = 1, 2, ..., k) is expressed as:

$r_j = \mathrm{MLP}\left( p_i \oplus p_j \oplus d_{ij} \right)$

where $\mathrm{MLP}(\cdot)$ denotes a multi-layer perceptron; $p_i$ denotes the coordinates of the center point cloud i; $p_j$ denotes the coordinates of the j-th adjacent point; $\oplus$ denotes the splicing operation; $d_{ij}$ denotes the Euclidean distance between the center point cloud i and the j-th adjacent point;
S22: the point cloud feature $f_j$ of the j-th adjacent point and the encoded position $r_j$ are spliced to obtain the enhanced feature $\hat{f}_j$, expressed as:

$\hat{f}_j = f_j \oplus r_j$

S23: the k adjacent points are traversed, and the combination of the k enhanced features is taken as the local feature $F_i$ of the center point cloud i, expressed as:

$F_i = \{ \hat{f}_1, \hat{f}_2, \cdots, \hat{f}_k \}$

S24: a weight is calculated for each spatial position of the local feature $F_i$, and important local features are learned automatically based on the attention mechanism to obtain the finally output local feature $T_i$, expressed as:

$T_i = \mathrm{softmax}(s_i) \odot F_i$

where $s_i$ denotes the weights of the spatial positions; $\mathrm{softmax}(\cdot)$ denotes the softmax operation; $\odot$ denotes element-wise multiplication.
Given the input point cloud data, the features of the K nearest points of each center point are aggregated, so that the corresponding point features are always aware of the relative spatial positions of those points. The local spatial encoding block can thus observe local geometric patterns explicitly, which ultimately benefits the whole network: complex local structures are learned effectively, and features rich in information are finally generated.
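A minimal sketch of steps S21-S24, assuming a PyTorch setting; the brute-force K-nearest-neighbor search, the MLP widths, and the final sum over the k neighbors (attentive pooling) are assumptions not fixed by the text.

```python
import torch
import torch.nn as nn


class LocalFeatureAggregation(nn.Module):
    """Sketch of S21-S24: KNN position encoding plus attention over neighbors."""

    def __init__(self, feat_dim=64, k=16):
        super().__init__()
        self.k = k
        # MLP of S21: encodes (p_i, p_j, d_ij) -> r_j.
        self.pos_mlp = nn.Sequential(nn.Linear(7, feat_dim), nn.ReLU())
        # Produces the per-position scores s_i of S24.
        self.score = nn.Linear(2 * feat_dim, 2 * feat_dim, bias=False)

    def forward(self, xyz, feats):
        # xyz: (N, 3) coordinates; feats: (N, C) point features.
        dist = torch.cdist(xyz, xyz)                       # pairwise distances
        knn = dist.topk(self.k, largest=False)             # S21: k nearest neighbors
        p_i = xyz.unsqueeze(1).expand(-1, self.k, -1)      # center coordinates
        p_j = xyz[knn.indices]                             # neighbor coordinates
        d_ij = knn.values.unsqueeze(-1)                    # Euclidean distances
        r = self.pos_mlp(torch.cat([p_i, p_j, d_ij], -1))  # S21: position encoding
        f_hat = torch.cat([feats[knn.indices], r], -1)     # S22: enhanced features
        s = torch.softmax(self.score(f_hat), dim=1)        # S24: spatial weights
        # Weighted aggregation over the k neighbors (the sum is an assumption).
        return (s * f_hat).sum(dim=1)                      # (N, 2 * feat_dim)


xyz, feats = torch.randn(256, 3), torch.randn(256, 64)
print(LocalFeatureAggregation()(xyz, feats).shape)         # torch.Size([256, 128])
```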
S5: semantic segmentation is carried out on the final fusion features, and the optimal navigation route is planned according to the semantic segmentation result.
The application also provides an automatic driving navigation system based on laser radar, comprising:
an acquisition module, used for acquiring a laser point cloud;
a voxel feature extraction module, used for redistributing the laser point cloud by adopting cylindrical segmentation to obtain cylindrical features, carrying out feature extraction on the laser point cloud by adopting a multi-layer perceptron to obtain point features, and fusing the cylindrical features and the point features to obtain voxel features;
a three-dimensional asymmetric convolution network, used for performing an asymmetric three-dimensional convolution operation on the voxel features and sequentially performing multiple downsamplings and multiple upsamplings, wherein each upsampling process is supervised by a three-dimensional attention mechanism;
a local feature extraction module (LFAB), used for performing three local feature extractions based on the K-nearest-neighbor algorithm and the attention mechanism, wherein:
the input features of the first local feature extraction are the point features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain first fusion features;
the input features of the second local feature extraction are the first fusion features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain second fusion features;
the input features of the third local feature extraction are the second fusion features, and its output features are fused with the output features of the last upsampling by skip connection to obtain final fusion features;
a route planning module, used for carrying out semantic segmentation on the final fusion features and planning the optimal navigation route according to the semantic segmentation result.
Example 1
In this embodiment, the datasets adopted for training the navigation network model are the SemanticKITTI and nuScenes benchmark datasets. Each dataset is divided into a training set, a validation set and a test set: the training set is used to run the training process, learning and updating the parameters of the navigation network model so that it fits the data better; the validation set is used to tune the hyperparameters of the navigation network model and evaluate its performance; the test set is used to test the performance of the navigation network model.
A comparison test is constructed to evaluate the performance of the navigation network model provided in the application and of other models in the field on the nuScenes validation set and the SemanticKITTI validation set. The evaluation metric is the mean intersection-over-union (mIoU), computed as:

$\mathrm{mIoU} = \frac{1}{k} \sum_{c=1}^{k} \frac{TP_c}{TP_c + FP_c + FN_c}$

where TP denotes true positives, i.e., samples the model predicts as positive that are actually positive; FN denotes false negatives, i.e., samples the model predicts as negative that are actually positive; FP denotes false positives, i.e., samples the model predicts as positive that are actually negative; and k denotes the number of categories.
Referring to table 1, table 1 shows performance comparison data of the navigation network model provided in the application and other models in the field on the SemanticKITTI validation set.
Table 1. Performance comparison of multiple models on the SemanticKITTI validation set
Here, the parenthetical after a model name indicates the type of input data to the model network: L indicates that the input is laser radar data only; L+C indicates fused laser radar and camera data.
Table 1 shows that, in terms of input data type, the method provided in the application achieves a performance gain in mIoU over projection-based 2D methods such as RandLA-Net, RangeNet++, SqueezeSegV2, SqueezeSegV3 and SalsaNext, owing to its modeling of three-dimensional geometric information, even though it takes only single-modality radar data as input. The performance of the application also exceeds that of 3D convolution methods such as MinkowskiNet, SPVNAS and Cylinder3D because of the point features incorporated in the method. Finally, the proposed method outperforms the methods based on multi-view fusion.
Referring to table 2, table 2 shows performance comparison data of the navigation network model provided in the application and other models in the field on the nuScenes validation set.
Table 2. Performance comparison of multiple models on the nuScenes validation set
As can be seen from table 2, compared with other models, the method proposed in the application achieves excellent performance on the nuScenes validation set, obtaining a performance gain of about 4%-15%. Furthermore, the method achieves better results than the most advanced multi-view fusion method (2DPASS), improving mIoU by 0.6% over it. For objects that need more detailed semantic features, such as pedestrians and traffic cones, the proposed method also performs well, which demonstrates its effectiveness and its ability to handle the difficulties of automatic driving scenes.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (4)

1. An automatic driving navigation method based on a laser radar is characterized by comprising the following steps:
s1: acquiring a laser point cloud;
s2: redistributing the laser point cloud by adopting cylindrical segmentation to obtain cylindrical features; carrying out feature extraction on the laser point cloud by adopting a multi-layer perceptron to obtain point features; fusing the cylindrical features and the point features to obtain voxel features;
s3: performing an asymmetric three-dimensional convolution operation on the voxel features, and sequentially performing multiple downsamplings and multiple upsamplings, wherein each upsampling process is supervised by a three-dimensional attention mechanism;
s4: three local feature extractions are performed based on the K-nearest-neighbor algorithm and the attention mechanism, wherein:
the input features of the first local feature extraction are the point features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain first fusion features;
the input features of the second local feature extraction are the first fusion features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain second fusion features;
the input features of the third local feature extraction are the second fusion features, and its output features are fused with the output features of the last upsampling by skip connection to obtain final fusion features;
s5: carrying out semantic segmentation on the final fusion features, and planning an optimal navigation route according to the semantic segmentation result;
in step S4, the process of extracting local features based on the K-nearest-neighbor algorithm and the attention mechanism specifically includes the following steps:
s21: for any center point cloud i, a fixed number k of adjacent points is collected based on the K-nearest-neighbor algorithm, and position encoding is performed for each adjacent point, where the encoded position $r_j$ of the j-th adjacent point (j = 1, 2, ..., k) is expressed as:

$r_j = \mathrm{MLP}\left( p_i \oplus p_j \oplus d_{ij} \right)$

where $\mathrm{MLP}(\cdot)$ denotes a multi-layer perceptron; $p_i$ denotes the coordinates of the center point cloud i; $p_j$ denotes the coordinates of the j-th adjacent point; $\oplus$ denotes the splicing operation; $d_{ij}$ denotes the Euclidean distance between the center point cloud i and the j-th adjacent point;
s22: the point cloud feature $f_j$ of the j-th adjacent point and the encoded position $r_j$ are spliced to obtain the enhanced feature $\hat{f}_j$, expressed as:

$\hat{f}_j = f_j \oplus r_j$

s23: the k adjacent points are traversed, and the combination of the k enhanced features is taken as the local feature $F_i$ of the center point cloud i, expressed as:

$F_i = \{ \hat{f}_1, \hat{f}_2, \cdots, \hat{f}_k \}$

s24: a weight is calculated for each spatial position of the local feature $F_i$, and important local features are learned automatically based on the attention mechanism to obtain the finally output local feature $T_i$, expressed as:

$T_i = \mathrm{softmax}(s_i) \odot F_i$

where $s_i$ denotes the weights of the spatial positions; $\mathrm{softmax}(\cdot)$ denotes the softmax operation; $\odot$ denotes element-wise multiplication.
2. The automatic driving navigation method based on laser radar according to claim 1, wherein in step S3, the three-dimensional attention mechanism calculates weights for the high-level features and the low-level features and weights and sums them with the input features to generate the final output feature F, expressed as:

$F = \alpha \odot \mathrm{Up}(F_h) + (1 - \alpha) \odot F_l$

$\alpha = M(X) = \sigma\big( \mathrm{BN}( \mathrm{Conv}_{3 \times 3 \times 3}(X) ) \big), \quad X = \mathrm{Up}(F_h) \oplus F_l$

where $\alpha$ denotes the attention weight; $\mathrm{Up}(\cdot)$ denotes upsampling; $\sigma(\cdot)$ denotes the sigmoid activation function; $\mathrm{Conv}_{3 \times 3 \times 3}(\cdot)$ denotes a 3×3×3 convolution; $\mathrm{BN}(\cdot)$ denotes normalization; $F_h$ denotes the high-level features; $F_l$ denotes the low-level features; $M(\cdot)$ denotes the computation procedure of the three-dimensional attention mechanism; $X$ denotes the input of the three-dimensional attention mechanism.
3. The automatic driving navigation method based on laser radar according to claim 1, wherein in step S3, downsampling is performed four times and upsampling is performed three times.
4. An automatic driving navigation system based on laser radar, comprising:
an acquisition module, used for acquiring a laser point cloud;
a voxel feature extraction module, used for redistributing the laser point cloud by adopting cylindrical segmentation to obtain cylindrical features, carrying out feature extraction on the laser point cloud by adopting a multi-layer perceptron to obtain point features, and fusing the cylindrical features and the point features to obtain voxel features;
a three-dimensional asymmetric convolution network, used for performing an asymmetric three-dimensional convolution operation on the voxel features and sequentially performing multiple downsamplings and multiple upsamplings, wherein each upsampling process is supervised by a three-dimensional attention mechanism;
a local feature extraction module, used for performing three local feature extractions based on the K-nearest-neighbor algorithm, wherein:
the input features of the first local feature extraction are the point features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain first fusion features;
the input features of the second local feature extraction are the first fusion features, and its output features are fused with the output features of the preceding upsampling by skip connection to obtain second fusion features;
the input features of the third local feature extraction are the second fusion features, and its output features are fused with the output features of the last upsampling by skip connection to obtain final fusion features;
the process for extracting the local features based on the K nearest neighbor algorithm and the attention mechanism specifically comprises the following steps:
s21: for any center point cloud i, a fixed number is collected for the cloud based on a K nearest neighbor algorithmkAnd performing position coding for each adjacent point, wherein the coding position of the j-th adjacent point is as follows,j=1,2···k,/>Expressed as:
in the method, in the process of the invention,representing a multi-layer perceptron; />Representing coordinates of the center point cloud i; />Representing coordinates of the j-th adjacent point; />Representing a splicing operation; />Euclidean distance representing center point cloud i and the jth adjacent point;
s22: point cloud characteristics of j-th adjacent pointAnd coding position->Splicing to obtain enhanced features->Expressed as:
s23: traversingkAdjacent points willkThe feature combination after the enhancement of the adjacent points is taken as the local feature of the central point cloud i, and is expressed as follows:
s24: is a local featureIs calculated for each spatial position of (a)Weight, based on the attention mechanism, automatically learning important local features to obtain final output local featuresT i Expressed as:
;
in the method, in the process of the invention,weights representing spatial locations; />Representation->Operating; />Representing element-by-element multiplication;
and a route planning module, used for carrying out semantic segmentation on the final fusion features and planning the optimal navigation route according to the semantic segmentation result.
CN202410046437.2A 2024-01-12 2024-01-12 Automatic driving navigation method and system based on laser radar Active CN117553807B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410046437.2A CN117553807B (en) 2024-01-12 2024-01-12 Automatic driving navigation method and system based on laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410046437.2A CN117553807B (en) 2024-01-12 2024-01-12 Automatic driving navigation method and system based on laser radar

Publications (2)

Publication Number Publication Date
CN117553807A CN117553807A (en) 2024-02-13
CN117553807B (en) 2024-03-22

Family

ID=89813345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410046437.2A Active CN117553807B (en) 2024-01-12 2024-01-12 Automatic driving navigation method and system based on laser radar

Country Status (1)

Country Link
CN (1) CN117553807B (en)

Citations (5)

* Cited by examiner, † Cited by third party

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
DE102020202305A1 (en) * 2020-02-24 2021-08-26 Robert Bosch Gesellschaft mit beschränkter Haftung Method for recognizing the surroundings of a vehicle and method for training a fusion algorithm for a vehicle system
WO2023230996A1 (en) * 2022-06-02 2023-12-07 Oppo广东移动通信有限公司 Encoding and decoding method, encoder, decoder, and readable storage medium
CN116030255A (en) * 2023-01-17 2023-04-28 云南大学 System and method for three-dimensional point cloud semantic segmentation
CN116824585A (en) * 2023-07-04 2023-09-29 重庆大学 Aviation laser point cloud semantic segmentation method and device based on multistage context feature fusion network
CN116681958A (en) * 2023-08-04 2023-09-01 首都医科大学附属北京妇产医院 Fetal lung ultrasonic image maturity prediction method based on machine learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王鲁光. Research on vehicle recognition method based on point cloud data. 2021, pp. 22-27. *
钦耀. Research on point cloud semantic segmentation algorithm based on deep learning. 2023, pp. 32-37. *

Also Published As

Publication number Publication date
CN117553807A (en) 2024-02-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant