CN113723365B - Target feature extraction and classification method based on millimeter-wave radar point cloud data - Google Patents

Target feature extraction and classification method based on millimeter-wave radar point cloud data

Info

Publication number
CN113723365B
CN113723365B
Authority
CN
China
Prior art keywords
point
target
sample
training
samples
Prior art date
Legal status
Active
Application number
CN202111153281.0A
Other languages
Chinese (zh)
Other versions
CN113723365A (en)
Inventor
杜兰
李增辉
于增雨
廖荀
王纯鑫
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN202111153281.0A
Publication of CN113723365A
Application granted
Publication of CN113723365B
Legal status: Active


Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing → G06F2218/08 Feature extraction
    • G06F18/00 Pattern recognition → G06F18/20 Analysing → G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation → G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques → G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches → G06F18/2415 Based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/243 Classification techniques relating to the number of classes → G06F18/24323 Tree-organised classifiers
    • G06F18/25 Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Signal Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a target feature extraction and classification method based on millimeter-wave radar point cloud data, which addresses the technical problems of high storage and computing resource consumption and low recognition rate. The implementation comprises the following steps: collecting target data and preprocessing it to obtain a target-level point cloud data set; separating single-point and multi-point samples to generate single-point and multi-point sample sets for training and testing; extracting single-point and multi-point training feature sets and training the corresponding classifiers; obtaining per-frame probability prediction vectors for test samples; and fusing the probability prediction vectors of multiple frames to complete classification. The invention processes single-point and multi-point targets separately, extracts multi-domain features for multi-point targets, and performs decision fusion over multi-frame samples. It improves both the single-point target recognition rate and the overall recognition rate, consumes few computing and storage resources, and offers good real-time performance. The method can be applied to road-surface target classification tasks in automatic driving.

Description

Target feature extraction and classification method based on millimeter-wave radar point cloud data
Technical Field
The invention belongs to the technical field of radar signal processing, and further relates to target feature extraction and classification, in particular to a target feature extraction and classification method based on millimeter-wave radar point cloud data. The method can classify moving road targets in real time in complex environments.
Background
When the radar resolution is high enough and the target is large enough, a rich point cloud can be generated for the target. The shape, velocity distribution, and RCS distribution of a radar target point cloud reflect the target's actual geometry, motion state, motion attitude, surface material, and other attributes, although the radar's measurements are also influenced by the relative position of radar and target and by the other measured values. Targets can therefore be classified by extracting features related to the point cloud's shape, velocity distribution, RCS distribution, relative position, and combinations of different measurements.
In practical application scenarios, limited by radar performance and scene complexity, a large share of target point clouds are extremely sparse and the measured values have low accuracy, which directly complicates subsequent feature extraction and classification. Meanwhile, single-point targets account for a non-negligible proportion of samples, so classifying them is unavoidable. A point cloud consisting of a single point can represent neither shape-related information nor statistical information such as velocity or RCS distributions; the classification capability for single-point targets therefore needs to be improved while preserving, as far as possible, the classification capability for multi-point targets.
Xidian University proposed a target feature extraction method based on millimeter-wave radar echoes in its patent application (application number CN202011029844.X, publication number CN112505648A). The method first generates an original range-Doppler (RD) map from measured millimeter-wave radar data of the target, removes ground clutter from the RD map with the CLEAN algorithm, and performs target detection on the de-cluttered RD map with an improved cell-averaging CFAR algorithm. It then tracks targets across the resulting consecutive multi-frame RD maps with Kalman filtering, selects candidate regions on each RD map according to the obtained tracks, extracts single-frame and multi-frame features from the candidate regions, and finally feeds the extracted features into a classifier to obtain the classification result. Taking common road-surface target classification as an example, the multi-frame RD data used by this method is close to the raw radar echo, loses little information, and fully reflects the target's range and velocity distributions, so a high recognition rate can be obtained. However, the method must extract features from multi-frame data, and the multi-frame RD-map data occupies the storage and computing resources of vehicle-mounted equipment, making it difficult to meet the real-time requirements of an automatic driving system.
Zhao Z. et al., in "Point Cloud Features-Based Kernel SVM for Human-Vehicle Classification in Millimeter Wave Radar", proposed extracting target-related features from a target point cloud slice according to the physical characteristics of the object, including the target's extent in different directions and the means and variances of velocity and radar cross-section, and completing target classification with a support vector machine. Taking common road-surface targets as an example, the method achieves good classification performance on pedestrians and cars while extracting only a small number of features, but it considers only short-range pedestrian and car classification in simple scenes; the few extracted features cannot reflect the full physical characteristics of common road-surface targets in real, complex scenes, and they are not applicable when a target has only a single reflection point.
In the prior art, RD data occupies substantial storage and computing resources, while traditional methods that use point cloud data extract few types and numbers of features and describe the target's characteristics insufficiently, resulting in a low target recognition rate.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a target feature extraction and classification method based on millimeter-wave radar point cloud data that attains a higher recognition rate by processing single-point and multi-point targets separately.
The invention relates to a target feature extraction and classification method based on millimeter-wave radar point cloud data, characterized in that single-point and multi-point targets are processed separately, multi-domain features are extracted for multi-point targets, and decision fusion is performed over multi-frame samples during testing. The method comprises the following steps:
1) Preprocessing the data set: a millimeter-wave radar data acquisition system collects a scene-level point cloud data set carrying category numbers, tracking numbers, and scene numbers. The point cloud data of targets of interest is retained according to the category numbers, and the numbers of the remaining point cloud data are readjusted to obtain the residual scene-level point cloud data set. A sliding window and data fusion are applied to the residual scene-level point cloud data, each residual scene-level point cloud data set generating multiple target-level point cloud samples; target-level point cloud samples sharing a tracking number are given frame numbers that increase sequentially with time. All target-level point cloud samples form a target-level millimeter-wave radar point cloud data set with frame numbers, which is randomly divided into a training set and a test set according to the scene numbers;
2) Single-point/multi-point sample separation: the target-level millimeter-wave radar point cloud data sets for training and testing are each divided into single-point and multi-point training/test sample sets. The dividing criterion is the number of target points in the current sample: a sample with exactly one target point is a single-point sample, and a sample with two or more target points is a multi-point sample;
3) Training-sample feature extraction: for each single-point training sample in the single-point target training sample set, different types of features are extracted from the radar's basic measurements and their combinations; multi-domain feature extraction is carried out on each multi-point training sample in the multi-point target training sample set, with five corresponding different types of features hand-crafted from five different domains, all of the features reflecting the target's shape, velocity, attitude, material, and position characteristics. Six-dimensional features are extracted from each single-point sample, together generating the single-point training feature set, and 125-dimensional features are extracted from each multi-point sample, together generating the multi-point training feature set;
3.1) Single-point training-sample feature extraction: after the 6-dimensional features are extracted from all single-point training samples, the features of all single-point training samples form a single-point target training feature set F_o = [fea_{i,j}], where fea_{i,j} denotes the j-th feature of the i-th single-point sample and F_o is the single-point feature matrix composed of all features of all single-point training samples; the matrix dimension is N_o × M_o, N_o denoting the number of single-point samples and M_o the feature dimension extracted for a single-point sample;
3.2) Multi-point training-sample feature extraction: multi-domain features are extracted from all multi-point training samples; after the 125-dimensional features are extracted, the features of all multi-point training samples form a multi-point target training feature set F_l = [fea_{i,j}], where fea_{i,j} denotes the j-th feature of the i-th multi-point sample and F_l is the multi-point feature matrix composed of all features of all multi-point samples; the matrix dimension is N_l × M_l, N_l denoting the number of multi-point samples and M_l the feature dimension extracted for a multi-point sample;
4) Classifier training: corresponding classifiers are trained on the single-point and multi-point target training feature sets respectively, yielding a single-point classifier for classifying single-point targets and a multi-point classifier for classifying multi-point targets;
4.1) Single-point classifier training: the single-point training feature set F_o is input into a random forest classifier for single-point target classification, whose output is the combination of each decision tree's road-surface target decisions; the Gini coefficient is selected as the measure of the decision trees' node-splitting attributes; a decision tree terminates when one of the following is satisfied: the tree reaches its maximum depth, the purity of a leaf node reaches the threshold, or the number of samples in a leaf node reaches the set value; once the termination condition is met, the single-point classifier for single-point target classification is obtained;
4.2) Multi-point classifier training: the multi-point training feature set F_l is input into a random forest classifier for multi-point target classification, whose output is the combination of each decision tree's road-surface target decisions; the Gini coefficient is selected as the measure of the decision trees' node-splitting attributes; a decision tree terminates when one of the following is satisfied: the tree reaches its maximum depth, the purity of a leaf node reaches the threshold, or the number of samples in a leaf node reaches the set value; once the termination condition is met, the multi-point classifier for multi-point target classification is obtained;
5) Obtaining probability prediction vectors from single-frame samples: when the target-level millimeter-wave radar point cloud test set is used for testing, each frame sample in the single-point test set is classified by the single-point classifier and each frame sample in the multi-point test set is classified by the multi-point classifier. All single-point and multi-point test samples are traversed, the corresponding single-point or multi-point target features are extracted, and classification tests are performed with the corresponding classifier, yielding a probability prediction vector for each sample;
6) Fusing the probability prediction vectors of multi-frame samples for classification: in both the single-point and multi-point tests, the probability prediction vectors of the two frames preceding the current frame are looked up by the test sample's frame number and accumulated with the probability prediction vector obtained for the current frame; the final probability prediction vector of the current frame is obtained after this decision fusion, and the class with the largest probability in it is the sample's classification result. All single-point and multi-point test samples are traversed, fusing the probability prediction vectors of the current frame and the two adjacent preceding frames to obtain each sample's final classification result, which completes the test process and the target feature extraction and classification of millimeter-wave radar point cloud data.
The invention solves the technical problems of the existing methods: high consumption of storage and computing resources and a low recognition rate.
Compared with the prior art, the invention has the following advantages:
Improved single-point target recognition rate: the invention processes single-point and multi-point targets separately, extracting features of different dimensions and types for each and employing different classifiers. Compared with prior art that generally ignores the single-point case, this divide-and-conquer strategy preserves the multi-point target recognition rate while alleviating the problem of a low single-point target recognition rate.
Improved overall recognition rate: for multi-point targets the invention hand-crafts five corresponding types of features from five different domains (the shape, velocity, energy, position, and combination domains). Compared with traditional methods that extract few features of few types, these multi-domain features fully reflect the target's shape, velocity, attitude, material, and position characteristics, which benefits recognition performance. The invention also performs decision fusion on the probability prediction vectors of multi-frame samples; compared with traditional classification using only single-frame samples, this fusion strategy further improves the recognition rate.
Low consumption of computing and storage resources and good real-time performance: compared with prior art that classifies RD data, the invention operates on sparse millimeter-wave radar point cloud data, which occupies fewer computing and storage resources and offers better real-time performance.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and to specific embodiments.
Example 1
When millimeter-wave radar is used to classify road-surface targets in the prior art, classification with image-like data such as RD maps occupies substantial computing and storage resources and can hardly meet the real-time requirements of an automatic driving system, while existing point-cloud-based methods extract insufficient features and ignore the characteristics of single-point data, resulting in a low single-point target recognition rate. The invention therefore provides a target feature extraction and classification method based on millimeter-wave radar point cloud data.
The invention processes single-point and multi-point targets separately, extracts multi-domain features for multi-point targets, and performs decision fusion over multi-frame samples during testing. The method comprises the following steps:
1) Preprocessing the data set: to enable subsequent feature extraction and classification, the scene-level millimeter-wave radar point cloud data must be preprocessed into a corresponding target-level training set and target-level test set. The millimeter-wave radar data acquisition system collects a scene-level point cloud data set carrying category numbers, tracking numbers, and scene numbers; the point cloud data of targets of interest is retained according to the category numbers, the point cloud data of other targets is deleted, and the numbers of the remaining point cloud data of interest are readjusted, yielding the residual scene-level point cloud data set. A sliding window and data fusion are then applied to the residual scene-level point cloud data, each residual scene-level point cloud data set generating multiple target-level point cloud samples; samples sharing a tracking number receive frame numbers that increase sequentially with time. All target-level point cloud samples form a target-level millimeter-wave radar point cloud data set with frame numbers, which is randomly divided by scene number into a target-level training set and a target-level test set.
This step specifically comprises acquiring the scene-level millimeter-wave radar point cloud data, retaining the point cloud data of targets of interest, and generating the target-level point cloud data set:
1.1) Acquiring scene-level millimeter-wave radar point cloud data: a data acquisition system carrying a millimeter-wave radar collects a large scene-level point cloud data set with point-by-point labels; every point in the collected data carries a category number, a tracking number, and a scene number.
1.2) Retaining the point cloud data of targets of interest: according to the category numbers, only the point cloud data belonging to targets of interest is kept and all other point cloud data is deleted; the numbers of the remaining point cloud data are readjusted so that targets of the same category share a category number and the same continuously observed target keeps a single tracking number.
1.3) Generating the target-level point cloud data set: a sliding window with a fixed time interval is applied to the retained point cloud data, and all point clouds inside the current time window are motion-compensated and fused. The target points that belong to the same time window and share a tracking number form one target-level point cloud sample; each generated sample keeps the category number, tracking number, and scene number of the residual scene-level data set. Samples with the same tracking number belong to the same target and are given frame numbers that increase sequentially with time, distinguishing observations of the same target at different moments. Finally, all regenerated samples are divided by scene number into the target-level millimeter-wave radar point cloud data sets for training and testing.
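For illustration, a minimal Python sketch of this sliding-window grouping is given below. The structured-array input and the field names (t, track_id) are assumptions for the sketch, not the patent's reference implementation; motion compensation within the window (detailed in Example 2) is omitted here.

```python
import numpy as np
from collections import defaultdict

def make_target_samples(points, t_start, t_end, win, step):
    """Slide a time window over scene-level points and group the detections
    that share a tracking number within each window into one target-level
    sample; per-track frame numbers increase sequentially with time."""
    samples, frame_no = [], defaultdict(int)
    t = t_start
    while t + win <= t_end:
        in_win = points[(points['t'] >= t) & (points['t'] < t + win)]
        for tid in np.unique(in_win['track_id']):
            cloud = in_win[in_win['track_id'] == tid]
            samples.append({'track_id': int(tid),
                            'frame': frame_no[int(tid)],
                            'points': cloud})
            frame_no[int(tid)] += 1
        t += step
    return samples
```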
2) Single-point/multi-point sample separation: the features extracted and the classifiers trained when single-point and multi-point targets are processed uniformly are often unsuitable for single-point targets, markedly reducing their recognition performance, so the invention processes the two cases separately to improve single-point recognition while maintaining multi-point recognition. The target-level millimeter-wave radar point cloud data sets for training and testing must therefore each be divided into single-point and multi-point training and test sample sets. The basis of assignment is the number of target points in each target-level point cloud sample: a sample with exactly one target point is defined as a single-point sample, and a sample with two or more target points is defined as a multi-point sample, forming the single-point and multi-point training/test sample sets respectively. In the invention, all target-level single-point samples are divided into the corresponding single-point training and test sample sets, collectively called the target-level point cloud single-point sample set; likewise, all target-level multi-point samples are divided into the corresponding multi-point training and test sample sets, collectively called the target-level point cloud multi-point sample set.
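A minimal sketch of the separation criterion, assuming the target-level samples carry their point clouds as in the sketch above:

```python
def split_single_multi(samples):
    """Single-point samples have exactly one detection; samples with two or
    more detections go to the multi-point set."""
    single = [s for s in samples if len(s['points']) == 1]
    multi = [s for s in samples if len(s['points']) >= 2]
    return single, multi
```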
3) Training-sample feature extraction: after the single-point/multi-point separation of step 2) has produced the single-point and multi-point sample sets, features of different types are extracted for each single-point training sample from the radar's basic measurements and their combinations. Multi-domain feature extraction is performed on the multi-point training sample set, hand-crafting five corresponding types of features from five different domains (the shape, velocity, energy, position, and combination domains); all features reflect the target's shape, velocity, attitude, material, and position characteristics. Six-dimensional features are extracted from each single-point sample, together generating the single-point training feature set, and 125-dimensional features are extracted from each multi-point sample, together generating the multi-point training feature set.
The method specifically comprises single-point training sample feature extraction and multi-point training sample feature extraction:
3.1) Single-point training-sample feature extraction: after the 6-dimensional features are extracted from all single-point training samples, the features of all single-point training samples form a single-point target training feature set F_o = [fea_{i,j}], where fea_{i,j} denotes the j-th feature of the i-th single-point sample and F_o is the single-point feature matrix composed of all features of all single-point training samples; the matrix dimension is N_o × M_o, N_o denoting the number of single-point samples and M_o the feature dimension extracted for a single-point sample.
3.2) Multi-point training-sample feature extraction: multi-domain features are extracted from all multi-point training samples; after the 125-dimensional features are extracted, the features of all multi-point training samples form a multi-point target training feature set F_l = [fea_{i,j}], where fea_{i,j} denotes the j-th feature of the i-th multi-point sample and F_l is the multi-point feature matrix composed of all features of all multi-point samples; the matrix dimension is N_l × M_l, N_l denoting the number of multi-point samples and M_l the feature dimension extracted for a multi-point sample.
4) Classifier training: after step 3) has extracted features of different dimensions for the different target types, and considering the differences in dimension and meaning between the single-point and multi-point features, the single-point and multi-point classifiers are trained separately on the single-point and multi-point target training feature sets, yielding the corresponding classifiers: the single-point classifier is dedicated to classifying single-point targets and the multi-point classifier to classifying multi-point targets.
This step specifically comprises single-point classifier training and multi-point classifier training:
4.1) Single-point classifier training: the single-point training feature set F_o is input into a random forest classifier for single-point target classification, whose output is the combination of each decision tree's road-surface target decisions; the Gini coefficient is selected as the measure of the decision trees' node-splitting attributes; a decision tree terminates when one of the following is satisfied: the tree reaches its maximum depth, the purity of a leaf node reaches the threshold, or the number of samples in a leaf node reaches the set value. Once the termination condition is met, the single-point classifier for single-point target classification is obtained.
4.2) Multi-point classifier training: the multi-point training feature set F_l is input into a random forest classifier for multi-point target classification, whose output is the combination of each decision tree's road-surface target decisions; the Gini coefficient is selected as the measure of the decision trees' node-splitting attributes; a decision tree terminates when one of the following is satisfied: the tree reaches its maximum depth, the purity of a leaf node reaches the threshold, or the number of samples in a leaf node reaches the set value. Once the termination condition is met, the multi-point classifier for multi-point target classification is obtained.
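A minimal training sketch using scikit-learn's random forest is shown below; the stopping hyperparameters (tree count, maximum depth, leaf size) are illustrative placeholders, not values taken from the patent.

```python
from sklearn.ensemble import RandomForestClassifier

def train_point_classifier(features, labels):
    """Train a random forest whose trees split on the Gini coefficient and
    terminate on maximum depth / leaf sample count, mirroring the ending
    conditions above. Hyperparameter values are assumptions."""
    clf = RandomForestClassifier(
        n_estimators=100,      # number of decision trees
        criterion='gini',      # Gini coefficient as the splitting measure
        max_depth=16,          # tree reaches its maximum depth
        min_samples_leaf=2,    # leaf sample count reaches the set value
    )
    clf.fit(features, labels)
    return clf

# One classifier per sample type:
# clf_single = train_point_classifier(F_o, y_single)  # N_o x 6 matrix
# clf_multi = train_point_classifier(F_l, y_multi)    # N_l x 125 matrix
```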
5) Obtaining probability prediction vectors from single-frame samples: when the target-level millimeter-wave radar point cloud test set is used for testing, each frame sample in the single-point test set is classified by the single-point classifier and each frame sample in the multi-point test set by the multi-point classifier. All single-point and multi-point test samples are traversed, the corresponding features are extracted, and classification tests are performed with the corresponding classifiers, yielding probability prediction vectors for all single-point and multi-point test samples.
Because different feature types are extracted and different classifiers trained for single-point and multi-point targets, each frame sample in the target-level single-point test set must be classified by the single-point classifier and each frame sample in the target-level multi-point test set by the multi-point classifier during testing, yielding the probability prediction vector of each sample.
The classification test, traversing all single-point and multi-point test samples, is implemented as follows:
5.1) If the current test sample is a single-point sample, go to step 5.2); otherwise go to step 5.4);
5.2) Extract the 6-dimensional features described in step 3.1) from the current single-point sample;
5.3) Input the features extracted from the current single-point sample into the single-point random forest classifier trained in step 4.1), obtain each decision tree's decision, and tally the output probability of each road-surface target class to obtain the corresponding probability prediction vector; if test samples remain, go to step 5.1), otherwise exit the traversal loop;
5.4) Extract the 125-dimensional features described in step 3.2) from the current multi-point sample;
5.5) Input the features extracted from the current multi-point sample into the multi-point random forest classifier trained in step 4.2), obtain each decision tree's decision, and tally the output probability of each road-surface target class to obtain the corresponding probability prediction vector; if test samples remain, go to step 5.1), otherwise exit the traversal loop.
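The traversal of steps 5.1)-5.5) can be sketched as a single dispatch routine; extract_single and extract_multi are hypothetical callables standing in for the feature extraction of steps 3.1) and 3.2):

```python
import numpy as np

def predict_frame(sample, clf_single, clf_multi, extract_single, extract_multi):
    """Route one test sample to the matching classifier and return its
    probability prediction vector (one probability per target class)."""
    if len(sample['points']) == 1:
        feat = extract_single(sample)   # 6-dimensional features
        clf = clf_single
    else:
        feat = extract_multi(sample)    # 125-dimensional features
        clf = clf_multi
    return clf.predict_proba(np.asarray(feat).reshape(1, -1))[0]
```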
6) Fusing the probability prediction vectors of multi-frame samples for classification: decision fusion is simple to compute and improves classification accuracy without significantly increasing the system's computing and storage load, so the probability prediction vectors of adjacent multi-frame samples are fused. In both the single-point and multi-point tests, the probability prediction vectors of the two frames preceding the current frame are looked up by the test sample's frame number and accumulated with the current frame's probability prediction vector, yielding the current frame's final probability prediction vector after decision fusion. All single-point and multi-point test samples are traversed, fusing the probability prediction vectors of the current frame and the two adjacent preceding frames to obtain each sample's final classification result, completing the test process and the target feature extraction and classification of millimeter-wave radar point cloud data.
Existing millimeter-wave-radar road-surface target classification methods are based either on RD data or on point cloud data. RD-based methods consume large amounts of storage and computing resources and have poor real-time performance; existing point-cloud-based methods extract few types and numbers of features, which are often insufficient, so both the overall and the single-point recognition performance are poor.
To avoid massive consumption of storage and computing resources, the invention classifies point cloud data; to fully exploit the point cloud data and improve recognition performance, single-point and multi-point targets are processed separately, with multiple different feature types extracted and separate single-point and multi-point classifiers trained for each. To further improve recognition performance without significantly increasing resource consumption, the invention performs decision fusion on the probability prediction vectors of multi-frame samples to obtain more accurate classification results.
The point cloud data form adopted by the invention occupies little storage and computing capacity and offers good real-time performance; the scheme of extracting features from and classifying single-point and multi-point targets separately is better suited to road-surface target classification tasks and achieves higher recognition performance; and fusing the probability prediction vectors of multi-frame samples further improves recognition performance without significantly increasing resource consumption.
Example 2
The target feature extraction and classification method based on millimeter-wave radar point cloud data is the same as in Example 1. The specific steps of motion-compensating multi-frame data for data fusion when generating the target-level point cloud data set in step 1.3) are as follows:
1.3.1) Record the first frame of point cloud data in the current time window together with the vehicle's position and attitude at that moment; the current frame is initially the first frame.
1.3.2) Locate the nearest subsequent frame of point cloud data according to the nextstamp information of the current frame; if the next frame lies outside the current time window, end the data fusion operation; otherwise record the current frame's data and the vehicle attitude information.
1.3.3) Compute the translation vector T and rotation matrix R from the current frame to the first frame from the difference between the vehicle attitudes of the two frames.
1.3.4) Compute the new coordinates of the current frame's points after transformation into the first frame:

(x', y')ᵀ = R (x, y)ᵀ + T

where (x, y) are the original coordinates of a point in the current frame's point cloud and (x', y') are its coordinates after fusion with the first frame.
Motion compensation during generation of the target-level point cloud data set achieves a better data fusion effect and removes the interference of the ego-vehicle's own motion with the target's motion characteristics.
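A sketch of steps 1.3.3)-1.3.4) as a 2-D rigid transform, assuming the pose difference between the two frames is given as a heading change dtheta (radians) and a translation vector T:

```python
import numpy as np

def compensate_to_first_frame(xy, dtheta, T):
    """Transform current-frame point coordinates (N x 2 array) into the
    first frame of the time window: (x', y') = R (x, y) + T."""
    R = np.array([[np.cos(dtheta), -np.sin(dtheta)],
                  [np.sin(dtheta),  np.cos(dtheta)]])
    return xy @ R.T + np.asarray(T)
```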
Example 3
The target feature extraction and classification method based on millimeter-wave radar point cloud data is the same as in Examples 1-2. In the single-point training-sample feature extraction of step 3.1), 6-dimensional features are extracted for a single-point target from the radar's 4 basic measurements and their combinations, namely: σ, the radar cross-section (RCS) value of the target; v, the target's radial velocity; R, the target-to-radar distance; θ, the target's azimuth angle relative to the radar; σR, the combination of the RCS value and the distance; and v_back, the backward velocity. The combination feature σR and the backward velocity v_back are computed as follows:
The combination σR of the target's RCS value and the target-to-radar distance is computed as:

σR = σ × R²

The target's backward velocity v_back is computed as:

v_back = v / cos(θ)
The other 4 features, σ, v, R, and θ, are the radar's basic measurements and can be obtained directly.
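The 6-dimensional single-point feature vector can then be assembled as in the sketch below; the ordering of the features in the vector is an assumption.

```python
import numpy as np

def single_point_features(sigma, v, r, theta):
    """Four raw radar measurements plus the two combinations defined above;
    theta is the azimuth angle in radians."""
    sigma_r = sigma * r ** 2        # RCS-range combination sigma * R^2
    v_back = v / np.cos(theta)      # backward velocity v / cos(theta)
    return np.array([sigma, v, r, theta, sigma_r, v_back])
```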
Compared with prior art that does not consider the characteristics of single-point targets, the extracted features accurately describe a single-point target with a small number of key features, improving single-point recognition performance at low resource cost.
Example 4
The target feature extraction and classification method based on millimeter-wave radar point cloud data is the same as in Examples 1-3. In the multi-point training-sample feature extraction of step 3.2), 5 types of features are extracted for a multi-point target: target-shape-related features, velocity-distribution-related features, RCS-distribution-related features, position-related features, and combination features, comprising 35 groups and 125 dimensions in total. The specific implementation is as follows:
3.2.1) The extracted target-shape-related features comprise the following 16 groups, 53 dimensions in total:
Feature 1: xyLambda1 and xyLambda2 are the eigenvalues of the covariance matrix of the target point cloud's x and y coordinates; xyLambdaQuad1 and xyLambdaQuad2 are the second powers of those eigenvalues.
Feature 2: majorLength and minorLength are the principal-axis lengths of the 95% confidence ellipse corresponding to the covariance matrix of the target point cloud's x and y coordinates.
Feature 3: nDetects, Compactness, and clusterWidth are the point count, compactness, and width of the target point cloud. The compactness Compactness is the average distance from each point of the point cloud to its center point; the width clusterWidth is the greatest distance between any two points of the point cloud.
Feature 4: the average distance from each point of the target point cloud to the vector corresponding to clusterWidth (the line connecting the two farthest points).
Feature 5: areaRectHull, perimeterRectHull, and densityRectHull are the area, perimeter, and density of the target point cloud's minimum bounding rectangle.
Feature 6: areaConvexHull, perimeterConvexHull, densityConvexHull, and Circularity are the area, perimeter, density, and circularity of the target point cloud's convex hull.
Feature 7: radiusCircleFit and radiusCircleMin are the radii of the target point cloud's best-fit circle and minimum enclosing circle.
Feature 8: the correlation coefficient between the x and y coordinates of the target point cloud's points.
Feature 9: meanClusterLength is the average distance between the points of the target point cloud.
Feature 10: spreadX, madX, varX, stdX, skewX, and kurtX are the spread, mean absolute deviation, variance, standard deviation, skewness, and kurtosis of the target point cloud's x coordinates.
Feature 11: spreadY, madY, varY, stdY, skewY, and kurtY are the spread, mean absolute deviation, variance, standard deviation, skewness, and kurtosis of the target point cloud's y coordinates.
Feature 12: areaXY and densityXY are the area and density of the bounding rectangle corresponding to the target point cloud's spreads in the x and y directions.
Feature 13: spreadRange is the spread of the target point cloud in the range dimension; sqrtRangeSpread, logRangeSpread, and quadRangeSpread are the square root, logarithm, and second power of spreadRange.
Feature 14: spreadAngle is the spread of the target point cloud in the angle dimension; sqrtAngleSpread, logAngleSpread, and quadAngleSpread are the square root, logarithm, and second power of spreadAngle.
Feature 15: madRange, varRange, stdRange, skewRange, and kurtRange are the mean absolute deviation, variance, standard deviation, skewness, and kurtosis of the target point cloud's ranges.
Feature 16: madAngle, varAngle, stdAngle, skewAngle, and kurtAngle are the mean absolute deviation, variance, standard deviation, skewness, and kurtosis of the target point cloud's azimuth angles.
3.2.2) The extracted velocity-distribution-related features comprise the following 4 groups, 18 dimensions in total:
Feature 17: minVelocity, maxVelocity, meanVelocity, spreadVelocity, madVelocity, varVelocity, stdVelocity, skewVelocity, and kurtVelocity are the extrema, mean, spread, mean absolute deviation, variance, standard deviation, skewness, and kurtosis of the target point cloud's radial velocities.
Feature 18: stdVelocityUncomp is the standard deviation of the target point cloud's uncompensated velocities.
Feature 19: sqrtMeanVelocity, logMeanVelocity, and quadMeanVelocity are the square root, logarithm, and second power of the target point cloud's mean velocity meanVelocity.
Feature 20: v1~v5 are a histogram encoding of the target point cloud's velocities.
3.2.3) The extracted RCS-distribution-related features comprise the following 3 groups, 18 dimensions in total:
Feature 21: minRCS, maxRCS, meanRCS, spreadRCS, madRCS, varRCS, stdRCS, skewRCS, kurtRCS, and sumRCS are the extrema, mean, spread, mean absolute deviation, variance, standard deviation, skewness, kurtosis, and sum of the target point cloud's RCS values.
Feature 22: sqrtMeanRCS, logMeanRCS, and quadMeanRCS are the square root, logarithm, and second power of the target point cloud's mean RCS value meanRCS.
Feature 23: RCS1~RCS5 are a histogram encoding of the target point cloud's RCS values.
3.2.4) The extracted position-related features comprise the following 5 groups, 14 dimensions in total:
Feature 24: minRange, maxRange, and meanRange are the extrema and mean of the target point cloud's ranges.
Feature 25: minAngle, maxAngle, and meanAngle are the extrema and mean of the target point cloud's azimuth angles.
Feature 26: meanDistance and meanOrientation give the position of the target point cloud's center point relative to the radar.
Feature 27: minX, maxX, and meanX are the extrema and mean of the target point cloud's x coordinates.
Feature 28: minY, maxY, and meanY are the extrema and mean of the target point cloud's y coordinates.
3.2.5) The extracted combination features comprise the following 7 groups, 22 dimensions in total:
Feature 29: xyvrLambda1, xyvrLambda2, xyvrLambda3, and xyvrLambda4 are the eigenvalues of the covariance matrix of the target point cloud's x and y coordinates, compensated velocity, and RCS; xyvrLambdaQuad1, xyvrLambdaQuad2, xyvrLambdaQuad3, and xyvrLambdaQuad4 are the squares of those eigenvalues.
Feature 30: axisLength1, axisLength2, axisLength3, and axisLength4 are the principal-axis lengths of the 95% confidence ellipsoid corresponding to the covariance matrix of the target point cloud's x and y coordinates, compensated velocity, and RCS.
Feature 31: spreadAngleComp and nDetectsComp are the range-compensated azimuth spread and point count of the target point cloud, where the range-compensated azimuth spread spreadAngleComp is computed as:

spreadAngleComp = spreadAngle × meanDistance

and the range-compensated point count nDetectsComp is computed as:

nDetectsComp = nDetects × meanDistance

Feature 32: rVrLinearity and angleVrLinearity are the correlation coefficients of the target point cloud's velocity with its range and azimuth angle.
Feature 33: majorVrLinearity and minorVrLinearity are the correlation coefficients between velocity and the projection lengths of the target point cloud's points onto the principal axes of the 95% confidence ellipse corresponding to the x/y covariance matrix.
Feature 34: rVrSpread and angleVrSpread are the ratios of the target point cloud's range spread and angle spread to its velocity spread.
Feature 35: majorVrSpread and minorVrSpread are the ratios of the spreads of the projection lengths on the principal axes of the 95% confidence ellipse of the x/y covariance matrix to the velocity spread.
The invention performs multi-domain feature extraction on multi-point targets, hand-crafting five corresponding types of features from five different domains (shape, velocity, energy, position, and combination). Compared with traditional methods that extract few features of few types, these multi-domain features fully reflect the target's shape, velocity, attitude, material, and position characteristics, improving the recognition performance of millimeter-wave-radar-based target classification and better suiting road-surface target classification tasks.
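For illustration, the sketch below computes a small representative subset of these features (shape, velocity, and RCS domains); it covers only a handful of the 125 dimensions, and the histogram bin edges are assumptions.

```python
import numpy as np

def some_multi_point_features(x, y, v, rcs):
    """Illustrative subset of the multi-domain features; names follow the
    feature lists above. Inputs are per-point arrays of one sample."""
    feats = {}
    # Shape domain: eigenvalues of the x/y covariance matrix (feature 1)
    lam = np.linalg.eigvalsh(np.cov(np.vstack([x, y])))
    feats['xyLambda1'], feats['xyLambda2'] = lam[1], lam[0]
    # Shape domain: point count and compactness (feature 3)
    d = np.hypot(x - x.mean(), y - y.mean())
    feats['nDetects'], feats['Compactness'] = len(x), d.mean()
    # Velocity domain: spread and 5-bin histogram encoding (features 17, 20)
    feats['spreadVelocity'] = v.max() - v.min()
    hist, _ = np.histogram(v, bins=5)
    for i, h in enumerate(hist, 1):
        feats[f'v{i}'] = h
    # RCS domain: mean and standard deviation (feature 21)
    feats['meanRCS'], feats['stdRCS'] = rcs.mean(), rcs.std()
    return feats
```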
Example 5
The target feature extraction and classification method based on millimeter-wave radar point cloud data is the same as in Examples 1-4. In step 6), classification by fusing the probability prediction vectors of multi-frame samples means accumulating the probability prediction vectors of the current frame and the two adjacent preceding frames into the current frame's final probability prediction vector, each value of which is the final output probability of one target class. Fusing the probability prediction vectors of adjacent frames improves the recognition rate without significantly increasing the consumption of computing and storage resources. The final output probability prob_final(c) of each target class is computed as:

prob_final(c) = Σ_{k=0}^{2} prob_k(c)

where prob_k(c) is the output probability for the c-th target class from the frame k frames before the current frame, k = 0, 1, 2 indexing the frames participating in fusion, and c = 0, 1, 2, 3, 4 is the category number of a target of interest. The class corresponding to the largest probability in the current frame's final probability prediction vector is the sample's classification result.
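A minimal sketch of this fusion rule, assuming prob_history holds the per-frame probability prediction vectors of one track in time order (fewer than three vectors are summed when the track is new):

```python
import numpy as np

def fuse_and_classify(prob_history):
    """prob_final(c) = sum over k = 0..2 of prob_k(c); the predicted class
    is the argmax of the fused vector."""
    fused = np.sum(prob_history[-3:], axis=0)
    return int(np.argmax(fused)), fused
```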
Compared with traditional classification using only single-frame samples, this decision fusion strategy improves the recognition rate without significantly increasing the consumption of computing and storage resources.
The invention is described in further detail below in terms of the training and testing procedures.
Example 6
The target feature extraction and classification method based on millimeter-wave radar point cloud data is the same as in Examples 1-5; it is described here as two procedures, training and testing.
1) Training process
1.1) Acquiring the training scene-level point cloud data set: scene-level target point cloud data is collected with the millimeter-wave radar data acquisition system to form the training scene-level target point cloud data set, which carries point-by-point category numbers, tracking numbers, and scene numbers.
1.2) Generating the training residual scene-level point cloud data set: according to the class labels, only the point cloud data belonging to pedestrians, pedestrian groups, two-wheelers, cars, and large trucks is retained; the numbers of the remaining point cloud data are readjusted so that target point clouds of the same class share a class number and the same continuously observed target keeps a single tracking number. All retained target point clouds form the training residual scene-level target point cloud data set.
1.3) Generating the training sample set: a sliding window with a fixed time interval is applied to the retained point cloud data over the observation time, and all point clouds inside the current time window are motion-compensated and fused; the target point clouds that belong to the same time window and share a tracking number form one training sample, and training samples with the same tracking number across consecutive time windows carry frame numbers that increase sequentially with time. All training samples form the training sample set.
1.4) Single-point and multi-point training-sample separation: according to the number of points in each sample's point cloud, the training sample set is divided into single-point samples with exactly one target point and multi-point samples with two or more target points, forming the single-point and multi-point training sample sets respectively.
1.5) Single-point training-sample feature extraction: 6-dimensional features are extracted from each single-point sample, and all single-point sample features form the single-point training feature set F_o = [fea_{i,j}], where fea_{i,j} denotes the j-th feature of the i-th single-point sample and F_o is the matrix composed of all features of all single-point samples, with dimension N_o × M_o, N_o being the number of single-point samples and M_o the feature dimension.
1.6) Single-point classifier training: the feature set F_o is input into a random forest classifier for single-point target classification, whose output combines the road-surface target decisions of each decision tree in the forest. The Gini coefficient is selected as the measure of the decision trees' node-splitting attributes; a decision tree terminates when one of the following is satisfied: the tree reaches its maximum depth, the purity of a leaf node reaches the threshold, or the number of samples in a leaf node reaches the set value. Once the termination condition is met, the trained single-point classifier is obtained.
1.7) Multi-point training-sample feature extraction: 125-dimensional features are extracted for each multi-point sample, and all multi-point sample features form the multi-point training feature set F_l = [fea_{i,j}], where fea_{i,j} denotes the j-th feature of the i-th multi-point sample and F_l is the matrix composed of all features of all multi-point samples, with dimension N_l × M_l, N_l being the number of multi-point samples and M_l the feature dimension.
1.8) Multi-point classifier training: the feature set F_l is input into a random forest classifier for multi-point target classification, whose output combines the road-surface target decisions of each decision tree in the forest. The Gini coefficient is selected as the measure of the decision trees' node-splitting attributes; a decision tree terminates when one of the following is satisfied: the tree reaches its maximum depth, the purity of a leaf node reaches the threshold, or the number of samples in a leaf node reaches the set value. Once the termination condition is met, the trained multi-point classifier is obtained.
2) Test procedure
2.1) Acquiring a scene-level point cloud dataset for testing: in a new scene, new scene-level target point cloud data are acquired with the millimeter wave radar data acquisition system, forming the scene-level target point cloud dataset for testing; the dataset carries point-by-point class numbers, tracking numbers and scene numbers.
2.2) Generating a test residual scene-level point cloud dataset: according to the class numbers in the dataset, only the point cloud data of pedestrians, pedestrian groups, two-wheelers, cars and large trucks are retained; target point clouds of the same class share the same class number, continuously observed instances of the same target share the same tracking number, and all retained target point clouds form the test residual scene-level target point cloud dataset;
2.3) Generating a test sample set: a sliding window with a fixed time interval is applied to the retained point cloud data over the observation time, and all point clouds within the current time window are motion-compensated and fused; the target point clouds that belong to the same time window and share the same tracking number form one test sample, and test samples with the same tracking number across several consecutive time windows are given frame numbers that increase with time;
2.4) Classifying all test samples: all test samples in the test sample set are traversed and classified sample by sample, repeating the following steps 2.5.1) to 2.5.6) until all targets are classified.
2.5.1) If the current sample is a single-point sample, go to step 2.5.2); otherwise go to step 2.5.4).
2.5.2) Extract 6-dimensional features from the current single-point sample using the single-point training sample feature extraction method of step 1.5) of the training procedure.
2.5.3) Classify the features of the current single-point sample with the single-point classifier obtained in step 1.6) of the training procedure, collect the output probability of each road-surface target class to obtain the probability prediction vector of the current single-point sample, and go to step 2.5.6) for decision fusion, which yields the classification result of the current frame sample.
2.5.4) Extract the 125-dimensional features of the current multi-point sample using the multi-point training sample feature extraction method of step 1.7) of the training procedure.
2.5.5) Classify the features of the current multi-point sample with the multi-point classifier obtained in step 1.8) of the training procedure, collect the output probability of each road-surface target class to obtain the probability prediction vector of the current multi-point sample, and go to step 2.5.6) for decision fusion, which yields the classification result of the current frame sample.
2.5.6) According to the frame number of the target, find the 2 frame samples immediately preceding the current frame and fuse their classifier probability prediction vectors with that of the current frame to obtain the final probability prediction vector of the current frame sample; the class corresponding to the maximum probability value in this vector is the predicted class of the sample. A sketch of this fusion follows.
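A minimal sketch of this fusion step, assuming each tracked target keeps its per-frame 5-class probability prediction vectors in frame order; the class ordering in the usage comment is an assumption.

import numpy as np

def fuse_and_classify(prob_history):
    # prob_history: list of per-frame probability prediction vectors for
    # one tracked target, ordered by frame number; each vector has one
    # entry per class.
    fused = np.sum(prob_history[-3:], axis=0)   # current + up to 2 previous frames
    return int(np.argmax(fused))                # class with the largest fused probability

# Example with classes ordered (pedestrian, pedestrian group, two-wheeler,
# car, large truck):
# fuse_and_classify([[.2,.1,.1,.5,.1], [.1,.1,.2,.5,.1], [.2,.1,.1,.4,.2]]) -> 3 (car)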
The invention solves the technical problems of existing methods, namely large consumption of storage and computing resources and a low recognition rate. The implementation process is: 1) preprocess the dataset; 2) separate single-point and multi-point samples; 3) extract training sample features; 4) train the classifiers; 5) obtain a probability prediction vector from each single-frame sample; 6) fuse the probability prediction vectors of multi-frame samples for classification. The invention processes single-point and multi-point targets separately, performs multi-domain feature extraction on multi-point targets, and performs decision fusion over multi-frame samples during testing. This alleviates the low recognition rate of single-point targets, improves the overall recognition rate, reduces computing and storage resource consumption, and offers good real-time performance.
The technical effects of the present invention are further described below with reference to experiments and their data.
Example 7
The method for extracting and classifying target features based on millimeter wave radar point cloud data is the same as in examples 1 to 6.
Experimental data: the invention is verified on the public millimeter wave radar dataset RadarScenes. The dataset was collected with a data acquisition vehicle carrying four radars over more than 4 hours of driving on different road sections and under different weather conditions; it contains more than 7500 unique objects in 11 classes and provides point-by-point labels including class number, tracking number and scene number. The invention retains the millimeter wave radar point cloud data belonging to pedestrians, pedestrian groups, two-wheelers, cars and large trucks, and the data fusion time is set to 180 ms.
Experimental contents: after preprocessing of the scene-level millimeter wave radar point cloud data is completed, the respective random forest classifiers are trained on the single-point and multi-point training sets; for the test data, the corresponding single-point or multi-point features are extracted according to the number of points in each sample, the trained single-point or multi-point classifier is used to obtain the probability prediction vector of the current frame sample, and finally decision fusion over multi-frame samples yields the final classification result.
The comparison method uses conventional features and does not process single-point and multi-point targets separately. The classification macro-F1 is then computed for both methods; a higher macro-F1 indicates better recognition performance. The comparison results of this experiment are shown in Table 1.
Table 1. Classification macro-F1 of the conventional classification method and of the proposed method
As Table 1 shows, when classifying the 5 classes of road-surface targets, the invention improves the macro-F1 for both all targets and single-point targets relative to the conventional classification method: about 8% for all targets and about 5% for single-point targets, a clearly better overall recognition performance. Since the method differs from the conventional one mainly in the feature extraction scheme and in whether single-point and multi-point targets are processed separately, the performance gain comes from these two points: features better suited to the classification task describe the objects' characteristics more fully and raise overall performance, while processing single-point and multi-point targets separately makes the features and classifiers more targeted and so improves single-point target recognition.
In summary, the target feature extraction and classification method based on millimeter wave radar point cloud data solves the technical problems of large storage and computing resource consumption and low recognition rate in existing methods. The implementation comprises: acquiring target data and preprocessing the scene-level millimeter wave radar point cloud dataset into a target-level point cloud dataset; separating single-point and multi-point samples in the target-level point cloud dataset to generate single-point and multi-point sample sets for training and testing; extracting the corresponding single-point and multi-point training feature sets and training the corresponding single-point and multi-point classifiers; obtaining probability prediction vectors of single-frame test samples with the single-point and multi-point classifiers respectively; and fusing the probability prediction vectors of multi-frame samples for classification. The innovations are that single-point and multi-point targets are processed separately, multi-domain feature extraction is performed on multi-point targets, and decision fusion is performed over multi-frame samples during testing. This alleviates the low recognition rate of single-point targets, improves the overall recognition rate, reduces computing and storage resource consumption, and offers good real-time performance. The method can be applied to road-surface target classification tasks in autonomous driving.

Claims (4)

1. The target feature extraction and classification method based on millimeter wave radar point cloud data, characterized in that single-point and multi-point targets are processed separately, multi-domain feature extraction is performed on multi-point targets, and decision fusion is performed over multi-frame samples during testing, the method comprising the following steps:
1) Preprocessing the dataset: the data acquisition system of the millimeter wave radar acquires a scene-level point cloud dataset with class numbers, tracking numbers and scene numbers; the point cloud data of the targets of interest are retained according to the class numbers and the remaining point cloud data are renumbered to obtain the residual scene-level point cloud dataset; sliding windows and data fusion are applied so that each residual scene-level point cloud dataset generates a number of target-level point cloud samples; target-level point cloud samples with the same tracking number are given frame numbers that increase with time, all target-level point cloud samples form the target-level millimeter wave radar point cloud dataset with frame numbers, and this dataset is randomly divided into a training set and a testing set according to scene number;
2) Single-point/multi-point sample separation: the target-level millimeter wave radar point cloud datasets for training and for testing are each divided into single-point and multi-point training/testing sample sets; the division criterion is the number of target points in the current sample: a sample with exactly one target point is a single-point sample, and a sample with two or more target points is a multi-point sample;
3) Training sample feature extraction: for each single-point training sample in the single-point target training sample set, different types of features are extracted from the radar's basic measurements and their combinations; multi-domain feature extraction is performed on each multi-point training sample in the multi-point target training sample set, with five corresponding types of features manually extracted from five different domains for each sample, all features reflecting the shape, velocity, attitude, material and position characteristics of the target; 6-dimensional features are extracted from each single-point sample to generate the single-point training feature set, and 125-dimensional features are extracted from each multi-point sample to generate the multi-point training feature set;
3.1) Single-point training sample feature extraction: after 6-dimensional features are extracted from all single-point training samples, the single-point training sample features generate the single-point target training feature set, a single-point feature matrix containing all feature components of all single-point training samples whose entry fea_i,j is the j-th feature of the i-th single-point sample; the matrix has dimension N_o×M_o, where N_o is the number of single-point samples and M_o is the feature dimension of the features extracted from single-point samples;
3.2) Multi-point training sample feature extraction: multi-domain features are extracted from all multi-point training samples; after the 125-dimensional features are extracted, all multi-point training sample features form the multi-point target training feature set, a multi-point feature matrix containing all feature components of all multi-point samples whose entry fea_i,j is the j-th feature of the i-th multi-point sample; the matrix has dimension N_l×M_l, where N_l is the number of multi-point samples and M_l is the feature dimension of the features extracted from multi-point samples;
4) Training the classifiers: corresponding classifiers are trained on the single-point and multi-point target training feature sets respectively, yielding a single-point classifier for single-point target classification and a multi-point classifier for multi-point target classification;
4.1) Single-point classifier training: the single-point training feature set is input to a random forest classifier for single-point target classification, whose output is the combination of the road-surface target decisions of each decision tree in the random forest; the Gini coefficient is selected to measure the node-splitting attribute of the decision trees; the stopping condition of a decision tree is one of the following: the tree reaches the maximum depth, the purity of a leaf node reaches the threshold, or the number of samples in a leaf node reaches the set value; after the stopping condition is met, the single-point classifier for single-point target classification is obtained;
4.2) Multi-point classifier training: the multi-point training feature set is input to a random forest classifier for multi-point target classification, whose output is the combination of the road-surface target decisions of each decision tree in the random forest; the Gini coefficient is selected to measure the node-splitting attribute of the decision trees; the stopping condition of a decision tree is one of the following: the tree reaches the maximum depth, the purity of a leaf node reaches the threshold, or the number of samples in a leaf node reaches the set value; after the stopping condition is met, the multi-point classifier for multi-point target classification is obtained;
5) Obtaining probability prediction vectors from single-frame samples: when the target-level millimeter wave radar point cloud test set is called for testing, each frame sample in the single-point target test set is classified by the single-point classifier and each frame sample in the multi-point target test set is classified by the multi-point classifier; all single-point and multi-point test samples are traversed, the corresponding single-point or multi-point target features are extracted, and classification tests with the corresponding single-point or multi-point classifiers yield the probability prediction vector of each sample;
6) Fusing the probability prediction vectors of multi-frame samples for classification: in both the single-point and the multi-point test, the probability prediction vectors of the 2 frame samples immediately preceding the current frame are found according to the frame number of the test sample and accumulated with the probability prediction vector of the current frame; after this decision fusion the final probability prediction vector of the current frame is obtained, and the class corresponding to its maximum probability value is the classification result of the sample; all single-point and multi-point test samples are traversed and the probability prediction vectors of the current frame and the adjacent previous 2 frames are fused to obtain the final classification result of each sample, completing the test procedure and hence the target feature extraction and classification of millimeter wave radar point cloud data.
2. The method for extracting and classifying target features based on millimeter wave radar point cloud data according to claim 1, wherein the single-point training sample feature extraction in step 3.1) extracts 6-dimensional features for the single-point target, the features being: σ, the radar cross section (RCS) value of the target; v, the radial velocity of the target; R, the distance between the target and the radar; θ, the azimuth angle of the target relative to the radar; σR, the combination of the RCS value and the distance; and v_back, the backward velocity;
the combination σR of the target RCS value and the target-radar distance is computed as:

σR = σ × R²
the target backward velocity v_back is computed as:

v_back = v / cos(θ)
the other 4 features, σ, v, R and θ, are basic measurement values of the radar and are obtained directly (a sketch of the full 6-dimensional computation follows this claim).
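As a worked illustration of claim 2 only, the Python sketch below assembles the 6-dimensional single-point feature vector from the four basic radar measurements; the function and argument names are illustrative, not from the patent.

import math

def single_point_features(sigma, v, r, theta):
    # sigma: RCS value; v: radial velocity; r: target-radar distance;
    # theta: azimuth angle in radians, assumed away from +/-90 degrees
    # so that cos(theta) does not vanish.
    sigma_r = sigma * r ** 2        # combination feature: sigmaR = sigma x R^2
    v_back = v / math.cos(theta)    # backward velocity: v_back = v / cos(theta)
    return [sigma, v, r, theta, sigma_r, v_back]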
3. The method for extracting and classifying target features based on millimeter wave radar point cloud data according to claim 1, wherein the multi-point training sample feature extraction in step 3.2) extracts 125-dimensional features for the multi-point target, the features comprising 5 types, namely target shape related features, velocity distribution related features, RCS distribution related features, position related features and combination features, as follows:
The extracted target shape related features comprise 16 groups totalling 53 dimensions, including: xyLambda1, xyLambda2, xyLambdaQuad1 and xyLambdaQuad2, the eigenvalues of the covariance matrix of the x and y coordinates of the target point cloud and their squares; majorLength and minorLength, the principal axes of the 95% confidence ellipse corresponding to that covariance matrix; nDetects, compactness and clusterWidth, the number of points, compactness and width of the target point cloud; maxDistDev, the average distance from each point of the target point cloud to the vector corresponding to clusterWidth; areaRectHull, perimeterRectHull and densityRectHull, the area, perimeter and density of the minimum bounding rectangle of the target point cloud; areaConvexHull, perimeterConvexHull, densityConvexHull and circularity, the area, perimeter, density and circularity of the convex hull of the target point cloud; radiusCircleFit and radiusCircleMin, the radii of the best-fit circle and the minimum enclosing circle of the target point cloud; xyLinearity, the correlation coefficient of the x and y coordinates of the points of the target point cloud; meanClusterWidth, the average distance between the points of the target point cloud; spreadX, madX, varX, stdX, skewX and kurtX, the spread, mean absolute deviation, variance, standard deviation, skewness and kurtosis of the x coordinates of the target point cloud; spreadY, madY, varY, stdY, skewY and kurtY, the same statistics of the y coordinates; and areaXY and densityXY, the area and density of the bounding rectangle given by the spreads of the target point cloud in the x and y directions (part of this computation is sketched after this claim);
The extracted target velocity distribution features comprise the following 4 groups totalling 18 dimensions: minVelocity, maxVelocity, meanVelocity, spreadVelocity, madVelocity, varVelocity, stdVelocity, skewVelocity and kurtVelocity, the extreme values, mean, spread, mean absolute deviation, variance, standard deviation, skewness and kurtosis of the radial velocity of the target point cloud; stdVelocityUncomp, the standard deviation of the uncompensated velocity of the target point cloud; meanVelocity, sqrtMeanVelocity, logMeanVelocity and quadMeanVelocity, the mean velocity of the target point cloud and its square root, logarithm and square; and v_1 to v_5, the histogram encoding of the velocity of the target point cloud;
The extracted target RCS distribution related features comprise the following 3 groups totalling 18 dimensions: minRCS, maxRCS, meanRCS, spreadRCS, madRCS, varRCS, stdRCS, skewRCS, kurtRCS and sumRCS, the extreme values, mean, spread, mean absolute deviation, variance, standard deviation, skewness, kurtosis and sum of the RCS of the target point cloud; meanRCS, sqrtMeanRCS, logMeanRCS and quadMeanRCS, the mean RCS of the target point cloud and its square root, logarithm and square; and RCS_1 to RCS_5, the histogram encoding of the RCS of the target point cloud (the velocity and RCS statistics share one recipe, sketched after this claim);
The extracted target position related features comprise the following 5 groups totalling 14 dimensions: minRange, maxRange and meanRange, the extreme values and mean of the range of the target point cloud; minAngle, maxAngle and meanAngle, the extreme values and mean of the azimuth angle of the target point cloud; meanDistance and meanOrientation, the position of the centre point of the target point cloud relative to the radar; minX, maxX and meanX, the extreme values and mean of the x coordinates of the target point cloud; and minY, maxY and meanY, the extreme values and mean of the y coordinates of the target point cloud;
The extracted target combination features comprise the following 7 groups totalling 22 dimensions: xyvrLambda1, xyvrLambda2, xyvrLambda3, xyvrLambda4, xyvrLambdaQuad1, xyvrLambdaQuad2, xyvrLambdaQuad3 and xyvrLambdaQuad4, the eigenvalues of the covariance matrix of the x and y coordinates, compensated velocity and RCS of the target point cloud, and the squares of those eigenvalues; axisLength1, axisLength2, axisLength3 and axisLength4, the principal-axis lengths of the 95% confidence ellipse corresponding to that covariance matrix; spreadAngleComp and nDetectsComp, the compensated angular spread of the target point cloud and the number of compensated target points; rVrLinearity and angleVrLinearity, the correlation coefficients of the velocity of the target point cloud with its range and with its azimuth angle; majorVrLinearity and minorVrLinearity, the correlation coefficients of the projection lengths of the target points onto the principal axes of the 95% confidence ellipse of the x-y covariance matrix with the velocity; rVrSpread and angleVrSpread, the ratios of the range spread and the angle spread of the target point cloud to its velocity spread; and majorVrSpread and minorVrSpread, the ratios of the projection-length spreads of the target point cloud on the principal axes of the 95% confidence ellipse of the x-y covariance matrix to the velocity spread.
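A sketch of a few of the claim-3 shape features, assuming a cloud of at least 3 non-collinear points; only a subset of the 53 dimensions is shown, and the feature names follow the claim.

import numpy as np
from scipy.spatial import ConvexHull

def shape_features(xy):
    # xy: (N, 2) array of per-point x, y coordinates, N >= 3 and not collinear.
    cov = np.cov(xy.T)                                  # 2x2 covariance of x and y
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]        # eigenvalues, descending
    hull = ConvexHull(xy)                               # 2-D hull: .volume is area,
    area, perimeter = hull.volume, hull.area            # .area is perimeter
    std_x, std_y = xy.std(axis=0)
    return {
        'xyLambda1': lam[0], 'xyLambda2': lam[1],
        'xyLambdaQuad1': lam[0] ** 2, 'xyLambdaQuad2': lam[1] ** 2,
        'nDetects': len(xy),
        'areaConvexHull': area, 'perimeterConvexHull': perimeter,
        'densityConvexHull': len(xy) / area,
        'stdX': std_x, 'stdY': std_y,
    }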
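The velocity and RCS distribution groups share one statistical recipe, sketched below; 'spread' is taken here as max minus min, and the histogram bin edges are an assumption, since the claim does not specify the encoding.

import numpy as np
from scipy import stats

def distribution_features(x, prefix, n_bins=5):
    # x: 1-D array of per-point radial velocities (prefix='Velocity')
    # or RCS values (prefix='RCS').
    feats = {
        'min' + prefix: x.min(), 'max' + prefix: x.max(),
        'mean' + prefix: x.mean(),
        'spread' + prefix: x.max() - x.min(),            # assumed: max - min
        'mad' + prefix: np.abs(x - x.mean()).mean(),     # mean absolute deviation
        'var' + prefix: x.var(), 'std' + prefix: x.std(),
        'skew' + prefix: stats.skew(x), 'kurt' + prefix: stats.kurtosis(x),
        'sqrtMean' + prefix: np.sqrt(np.abs(x.mean())),
        'logMean' + prefix: np.log(np.abs(x.mean()) + 1e-9),
        'quadMean' + prefix: x.mean() ** 2,
    }
    hist, _ = np.histogram(x, bins=n_bins)               # v_1..v_5 / RCS_1..RCS_5
    for k, count in enumerate(hist, start=1):
        feats[prefix.lower() + '_' + str(k)] = count
    return feats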
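A sketch of the eigenvalue-based combination features, under the assumption that xyvrLambda1-4 are the eigenvalues of the 4x4 covariance matrix of the per-point x, y, compensated velocity and RCS values; only a subset of the 22 dimensions is shown.

import numpy as np

def combination_features(x, y, v_comp, rcs):
    # x, y, v_comp, rcs: 1-D arrays of per-point coordinates,
    # compensated velocity and RCS.
    data = np.vstack([x, y, v_comp, rcs])
    lam = np.sort(np.linalg.eigvalsh(np.cov(data)))[::-1]   # 4 eigenvalues, descending
    feats = {'xyvrLambda%d' % (i + 1): lam[i] for i in range(4)}
    feats.update({'xyvrLambdaQuad%d' % (i + 1): lam[i] ** 2 for i in range(4)})
    r = np.hypot(x, y)                                       # per-point range
    feats['rVrLinearity'] = np.corrcoef(r, v_comp)[0, 1]     # velocity-range correlation
    return feats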
4. The method for extracting and classifying target features based on millimeter wave radar point cloud data according to claim 1, wherein the fusion of the probability prediction vectors of multi-frame samples for classification in step 6) accumulates the probability prediction vectors of the current frame and the adjacent previous 2 frames to obtain the final probability prediction vector of the current frame; each value in the prediction vector is the final output probability of one target class, and the final output probability prob_final(c) of each target class is computed as:

prob_final(c) = prob_0(c) + prob_1(c) + prob_2(c)
wherein prob_k(c) is the output probability for the c-th target class at the k-th frame before the current frame, with k = 0, 1, 2 the indices of the frames participating in the fusion; c = 0, 1, 2, 3, 4 is the class number of the target of interest; the class corresponding to the maximum probability value in the final probability prediction vector of the current frame is the classification result of the sample.
CN202111153281.0A 2021-09-29 2021-09-29 Millimeter wave Lei Dadian cloud data-based target feature extraction and classification method Active CN113723365B (en)
