CN117347965A - 4D millimeter-wave radar point cloud image enhancement and recognition system based on a combined neural network - Google Patents


Info

Publication number
CN117347965A
CN117347965A (application CN202311271904.3A)
Authority
CN
China
Prior art keywords
data set
training
point cloud
radar
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311271904.3A
Other languages
Chinese (zh)
Inventor
张华�
胡敏
杨波
邱远帆
刘思奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202311271904.3A priority Critical patent/CN117347965A/en
Publication of CN117347965A publication Critical patent/CN117347965A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/91Radar or analogous systems specially adapted for specific applications for traffic control
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/411Identification of targets based on measurements of radar reflectivity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection


Abstract

A 4D millimeter-wave radar point cloud image enhancement and recognition system based on a combined neural network comprises a data set generation module, a target detection module, a sidelobe suppression module, a radar point cloud target recognition module and a pre-training module. In the invention, a 4D millimeter-wave radar acquires real target data, ideal target data are generated from a radar echo signal model, and the data set generation module assembles different training/verification data sets of real and ideal data according to the needs of each network module; these data sets are sent to the corresponding network models in the training modules to obtain pre-training models. The pre-training module takes a real radar image as input, obtains a high-quality radar image through target detection and sidelobe suppression, generates a 4D point cloud image with a point cloud spatial reorganization algorithm, and sends it to the point cloud target recognition pre-training model to identify the target type, thereby realizing high-quality, high-resolution millimeter-wave radar imaging applied to traffic target recognition.

Description

4D millimeter-wave radar point cloud image enhancement and recognition system based on a combined neural network
Technical Field
The invention belongs to the technical field of millimeter-wave radar imaging, and particularly relates to a 4D millimeter-wave radar point cloud image enhancement and recognition system based on a combined neural network.
Background
Unlike cameras and lidars, millimeter-wave radar is hardly affected by weather such as heavy fog, rain and snow, which enables it to work reliably in severe environments; millimeter-wave radar imaging can present detected targets in real time as intuitive point cloud images displaying distance, speed and azimuth information, so it is a promising solution for future automotive applications. Urban road environments are crowded, and close-range objects with different radar cross-sections (RCS), speeds and directions place higher demands on millimeter-wave radar imaging capability. However, because the total length, total number and topology of the antenna elements are limited, radar images often exhibit sidelobes; under small traffic-target RCS, large channel mismatch and sidelobe spreading, the resulting 4D radar point cloud images are blurred and real target information is difficult to extract. Keeping the radar sidelobe level as low as possible and optimizing radar imaging has therefore become an important research problem in the field of millimeter-wave radar imaging.
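The sidelobe problem described above can be put in numbers: a uniformly weighted aperture (no windowing) has a peak sidelobe only about 13 dB below the mainlobe. A minimal numpy sketch, using an arbitrary element count of N = 16 (an illustration, not a patent value):

```python
import numpy as np

# Far-field pattern of a uniformly weighted N-element array, sampled finely
# via a zero-padded FFT.
N = 16
pad = 4096
pattern = np.abs(np.fft.fft(np.ones(N), pad))
pattern_db = 20 * np.log10(pattern / pattern.max())

# The mainlobe's first null falls at bin pad/N; the highest value between the
# first and second nulls is the peak sidelobe (roughly -13 dB for uniform
# weighting, which is why windowing or network-based suppression is needed).
null = pad // N
first_sidelobe_db = pattern_db[null:2 * null].max()
print(round(first_sidelobe_db, 2))
```

Windowing lowers this sidelobe at the cost of a wider mainlobe, which is the classical trade-off the learned suppression network aims to avoid.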
Over the past few decades, many approaches have been proposed to address the sidelobe and noise issues, such as the Wiener filter (WF), Lucy-Richardson deconvolution (L-R Decon), the coherence factor (CF), the Li algorithm, and classical windowing algorithms. The WF minimizes the mean square error between the restored image and the original image while removing additive noise and inverting the blur, but it requires knowledge of the power spectra of the noise and the original image; in most practical cases this information is unknown or incompletely defined, and an inaccurate noise estimate can lead to poor restoration. L-R Decon, a nonlinear iterative deconvolution method originally used for astronomical image restoration and noise elimination, offers no principled way to choose the number of iterations: more iterations slow the computation, and amplified noise introduces ringing. The CF is defined as the ratio of coherent to incoherent power of the radar signal; it equals 1 at target locations and 0 elsewhere, so multiplying the radar image by the CF effectively suppresses clutter and sidelobes, but the CF has little effect along the range dimension.
In view of these problems, many scholars have in recent years proposed radar image enhancement methods based on convolutional neural networks: an original radar image serves as the input sample, its corresponding ideal sidelobe-free radar image serves as the label, and the trained network can suppress the sidelobes in radar images.
However, this approach has drawbacks:
1. Because the total length, total number and topology of the antenna elements differ, the images a radar generates and their sidelobe characteristics differ each time, so fixed sidelobe suppression network parameters lack generality.
2. Real radar image data sets for neural network training are scarce. Acquiring large numbers of real data sets consumes substantial manpower and financial resources, while the noise and clutter added to simulated data sets differ from real data, and signal interference between radar antennas is not considered.
Disclosure of Invention
In order to overcome the above defects in the prior art, the invention aims to provide a 4D millimeter-wave radar point cloud image enhancement and recognition system based on a combined neural network. A 4D millimeter-wave radar acquires real target data, ideal target data are generated from a radar echo signal model, and the data set generation module assembles different training/verification data sets of real and ideal data according to the needs of each network module and sends them to the corresponding network models in the training modules to obtain pre-training models. The pre-training module takes a real radar image as input, obtains a high-quality radar image through target detection and sidelobe suppression, generates a 4D point cloud image with a point cloud spatial reorganization algorithm, and sends it to the point cloud target recognition pre-training model to identify the target type, thereby realizing high-quality, high-resolution millimeter-wave radar imaging applied to traffic target recognition.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
the 4D millimeter wave Lei Dadian cloud image enhancement and recognition system based on the combined neural network comprises a data set generation module, a target detection module, a sidelobe suppression module, a radar point cloud target recognition module and a pre-training module;
the data set generation module is used for generating a target detection data set, a sidelobe suppression data set and a radar point cloud target identification data set by utilizing an ideal data set generated by the 4D radar signal model and a data set with the inverse acquisition angle of the actual radar platform;
the target detection module is used for building a network model, and training is carried out based on the target detection data set to obtain a target detection pre-training model;
the sidelobe suppression module is used for constructing a deep convolution network model, and training is carried out based on the sidelobe suppression data set to obtain a sidelobe suppression pre-training model;
lei Dadian cloud target recognition module for constructing a deep convolutional neural network model, and training based on the radar point cloud target recognition data set to obtain a radar point cloud target recognition pre-training model;
the pre-training module is used for inputting a data set acquired by an actual scene, obtaining a target data set through a target detection pre-training model and a sidelobe suppression pre-training model, obtaining a radar point cloud image from the target data set through a radar point cloud recombination algorithm, and carrying out target identification on the point cloud image through a radar point cloud target identification pre-training model.
The data set generation module comprises:
(1) Ideal data generation based on the 4D millimeter-wave radar signal model: a mathematical model is first established according to the antenna layout, echo characteristics, and range, velocity and angle measurement principles of a 4D millimeter-wave radar in MIMO-TDM mode;
then, different target information parameters are set in the signal model to obtain a target data stream;
finally, the data stream is processed by a radar signal processing algorithm to obtain the target azimuth-elevation matrix, completing the construction of the ideal data set, which serves as the training data set for each network model;
(2) Real data acquisition based on an actual radar platform: the actual radar platform separately collects data of one corner reflector and of two corner reflectors placed at different azimuths and elevations, realizing the collection of the real data set, which serves as the verification data set for each network model;
(3) Target detection network training/verification data set: according to the ideal and real data acquisition methods in (1) and (2), a 256×256 ideal data set and an acquired real data set are generated as input for target detection network training, with an ideal-to-real ratio of 4:1; the data set comprises the two classes "with target" and "without target" in a ratio of about 1:1 to keep the classes balanced;
(4) Sidelobe suppression network training/verification data set: according to the ideal and real data acquisition methods in (1) and (2), 2-channel 256×256 ideal and real data sets are generated for sidelobe suppression network training, with the real and imaginary parts of the radar data stored in the two channels; the ideal and real data sets are collected in a 4:1 ratio, and the data are divided into two classes, strong targets only and both strong and weak targets, in a ratio of about 1:1;
(5) Radar point cloud target recognition network training/verification data set: from the ideal and real data obtained in (1) and (2), ideal and real point cloud data for radar point cloud target recognition network training are generated by the radar point cloud spatial reorganization algorithm, with an ideal-to-real ratio of 4:1; the point cloud data set comprises the 3 classes pedestrians, cars and trucks.
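Step (1) can be sketched for a single chirp and a single point target. The chirp parameters below are illustrative placeholders, not the patent's radar: in FMCW, a target at range R produces a beat frequency f_b = 2SR/c, so the range FFT should peak at bin f_b/f_s · N.

```python
import numpy as np

# Illustrative FMCW chirp parameters (placeholders, not the patent's radar):
c = 3e8        # speed of light, m/s
S = 30e12      # chirp slope, Hz/s
fs = 10e6      # ADC sample rate, Hz
N = 256        # samples per chirp
R = 12.5       # simulated target range, m

# Ideal (noise-free) beat signal of one point target: f_b = 2*S*R/c
t = np.arange(N) / fs
f_b = 2 * S * R / c
s = np.exp(1j * 2 * np.pi * f_b * t)

# Range FFT: the target should appear at bin f_b/fs * N
spectrum = np.abs(np.fft.fft(s))
peak_bin = int(np.argmax(spectrum[: N // 2]))
print(peak_bin)   # 64 for these parameters
```

Sweeping target parameters (range, velocity, angle) over many chirps and virtual channels, then applying range/Doppler/angle FFTs, yields the azimuth-elevation matrices that make up the ideal data set.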
The target detection module comprises a convolutional neural network model with 4 convolutional layers, 2 fully connected layers and 1 activation layer. The convolutional layers extract radar data features, the fully connected layers vectorize the radar image data, and the activation layer applies the ReLU function to pass a nonlinear decision on the data to the next neurons. The network model takes as input the target detection training/verification data set generated by the data set generation module, learns the features of the input data, outputs the data sets containing targets through iterative training, and sends the resulting target detection pre-training model to the pre-training module.
The sidelobe suppression module builds a fully convolutional neural network based on the ResNet model; it differs from ResNet in that the fully connected layer is removed to keep the output image size unchanged, and 3×3 convolution kernels are used. The fully convolutional network comprises a convolutional layer that extracts features from the input data, 14 residual modules, and a convolutional layer that reduces the dimension of the output; each residual module consists of 2 convolutional layers and a BN layer. The input data is the sidelobe suppression training/verification data set generated by the data set generation module, and training yields an optimal sidelobe suppression pre-training model that is sent to the pre-training module.
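The key property of such a residual module is that it preserves the image size, so the network maps a radar image to a same-sized sidelobe-suppressed image. A minimal numpy sketch with 3×3 "same" filtering (the patent's 5×5 kernels, BN layers and activations are omitted for brevity):

```python
import numpy as np

def conv2d_same(x, k):
    """3x3 shape-preserving ('same') filtering with zero padding."""
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def residual_block(x, k1, k2):
    # Skip connection: output = x + F(x), where F is two shape-preserving
    # filtering layers. Random small kernels stand in for learned weights.
    return x + conv2d_same(conv2d_same(x, k1), k2)

rng = np.random.default_rng(1)
x = rng.standard_normal((256, 256))
y = residual_block(x, 0.1 * rng.standard_normal((3, 3)),
                   0.1 * rng.standard_normal((3, 3)))
print(y.shape)   # (256, 256): the image size is unchanged, as the text requires
```

The skip connection means the block only needs to learn the residual (here, the sidelobe pattern to subtract), which is what makes deep stacks of 14 such modules trainable.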
The radar point cloud target recognition module:
PointNet is used as the network model for radar point cloud target recognition. It receives as input the radar point cloud target recognition training/test data set generated by the data set generation module, and through feature extraction, forward and backward propagation obtains a point cloud target recognition pre-training model, which is sent to the pre-training module.
The point cloud target recognition network is divided into two parts. The first part extracts global features from the point cloud data through matrix transformation, MLP feature extraction and max pooling: the radar point cloud data obtained by radar signal processing is input, aligned by multiplication with a transformation matrix learned by T-Net, point cloud features are then extracted by a multi-layer perceptron and aligned with a further transformation matrix, and finally max pooling over each feature dimension yields the global feature. The second part uses an MLP classifier for point cloud classification or segmentation, predicting the final classification from the global feature through a perceptron.
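The max-pooling step is what makes the global feature independent of point ordering, which matters because radar point clouds have no natural order. A single random-weight layer below stands in for the shared per-point MLP, and the T-Net alignment steps are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 64))   # stand-in for the shared per-point MLP weights

def global_feature(points):
    # Shared "MLP" (one ReLU layer here) applied to every point, then max
    # pooling over the point axis to get one feature vector per cloud.
    h = np.maximum(points @ W, 0.0)   # (n, 64)
    return h.max(axis=0)              # (64,) global feature

pts = rng.standard_normal((128, 3))       # n x 3 point cloud (xyz)
shuffled = pts[rng.permutation(128)]

# Reordering the points does not change the global feature:
print(np.allclose(global_feature(pts), global_feature(shuffled)))   # True
```

Because max pooling is symmetric in its inputs, any permutation of the n points produces the same global feature, which the MLP classifier then maps to a class.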
The pre-training module comprises:
the target detection prediction part, which takes the actual scene data set as input, predicts with the received target detection pre-training model, and sends the output data to the sidelobe suppression prediction part;
the sidelobe suppression prediction part, which receives the target-detected data set, predicts with the sidelobe suppression pre-training model, and outputs the sidelobe-suppressed data set to the radar point cloud spatial reorganization module;
the radar point cloud spatial reorganization module, which receives the sidelobe-suppressed data set, generates point cloud data from the azimuth-elevation matrices of all range units using the range, velocity and angle information, assembles them into a spatial 3D image, and sends it to the radar point cloud target recognition prediction part;
the radar point cloud target recognition prediction part, which receives the radar point cloud image generated by the spatial reorganization module and predicts with the radar point cloud target recognition pre-training model, realizing point cloud target category recognition on the basis of image enhancement.
The data set generation module acquires real radar data with a 77 GHz millimeter-wave radar in TDM-MIMO mode as the verification data sets for the different network models; based on the millimeter-wave radar echo signal mathematical model, an ideal data set is obtained by simulation and used as the training data set for the different network models.
The invention has the beneficial effects that:
(1) The established radar echo signal mathematical model is general: by changing the radar model and performance parameters, new training/verification data sets for the deep learning network models can be generated quickly, efficiently and stably, and the different simulated radar data sets make the published radar data richer and more complete;
(2) The neural network suppresses the sidelobes of real radar images under low signal-to-noise ratio, large channel mismatch, small target radar cross-section and large observation angle, with better performance and robustness than traditional methods;
(3) After suppressing the sidelobes of real radar images with the neural network, high-quality radar images can be acquired and high-resolution 4D radar point cloud images generated, presenting a clear traffic road environment and target information and providing effective traffic road information to traffic departments.
Drawings
Fig. 1 is a block diagram of the system architecture of the present invention.
Fig. 2 shows the data set acquisition of the present invention.
FIG. 3 is a model of an object detection network in the system of the present invention.
Fig. 4 is the sidelobe suppression network model in the system of the present invention.
FIG. 5 is a block diagram of point cloud target identification based on PointNet in the system of the present invention.
Fig. 6 is a flow chart of a point cloud spatial reorganization algorithm.
FIG. 7 is a flow chart of a pre-training module in the system of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, the 4D millimeter-wave radar point cloud image sidelobe suppression and target recognition system based on the combined neural network comprises a data set generation module, a target detection module, a sidelobe suppression module, a radar point cloud target recognition module and a pre-training module.
The data set generation module consists of an ideal data set and a real data set. The ideal data set is obtained from the radar signal model; the real data set is the position information of corner reflectors acquired by an actual millimeter-wave radar platform. From these, training/verification data sets for the target detection, sidelobe suppression and radar point cloud recognition modules are generated according to their functional requirements and in fixed proportions, and the different data sets are sent to the corresponding neural network training modules.
The target detection module, the sidelobe suppression module and the radar point cloud recognition module each receive the data set generated for them by the data set generation module, perform parameter optimization and iterative training with their respective neural network models to obtain suitable pre-training models, and send these to the corresponding prediction parts of the pre-training module.
The pre-training module comprises a target detection part, a sidelobe suppression part, a radar point cloud spatial reorganization part and a radar point cloud target recognition part. Radar data from a real traffic scene is input; the target detection pre-training model yields a radar data set containing targets and passes it to the sidelobe suppression part; the sidelobe suppression pre-training model outputs the radar data set with the azimuth-elevation sidelobes suppressed and passes it to the spatial reorganization part, which obtains the target coordinates from the range and angle information in each range unit and forms a high-quality point cloud image; this image is passed to the radar point cloud target recognition part, whose pre-training model predicts the category of the point cloud targets in the actual traffic scene.
Referring to fig. 2, the data set generation module of the invention acquires real radar data with a 77 GHz millimeter-wave radar in TDM-MIMO mode as the verification data sets for the different network models; based on the millimeter-wave radar echo signal mathematical model, an ideal data set is obtained by simulation as the training data set. As shown in the figure, the target detection training/verification data set is divided into the two cases "with target" and "without target"; the sidelobe suppression training/verification data set is divided into the two cases "strong targets only" and "both strong and weak targets", where a vehicle is a strong target and a pedestrian is a weak target; the radar point cloud target recognition training/verification data set comprises trucks, cars and pedestrians. The complete data sets are acquired and sent to the different network models for training and testing.
Referring to fig. 3, the target detection network model of the invention classifies images by extracting image features with successive combinations of convolution and pooling layers followed by flattening and fully connected layers.
The network model comprises 4 convolutional layers and 2 fully connected layers. For the input of 1600 radar training and 400 verification data sets for target detection, the first convolutional layer convolves with 16 kernels of 21×21×2 (stride 4, padding 2), extracting the feature information contained in the image into a 60×60×16 feature map, which 2×2 pooling reduces to 30×30×16; the second convolutional layer contains 8 kernels of 5×5 (stride 1, padding 0) and 2×2 pooling, after which the image size becomes 11×11×8; the third convolutional layer contains 4 kernels of 3×3, leaving the image size unchanged; the fourth convolutional layer contains 2 kernels of 3×3 and 2×2 pooling, leaving the size unchanged after convolution and producing a 5×5×2 feature map after pooling. Finally, the feature map extracted by the 4 convolutions is fed into the fully connected layers to obtain a vector representation of the original radar image; the first fully connected layer has 64 output neurons, and the second has 2, the number of classification labels (label 1 for data sets with a target, label 0 for data sets without).
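The 60×60 figure above follows from the standard convolution output-size formula; a quick check in plain Python, using only the first-layer parameters quoted above:

```python
def conv_out(n, k, s=1, p=0):
    """Output side length of a convolution: floor((n - k + 2p) / s) + 1."""
    return (n - k + 2 * p) // s + 1

# First layer of the detection network described above:
# 256x256 input, 21x21 kernels, stride 4, padding 2 -> 60; 2x2 pooling -> 30
n1 = conv_out(256, 21, s=4, p=2)
print(n1, n1 // 2)   # 60 30
```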
In the network implementation, a ReLU activation function is applied to the fully connected layer and a cross-entropy loss function is used; the learning rate is set to 0.1, the batch size to 10, and the number of iterations to 30. Through feature extraction, forward and backward propagation on the input ideal data set, with the real data set used for model verification, a suitable target detection pre-training model is finally obtained and sent to the pre-training module.
The purpose of this step is to obtain a target detection pre-training model through fully convolutional network training. The model can reject empty (target-free) data sets, preventing the sidelobe suppression module from treating environmental noise and clutter as targets, and thus provides an effective data set for sidelobe suppression.
Referring to fig. 4, the sidelobe suppression network model of the present invention, a ResNet-based fully convolutional neural network, comprises 30 convolutional layers in total: one convolutional layer for feature extraction of the input data, 14 residual modules, and one dimension-reducing convolutional layer.
The input data are the 2000 sidelobe suppression network training data sets and 500 verification data sets sent by the data set generation module. The first feature extraction layer consists of a convolutional layer, a BN layer and an activation layer: the convolutional layer convolves with 2 convolution kernels of size 5×5, the image size is unchanged after the convolution operation, and the activation layer uses a PReLU function. Each residual module consists of 2 convolutional layers and a BN layer, the convolutional layers using 8 convolution kernels of size 5×5 and 4 convolution kernels of size 3×3 respectively. The final layer uses one 1×1 convolutional layer for dimension reduction and outputs 256×256 images. During training, the ideal data set is input, a mean square error loss function is used, the learning rate is set to 0.1, the batch size to 20, and the number of iterations to 50; the real data set is used for model verification, and finally a suitable sidelobe suppression pre-training model is obtained and sent to the pre-training module.
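The residual modules follow the standard ResNet identity-skip pattern, output = input + F(input), so each module only has to learn a correction to its input. A toy 1-D sketch (the transform standing in for the module's convolution and BN layers is made up, purely for illustration):

```python
def residual_block(x, transform):
    """Standard residual connection: add the block's input back to the output
    of its learned transform, element by element."""
    return [xi + fi for xi, fi in zip(x, transform(x))]

# Toy transform standing in for the module's conv + BN layers (illustrative).
halve = lambda values: [0.5 * v for v in values]

out = residual_block([2.0, 4.0], halve)  # [2+1, 4+2] = [3.0, 6.0]
```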
The purpose of this step is to obtain a sidelobe suppression network pre-training model, which performs sidelobe suppression on radar target data in an actual scene, thereby enhancing radar image quality.
Referring to fig. 5, the PointNet-based point cloud target recognition block diagram is mainly divided into two parts. The first part performs global feature extraction on the point cloud data, including matrix transformation, MLP feature extraction and max pooling: the radar point cloud data obtained through radar signal processing (expressed as an n×3 two-dimensional tensor, where n is the number of points and 3 corresponds to the xyz coordinates) is input; the input data is aligned by multiplying it with a transformation matrix learned by T-Net; the features of the point cloud data are then extracted by a multi-layer perceptron and aligned according to a feature transformation matrix; finally, max pooling is performed over each feature dimension to obtain the final global feature. The second part uses an MLP classifier to realize point cloud classification or point cloud segmentation, predicting the final classification from the global feature through a perceptron.
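The per-dimension max pooling is what makes the PointNet-style global feature independent of the order in which points arrive. A minimal stand-alone sketch over raw xyz coordinates (function name illustrative; the real network pools over learned feature dimensions, not coordinates):

```python
def global_max_pool(points):
    """Per-dimension max over an n x 3 point set: a symmetric function,
    so the result is invariant to any reordering of the points."""
    return [max(p[d] for p in points) for d in range(3)]

cloud = [(1.0, 0.0, 2.0), (0.5, 3.0, 1.0), (2.0, 1.0, 0.0)]
feat = global_max_pool(cloud)                      # [2.0, 3.0, 2.0]
# Reordering the points leaves the pooled feature unchanged.
assert global_max_pool(list(reversed(cloud))) == feat
```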
The purpose of this step is to obtain a point cloud target recognition network pre-training model that identifies the type of radar point cloud targets.
Referring to fig. 6, the flow chart of the point cloud space reorganization algorithm of the present invention: a range-azimuth-pitch matrix obtained by the radar signal processing algorithm is input; the position (x, y) and height information (z) of the target point, i.e. the point cloud information, are calculated from the azimuth and pitch information under each range unit; and the point cloud information under all range units is assembled into a point cloud image in a spatial coordinate system.
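One common way to realize this reorganization is a spherical-to-Cartesian conversion per range unit. The exact convention below (azimuth measured in the horizontal x-y plane, pitch measured upward from that plane, angles in radians) is an assumption for illustration, not taken from the patent:

```python
import math

def to_cartesian(rng, azimuth, pitch):
    """Convert one (range, azimuth, pitch) detection to an (x, y, z) point,
    assuming azimuth in the horizontal plane and pitch above it (radians)."""
    x = rng * math.cos(pitch) * math.cos(azimuth)
    y = rng * math.cos(pitch) * math.sin(azimuth)
    z = rng * math.sin(pitch)
    return x, y, z

# A target 10 m away, dead ahead at zero elevation, maps to (10, 0, 0).
x, y, z = to_cartesian(10.0, 0.0, 0.0)
```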
Referring to fig. 7, the flow chart of the pre-training module of the invention: radar range-azimuth-pitch data under a real traffic scene is input; the data sets are screened by the target detection pre-training model, and those containing a target are retained and passed to the sidelobe suppression pre-training part, which suppresses the sidelobes of the target azimuth-pitch dimensions using the learned pre-training model; the range-azimuth-pitch data after sidelobe suppression is then sent to the radar point cloud space reorganization module, which produces a point cloud target image according to the point cloud space reorganization algorithm and outputs it to the radar point cloud target recognition pre-training part; finally, large vehicles, small vehicles and pedestrians in the real traffic scene are recognized by the point cloud target recognition pre-training model.
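The end-to-end flow of fig. 7 amounts to chaining four stages, with empty frames dropped before the costly suppression step. A schematic sketch with stand-in stage functions (all names and stub behaviors are illustrative, not from the patent):

```python
def run_pipeline(frame, detect, suppress, reorganize, recognize):
    """Chain the four pre-training-module stages; frames judged empty by
    the detector are dropped before sidelobe suppression."""
    if not detect(frame):           # target detection: keep only frames with targets
        return None
    cleaned = suppress(frame)       # sidelobe suppression on azimuth-pitch data
    cloud = reorganize(cleaned)     # point cloud space reorganization
    return recognize(cloud)         # point cloud target class

# Stub stages, purely for illustration of the data flow.
label = run_pipeline(
    frame={"has_target": True},
    detect=lambda f: f["has_target"],
    suppress=lambda f: f,
    reorganize=lambda f: [(0.0, 0.0, 0.0)],
    recognize=lambda pts: "pedestrian",
)
```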
The invention provides a 4D millimeter wave radar point cloud image sidelobe suppression and target identification system based on a combined neural network. Firstly, target detection is performed on the original radar image by a neural network to judge whether targets exist in the image; secondly, a ResNet-based fully convolutional neural network performs sidelobe suppression on the radar images containing targets; finally, after the radar point cloud image is obtained through the spatial point cloud reorganization algorithm, point cloud target identification is carried out through a PointNet network.
The ideal data set obtained from the millimeter wave radar echo model enriches the radar data set, and the sidelobe suppression network suppresses the sidelobes of radar azimuth-pitch images, thereby enhancing the 4D point cloud imaging quality of millimeter wave radar traffic targets. The clear and intuitive radar point cloud image information, together with the point cloud target class information obtained from the point cloud target identification network, can accelerate the development of intelligent transportation, and the method has a certain engineering application value.

Claims (10)

1. A 4D millimeter wave radar point cloud image enhancement and recognition system based on a combined neural network, characterized by comprising a data set generation module, a target detection module, a sidelobe suppression module, a radar point cloud target recognition module and a pre-training module;
the data set generation module is used for generating a target detection data set, a sidelobe suppression data set and a radar point cloud target identification data set by utilizing an ideal data set generated by the 4D radar signal model and a corner-reflector data set acquired by the actual radar platform;
the target detection module is used for building a network model and training it based on the target detection data set to obtain a target detection pre-training model;
the sidelobe suppression module is used for constructing a deep convolutional network model and training it based on the sidelobe suppression data set to obtain a sidelobe suppression pre-training model;
the radar point cloud target recognition module is used for constructing a deep convolutional neural network model and training it based on the radar point cloud target identification data set to obtain a radar point cloud target identification pre-training model;
the pre-training module is used for inputting a data set acquired from an actual scene, obtaining a target data set through the target detection pre-training model and the sidelobe suppression pre-training model, obtaining a radar point cloud image from the target data set through the radar point cloud reorganization algorithm, and carrying out target identification on the point cloud image through the radar point cloud target identification pre-training model.
2. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and identification system of claim 1, wherein the data set generation module comprises:
(1) Ideal data generation section based on the 4D millimeter wave radar signal model: to generate ideal data, a mathematical model is first established according to the antenna layout, echo characteristics, and the distance, speed and angle measurement principles of a 4D millimeter wave radar in MIMO-TDM mode;
then, based on the signal model, different target information parameters are set to obtain a target data stream;
finally, the target data stream is processed by the radar signal processing algorithm to obtain a target azimuth-pitch dimension matrix, completing the construction of the ideal data set, which serves as the training data set in each network model;
(2) Real data acquisition section based on the actual radar platform: the actual radar platform respectively acquires data of one corner reflector and data of two corner reflectors placed at different azimuths and pitches, realizing the collection of a real data set, which serves as the verification data set in each network model;
(3) Target detection network training-validation data set part: according to the ideal-data and real-data acquisition methods of steps (1) and (2), an ideal data set input to the target detection training network is generated and a real data set is acquired, the data sets comprising a target class and a no-target class;
(4) Sidelobe suppression network training-verification data set part: according to the ideal-data and real-data acquisition methods of steps (1) and (2), a 2-channel ideal data set and a real data set for sidelobe suppression network training are generated, the 2 channels storing the real part and the imaginary part of the radar data respectively; the data sets are classified into two types, namely strong targets and weak targets;
(5) Radar point cloud target recognition network training-verification data set part: on the basis of the ideal data and real data obtained in steps (1) and (2), ideal and real point cloud data for radar point cloud target recognition network training are generated through the radar point cloud space reorganization algorithm.
3. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and recognition system of claim 2, wherein the point cloud data set comprises 3 classes, namely pedestrians, large vehicles and small vehicles, wherein the vehicles are strong targets and the pedestrians are weak targets.
4. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and recognition system according to claim 1, wherein the target detection module comprises a convolutional neural network model; the network model comprises 4 convolutional layers, 2 fully connected layers and 1 activation layer; radar data features are extracted by the convolutional layers, the fully connected layers vectorize the radar image data of those features, and the activation layer applies a ReLU function to make a nonlinear decision on the vectorized data before outputting it to the next neuron; the network model takes as input the target detection network training-verification data set part generated by the data set generation module, learns the features of the input data set, outputs the data sets containing targets through iterative training, and sends the generated target detection pre-training model to the pre-training module.
5. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and recognition system according to claim 2, wherein the sidelobe suppression module builds a fully convolutional neural network based on the ResNet model, the fully convolutional structure guaranteeing that the size of the output image is unchanged; the fully convolutional neural network comprises a plurality of convolutional layers, including one convolutional layer for extracting features of the input data, 14 residual modules and one convolutional layer for dimension-reduced output, wherein each residual module consists of 2 convolutional layers and a BN layer; the input data adopts the sidelobe suppression network training-verification data set part generated by the data set generation module, and an optimal pre-training model for sidelobe suppression is generated through training and sent to the pre-training module.
6. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and identification system of claim 2, wherein the radar point cloud target recognition module uses PointNet as the network model for radar point cloud target recognition, receives as input the radar point cloud target recognition network training-testing data set part generated by the data set generation module, and obtains a point cloud target recognition pre-training model through feature extraction, forward propagation and back propagation, which is sent to the pre-training module.
7. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and recognition system according to claim 6, wherein the block diagram of the network model for radar point cloud target recognition is mainly divided into two parts: the first part performs global feature extraction on the point cloud data, including matrix transformation, MLP feature extraction and max pooling, wherein the radar point cloud data obtained through radar signal processing is input, the input data is aligned by multiplying it with a transformation matrix learned by T-Net, the point cloud data features are then extracted by a multi-layer perceptron and aligned according to a feature transformation matrix, and finally max pooling is performed over each feature dimension to obtain the final global feature; the second part uses an MLP classifier to realize point cloud classification or point cloud segmentation, predicting the final classification from the global feature through a perceptron.
8. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and identification system of claim 7, wherein the radar point cloud data is represented as an n×3 two-dimensional tensor, where n represents the number of points and 3 corresponds to the xyz coordinates.
9. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and recognition system of claim 1, wherein the pre-training module comprises:
the target detection prediction part inputs the actual scene data set, predicts through the received target detection pre-training model, and sends output data to the sidelobe suppression prediction part;
the sidelobe suppression prediction part is used for receiving the data set detected by the target, predicting the data set through a sidelobe suppression pre-training model, acquiring the data set after sidelobe suppression and outputting the data set to the radar point cloud space recombination module;
lei Dadian cloud space reorganization module receives the data set after sidelobe suppression, generates point cloud data from azimuth pitching matrixes under all distance units through information such as distance, speed and angle, tensors into space stereoscopic images and sends the space stereoscopic images to radar point cloud target identification prediction parts;
lei Dadian cloud target identification prediction part receives the radar point cloud image generated by the radar point cloud space recombination module, predicts by utilizing the radar point cloud target identification pre-training model, and realizes the point cloud target category identification on the basis of image enhancement.
10. The combined neural network-based 4D millimeter wave radar point cloud image enhancement and identification system according to claim 1, wherein the data set generation module acquires real radar data in TDM-MIMO mode with a 77 GHz millimeter wave radar as the verification data sets of the different network models; based on the millimeter wave radar echo signal mathematical model, an ideal data set is obtained through simulation and used as the training data set of the different network models.
CN202311271904.3A 2023-09-28 2023-09-28 4D millimeter wave radar point cloud image enhancement and identification system based on combined neural network Pending CN117347965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311271904.3A CN117347965A (en) 2023-09-28 2023-09-28 4D millimeter wave radar point cloud image enhancement and identification system based on combined neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311271904.3A CN117347965A (en) 2023-09-28 2023-09-28 4D millimeter wave radar point cloud image enhancement and identification system based on combined neural network

Publications (1)

Publication Number Publication Date
CN117347965A true CN117347965A (en) 2024-01-05

Family

ID=89356726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311271904.3A Pending CN117347965A (en) 4D millimeter wave radar point cloud image enhancement and identification system based on combined neural network

Country Status (1)

Country Link
CN (1) CN117347965A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117572376A (en) * 2024-01-16 2024-02-20 烟台大学 Low signal-to-noise ratio weak and small target radar echo signal recognition device and training recognition method
CN117572376B (en) * 2024-01-16 2024-04-19 烟台大学 Low signal-to-noise ratio weak and small target radar echo signal recognition device and training recognition method

Similar Documents

Publication Publication Date Title
CN110472627B (en) End-to-end SAR image recognition method, device and storage medium
CN113902897B (en) Training of target detection model, target detection method, device, equipment and medium
CN109932730B (en) Laser radar target detection method based on multi-scale monopole three-dimensional detection network
CN112561796B (en) Laser point cloud super-resolution reconstruction method based on self-attention generation countermeasure network
Ren et al. Extended convolutional capsule network with application on SAR automatic target recognition
CN117347965A (en) 4D millimeter wave radar point cloud image enhancement and identification system based on combined neural network
CN112668469A (en) Multi-target detection and identification method based on deep learning
He et al. Real-time vehicle detection from short-range aerial image with compressed mobilenet
CN117237919A (en) Intelligent driving sensing method for truck through multi-sensor fusion detection under cross-mode supervised learning
CN116486368A (en) Multi-mode fusion three-dimensional target robust detection method based on automatic driving scene
CN115995042A (en) Video SAR moving target detection method and device
CN116704304A (en) Multi-mode fusion target detection method of mixed attention mechanism
CN113850783B (en) Sea surface ship detection method and system
CN115100741A (en) Point cloud pedestrian distance risk detection method, system, equipment and medium
Paek et al. Enhanced k-radar: Optimal density reduction to improve detection performance and accessibility of 4d radar tensor-based object detection
CN113281718A (en) 3D multi-target tracking system and method based on laser radar scene flow estimation
CN116129234A (en) Attention-based 4D millimeter wave radar and vision fusion method
CN116468950A (en) Three-dimensional target detection method for neighborhood search radius of class guide center point
Zhou et al. Complex background SAR target recognition based on convolution neural network
Piroli et al. Towards Robust 3D Object Detection In Rainy Conditions
CN117612129B (en) Vehicle dynamic perception method, system and dynamic perception model training method
CN116451590B (en) Simulation method and device of automatic driving simulation test platform
CN117611644B (en) Method, device, medium and equipment for converting visible light image into SAR image
Bai et al. Vehicle Detection Based on Deep Neural Network Combined with Radar Attention Mechanism
CN118135442A (en) Target detection and tracking method of anti-unmanned aerial vehicle system based on P2C-YOLOv s and K-KCF

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination