CN110275163B - Millimeter wave radar detection target imaging method based on neural network - Google Patents


Info

Publication number
CN110275163B
CN110275163B (application CN201910574031.0A)
Authority
CN
China
Prior art keywords
millimeter wave radar
convolution
matrix
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910574031.0A
Other languages
Chinese (zh)
Other versions
CN110275163A (en)
Inventor
Zhang Lei
Zhang Bo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingsi Microelectronics (Nanjing) Co.,Ltd.
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201910574031.0A
Publication of CN110275163A
Application granted
Publication of CN110275163B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration by non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20056 Discrete and fast Fourier transform, [DFT, FFT]

Abstract

The invention relates to a millimeter wave radar array imaging method based on a neural network, belonging to the technical field of automatic driving. The method first obtains raw signal data through a millimeter wave radar array and environment point cloud data through a laser radar. An initial feature matrix is obtained after Fourier transformation, and the point cloud data are converted by coordinate transformation into point cloud data whose coordinate origin is the installation position of the millimeter wave radar array. A millimeter wave radar array imaging model is then constructed with a convolutional neural network and trained using the feature matrix of the millimeter wave radar signals and the transformed point cloud data. The method realizes the construction of a millimeter wave radar array imaging model based on a neural network and solves the problem that side-lobe interference signals of the radar antennas are difficult to decouple in traditional radar signal modeling methods.

Description

Millimeter wave radar detection target imaging method based on neural network
Technical Field
The invention relates to a millimeter wave radar detection target imaging method based on a neural network, and belongs to the technical field of automatic driving.
Background
In recent years, environmental perception for automobile driving systems has become a research focus. Millimeter wave radar works in all weather conditions and is inexpensive, and it is often used for target ranging, velocity measurement and angle measurement. Laser radar offers high precision and is used for point cloud imaging, but it is expensive and difficult to mass-produce. To address this problem, a number of international enterprises are researching methods that realize point cloud imaging with millimeter wave radar arrays. Because the side-lobe signals of a millimeter wave radar antenna cannot be completely suppressed at design time, the signals received by a millimeter wave radar array often suffer from side-lobe crosstalk. Decoupling models between the signals of different antennas usually depend on the actual characteristics of the specific millimeter wave radar product and its antennas, and the decoupling of side-lobe crosstalk signals is difficult to achieve with traditional radar signal processing algorithms. A more general method is needed to quickly realize the mapping from millimeter wave radar array signals to point cloud imaging data and target detection data.
Neural network algorithms are widely used to construct system models with complex input-output mapping relations. Their advantage is that the mapping relation does not need to be derived analytically, and through data training their performance can surpass traditional system modeling methods. Their disadvantage is that training a neural network requires a large amount of data so that the loss value of each training iteration can be computed accurately; moreover, in supervised learning the data samples often need to be labeled manually.
Disclosure of Invention
The invention aims to provide a millimeter wave radar detection target imaging method based on a neural network. Exploiting the complementary advantages of the neural network and the millimeter wave radar, the method collects point cloud data with a laser radar operating in the same environment as the millimeter wave radar, applies the corresponding coordinate transformation to the point cloud data, trains the neural network with the transformed point clouds as matched samples, and thereby realizes a high-precision mapping from the radar data to point cloud data and target detection data.
The invention provides a millimeter wave radar detection target imaging method based on a neural network, which comprises the following steps:
(1) Collect millimeter wave radar data and preprocess it to obtain the feature matrix R of the millimeter wave radar; the specific steps are as follows:
(1-1) Set the millimeter wave radar at the origin o of the rectangular coordinate system xyz. The radar carries N groups of antennas; denote by α_i the angle between the detection direction of the i-th antenna group of the millimeter wave radar and the plane xoy of the coordinate system, i = 1, 2, …, N;
(1-2) Collect T frames of raw millimeter wave radar signals Z_i transmitted by the i-th antenna group of the millimeter wave radar;
(1-3) Apply a Fourier transform to each frame of the raw signals Z_i of step (1-2) to obtain a matrix F_i; F_i is a K × T matrix, where K is the length of one raw signal frame; apply a two-dimensional Fourier transform to the matrix F_i to obtain the feature matrix r_i;
(1-4) Traverse the N antenna groups of the millimeter wave radar, repeating step (1-3) to obtain the feature matrices of all N groups, and combine all feature matrices into the feature matrix R of the millimeter wave radar, i.e. R = {r_1; r_2; r_3; …; r_N}; R is a K × T × N-dimensional matrix;
(2) Set a data acquisition point and acquire point cloud data of the detection target to form a three-dimensional point cloud matrix P; the specific steps are as follows:
(2-1) Set a data acquisition point H in the rectangular coordinate system xyz of step (1-1), with coordinates x = 0, y = λ, z = 0. Acquire point cloud data E of the detection target from point H; E contains the position information of M points, where M is set according to the detection accuracy and M > 100. The position information of the ℓ-th of the M points, p_ℓ, is represented by (β_ℓ, φ_ℓ, l_ℓ), ℓ = 1, 2, …, M, where β_ℓ denotes the angle between the line from point H to p_ℓ and the plane xoy of the rectangular coordinate system xyz, and β_ℓ takes any one of the values of the angles α_i, i = 1, 2, …, N; φ_ℓ denotes the angle between the line from H to p_ℓ and the plane yoz of the coordinate system; l_ℓ denotes the distance from point H to p_ℓ;
(2-2) Transform each of the M position tuples (β_ℓ, φ_ℓ, l_ℓ) with the following coordinate transformation to obtain M new position tuples (β_ℓ^new, φ_ℓ^new, l_ℓ^new):
φ_ℓ^new = [equation image in the original]
l_ℓ^new = [equation image in the original]
β_ℓ^new = β_ℓ
(2-3) Since β_ℓ^new = β_ℓ, β_ℓ^new can take only N distinct values. According to the different values of β_ℓ^new, divide the M new position tuples (β_ℓ^new, φ_ℓ^new, l_ℓ^new) of step (2-2) into N groups of (β^new, φ^new, l^new) data, and arrange the data into the three-dimensional point cloud matrix P, i.e. P is an (M/N) × 3 × N matrix;
(3) Construct a convolutional neural network, use it to automatically extract features from the three-dimensional feature matrix R of step (1), and finally output the (M/N) × 3 × N-dimensional point cloud data estimate P̂, forming the millimeter wave radar detection target imaging model; the specific steps are as follows:
(3-1) Construct the input layer of the convolutional neural network. The input matrix of the network is the K × T × N three-dimensional feature matrix R of step (1). The input-layer convolutional network has 100 convolution kernels; the g-th convolution kernel of the input layer is W_g0, g = 1, 2, 3, …, 100; the size of each kernel is 3 × 3 × N and the convolution stride is 1. The activation function of the input-layer convolutional network is the RELU function, and the output matrix D^(0) of the input-layer convolutional network has size K × T × 100. The operation of each convolution kernel is:
D_g = RELU(R * W_g0)
where * is the convolution operator, the convolution kernel W_g0 is a parameter to be trained, and D_g is the output after convolution with each kernel, g = 1, 2, …, 100; all D_g are combined into D^(0);
(3-2) Construct the intermediate layers of the convolutional neural network, which comprise Q convolutional layers. The input matrix of the q-th layer is the output matrix D^(q-1) of the (q-1)-th layer, q = 2, …, Q, and the input matrix of the 1st intermediate layer is the output matrix D^(0) of the input layer of the convolutional neural network. Each layer has 100 convolution kernels; the g-th kernel of the q-th layer is denoted W_gq, with kernel size 3 × 3 × 100 and convolution stride 1; the activation function of every layer is the RELU function; the output matrix D^(q) of the q-th layer has size K × T × 100. The operation of each convolution kernel of the Q-layer convolutional network is:
D_g^(q) = RELU(D^(q-1) * W_gq)
where * is the convolution operator, the convolution kernel W_gq is a parameter to be trained, and D_g^(q) is the output after convolution with each kernel, g = 1, 2, …, 100; all D_g^(q) are combined in order into D^(q);
(3-3) Construct the output layer of the convolutional neural network, which is a single convolutional layer. Its input matrix is the output matrix D^(Q) of the last intermediate layer of the convolutional neural network. The output-layer convolutional network has 3M convolution kernels W_e, e = 1, 2, …, 3M, each of size K × T × 100. The operation of each convolution kernel is:
D_e = D^(Q) * W_e
where * is the convolution operator, the convolution kernel W_e is a parameter to be trained, and D_e is the result of the convolution with each kernel, e = 1, 2, …, 3M; all D_e are combined into a three-dimensional matrix of size (M/N) × 3 × N, which is the millimeter wave radar detection target imaging model, denoted P̂;
(4) Repeat steps (1) and (2) s times to obtain s sample pairs of the feature matrix R and the point cloud matrix P, and iteratively train the convolutional neural network of step (3) with the gradient descent method to obtain the millimeter wave radar detection target imaging model P̂; the specific steps are as follows:
(4-1) Repeat steps (1)-(2) s times to obtain s sample pairs of the feature matrix R and the point cloud matrix P;
(4-2) Traverse the s sample pairs of the feature matrix R and the point cloud matrix P collected in step (4-1) with the gradient descent method, repeating step (3), and train on the training set of step (4-1) all parameters to be trained of the model P̂ of step (3), i.e. W_g0, W_gq and W_e, obtaining the millimeter wave radar detection target imaging model P̂ corresponding to the trained parameters W_g0, W_gq and W_e;
(4-3) Repeat step (4-2) η times, with 50 ≤ η ≤ 100; the millimeter wave radar detection target imaging model corresponding to the η-th set of trained parameters W_g0, W_gq and W_e is the final millimeter wave radar array imaging model P̂.
(5) Use the final millimeter wave radar detection target imaging model obtained in step (4) to image the millimeter wave radar detection target.
The millimeter wave radar detection target imaging method based on the neural network of the invention has the following advantages:
1. Traditional radar target imaging methods have difficulty handling, during signal processing, the interference caused by antenna side-lobe noise. The millimeter wave radar detection target imaging method based on the neural network achieves self-decoupling of the interference noise, without the interference caused by antenna side-lobe noise having to be considered.
2. The millimeter wave radar detection target imaging method based on the neural network solves, through training of the neural network, the problem of automatically mapping the signals of different antennas to the target image, simplifying the target detection and imaging process of the millimeter wave radar.
Drawings
Fig. 1 and Fig. 2 are schematic diagrams of the position of the millimeter wave radar involved in the method of the invention in the rectangular coordinate system.
Fig. 3 is a schematic diagram of the data acquisition point involved in the method of the invention.
In Figs. 1 to 3, 1 is the millimeter wave radar and 2 is an antenna; the detection direction of antenna 2 forms the angle α_i with the plane xoy; β_ℓ denotes the angle between the line from point H to p_ℓ and the plane xoy of the rectangular coordinate system xyz; φ_ℓ denotes the angle between the line from H to p_ℓ and the plane yoz of the coordinate system; l_ℓ denotes the distance from point H to p_ℓ.
Detailed Description
The invention provides a millimeter wave radar detection target imaging method based on a neural network, which comprises the following steps:
(1) Collect millimeter wave radar data and preprocess it to obtain the feature matrix R of the millimeter wave radar; the specific steps are as follows:
(1-1) Set the millimeter wave radar at the origin o of the rectangular coordinate system xyz. The radar carries N groups of antennas; denote by α_i the angle between the detection direction of the i-th antenna group of the millimeter wave radar and the plane xoy of the coordinate system, i = 1, 2, …, N. As shown in Fig. 1 and Fig. 2, the millimeter wave radar 1 is located at the origin o of the rectangular coordinate system xyz, the millimeter wave radar antennas 2 are distributed around the radar, and the detection direction of the i-th group of antennas 2 of the millimeter wave radar 1 forms the angle α_i with the plane xoy of the coordinate system;
(1-2) Collect T frames of raw millimeter wave radar signals Z_i transmitted by the i-th antenna group of the millimeter wave radar;
(1-3) Apply a Fourier transform to each frame of the raw signals Z_i of step (1-2) to obtain a matrix F_i; F_i is a K × T matrix, where K is the length of one raw signal frame; apply a two-dimensional Fourier transform to the matrix F_i to obtain the feature matrix r_i;
(1-4) Traverse the N antenna groups of the millimeter wave radar, repeating step (1-3) to obtain the feature matrices of all N groups, and combine all feature matrices into the feature matrix R of the millimeter wave radar, i.e. R = {r_1; r_2; r_3; …; r_N}; R is a K × T × N-dimensional matrix;
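For illustration, the preprocessing of steps (1-2) to (1-4) can be sketched in Python/NumPy as follows. This is a minimal sketch under stated assumptions, not part of the patent: the raw frames of each antenna group are assumed to be available as a K × T array, and the function and variable names are illustrative.

```python
import numpy as np

def radar_feature_matrix(raw_signals):
    """Build the K x T x N feature matrix R of step (1).

    raw_signals: list of N arrays, each of shape (K, T), holding T frames
    of length-K raw samples Z_i from the i-th antenna group (assumed layout).
    """
    features = []
    for Z_i in raw_signals:
        # Step (1-3): Fourier transform of each frame gives F_i (K x T),
        # then a two-dimensional Fourier transform gives r_i.
        F_i = np.fft.fft(Z_i, axis=0)
        r_i = np.fft.fft2(F_i)
        features.append(r_i)
    # Step (1-4): combine the N feature matrices into R (K x T x N).
    # The result is complex-valued; the patent does not say how complex
    # values are fed to the network, so taking np.abs(R) is one option.
    return np.stack(features, axis=-1)
```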
(2) Set a data acquisition point and acquire point cloud data of the detection target to form a three-dimensional point cloud matrix P; the specific steps are as follows:
(2-1) Set a data acquisition point H in the rectangular coordinate system xyz of step (1-1), with coordinates x = 0, y = λ, z = 0. Acquire point cloud data E of the detection target from point H; E contains the position information of M points, where M is set according to the detection accuracy and M > 100. The position information of the ℓ-th of the M points, p_ℓ, is represented by (β_ℓ, φ_ℓ, l_ℓ), ℓ = 1, 2, …, M, where β_ℓ denotes the angle between the line from point H to p_ℓ and the plane xoy of the rectangular coordinate system xyz, and β_ℓ takes any one of the values of the angles α_i, i = 1, 2, …, N; φ_ℓ denotes the angle between the line from H to p_ℓ and the plane yoz of the coordinate system; l_ℓ denotes the distance from point H to p_ℓ, as shown in Fig. 3;
(2-2) Transform each of the M position tuples (β_ℓ, φ_ℓ, l_ℓ) with the following coordinate transformation to obtain M new position tuples (β_ℓ^new, φ_ℓ^new, l_ℓ^new):
φ_ℓ^new = [equation image in the original]
l_ℓ^new = [equation image in the original]
β_ℓ^new = β_ℓ
(2-3) Since β_ℓ^new = β_ℓ, β_ℓ^new can take only N distinct values. According to the different values of β_ℓ^new, divide the M new position tuples (β_ℓ^new, φ_ℓ^new, l_ℓ^new) of step (2-2) into N groups of (β^new, φ^new, l^new) data, and arrange the data into the three-dimensional point cloud matrix P, i.e. P is an (M/N) × 3 × N matrix;
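The grouping of step (2-3) can be sketched as follows. This is a minimal sketch assuming the coordinate transformation of step (2-2) has already been applied, that each β_ℓ^new matches one of the angles α_i exactly, and that each of the N groups contains exactly M/N points; the transformation formulas themselves appear only as equation images in the source and are not reproduced here.

```python
import numpy as np

def build_point_cloud_matrix(points, alphas):
    """Arrange M transformed points into the (M/N) x 3 x N matrix P of step (2-3).

    points: array of shape (M, 3), rows (beta_new, phi_new, l_new).
    alphas: the N admissible values of beta_new (the antenna angles alpha_i).
    """
    groups = []
    for alpha in alphas:
        # Group the position tuples by the value of beta_new.
        group = points[np.isclose(points[:, 0], alpha)]
        groups.append(group)              # each group: (M/N) x 3
    return np.stack(groups, axis=-1)      # P: (M/N) x 3 x N
```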
(3) Construct a convolutional neural network, use it to automatically extract features from the three-dimensional feature matrix R of step (1), and finally output the (M/N) × 3 × N-dimensional point cloud data estimate P̂, forming the millimeter wave radar detection target imaging model; the specific steps are as follows:
(3-1) Construct the input layer of the convolutional neural network. The input matrix of the network is the K × T × N three-dimensional feature matrix R of step (1). The input-layer convolutional network has 100 convolution kernels (convolution kernel is a common term in the field of machine learning); the g-th convolution kernel of the input layer is W_g0, g = 1, 2, 3, …, 100; the size of each kernel is 3 × 3 × N and the convolution stride is 1. The activation function of the input-layer convolutional network is the RELU function (a well-known function in the field of machine learning), and the output matrix D^(0) of the input-layer convolutional network has size K × T × 100. The operation of each convolution kernel is:
D_g = RELU(R * W_g0)
where * is the convolution operator, the convolution kernel W_g0 is a parameter to be trained, and D_g is the output after convolution with each kernel, g = 1, 2, …, 100; all D_g are combined into D^(0);
(3-2) Construct the intermediate layers of the convolutional neural network, which comprise Q convolutional layers. The input matrix of the q-th layer is the output matrix D^(q-1) of the (q-1)-th layer, q = 2, …, Q, and the input matrix of the 1st intermediate layer is the output matrix D^(0) of the input layer of the convolutional neural network. Each layer has 100 convolution kernels; the g-th kernel of the q-th layer is denoted W_gq, with kernel size 3 × 3 × 100 and convolution stride 1; the activation function of every layer is the RELU function; the output matrix D^(q) of the q-th layer has size K × T × 100. The operation of each convolution kernel of the Q-layer convolutional network is:
D_g^(q) = RELU(D^(q-1) * W_gq)
where * is the convolution operator, the convolution kernel W_gq is a parameter to be trained, and D_g^(q) is the output after convolution with each kernel, g = 1, 2, …, 100; all D_g^(q) are combined in order into D^(q);
(3-3) Construct the output layer of the convolutional neural network, which is a single convolutional layer. Its input matrix is the output matrix D^(Q) of the last intermediate layer of the convolutional neural network. The output-layer convolutional network has 3M convolution kernels W_e, e = 1, 2, …, 3M, each of size K × T × 100. The operation of each convolution kernel is:
D_e = D^(Q) * W_e
where * is the convolution operator, the convolution kernel W_e is a parameter to be trained, and D_e is the result of the convolution with each kernel, e = 1, 2, …, 3M; all D_e are combined into a three-dimensional matrix of size (M/N) × 3 × N, which is the millimeter wave radar detection target imaging model, denoted P̂;
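The architecture of step (3) can be sketched in PyTorch as follows. This is one possible reading, not the patent's reference implementation: the N antenna channels are treated as input channels, a padding of 1 is assumed so that the 3 × 3 convolutions preserve the K × T size stated in the text, and the output layer's K × T × 100 kernels are realized as a full-size convolution that reduces each kernel to a single scalar D_e. Class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class RadarImagingCNN(nn.Module):
    """Sketch of the imaging model of step (3); K, T, N, M, Q as in the text."""

    def __init__(self, K, T, N, M, Q):
        super().__init__()
        assert M % N == 0, "M must be divisible by N for the (M/N) x 3 x N output"
        self.M, self.N = M, N
        # (3-1) input layer: 100 kernels of size 3 x 3 x N, stride 1, RELU.
        self.input_layer = nn.Sequential(nn.Conv2d(N, 100, 3, 1, 1), nn.ReLU())
        # (3-2) Q intermediate layers: 100 kernels of size 3 x 3 x 100, RELU.
        self.middle = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(100, 100, 3, 1, 1), nn.ReLU())
            for _ in range(Q)
        ])
        # (3-3) output layer: 3M kernels of size K x T x 100; each kernel
        # reduces the whole K x T x 100 tensor to one scalar D_e.
        self.output_layer = nn.Conv2d(100, 3 * M, kernel_size=(K, T))

    def forward(self, R):
        # R: batch x N x K x T (feature matrix with antenna groups as channels)
        D = self.input_layer(R)
        D = self.middle(D)
        D = self.output_layer(D)          # batch x 3M x 1 x 1
        # Reshape the 3M scalars into the (M/N) x 3 x N estimate P_hat.
        return D.view(-1, self.M // self.N, 3, self.N)
```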
(4) Repeat steps (1) and (2) s times to obtain s sample pairs of the feature matrix R and the point cloud matrix P, and iteratively train the convolutional neural network of step (3) with the gradient descent method to obtain the millimeter wave radar detection target imaging model P̂; the specific steps are as follows:
(4-1) Repeat steps (1)-(2) s times to obtain s sample pairs of the feature matrix R and the point cloud matrix P;
(4-2) Traverse the s sample pairs of the feature matrix R and the point cloud matrix P collected in step (4-1) with the gradient descent method, repeating step (3), and train on the training set of step (4-1) all parameters to be trained of the model P̂ of step (3), i.e. W_g0, W_gq and W_e, obtaining the millimeter wave radar detection target imaging model P̂ corresponding to the trained parameters W_g0, W_gq and W_e;
(4-3) Repeat step (4-2) η times, with 50 ≤ η ≤ 100 (in one embodiment of the invention, η is 100); the millimeter wave radar detection target imaging model corresponding to the η-th set of trained parameters W_g0, W_gq and W_e is the final millimeter wave radar array imaging model P̂.
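The training of step (4) can be sketched as follows; the patent prescribes only the gradient descent method and 50 ≤ η ≤ 100, so the mean-squared-error loss, the learning rate and the tensor layout used here are assumptions, and the function names are illustrative.

```python
import torch

def train_imaging_model(model, R_samples, P_samples, eta=100, lr=1e-3):
    """Step (4): traverse the s sample pairs with gradient descent, eta times.

    R_samples: tensor of shape (s, N, K, T); P_samples: (s, M/N, 3, N).
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()                   # assumed loss function
    for _ in range(eta):                           # step (4-3): repeat eta times
        for R, P in zip(R_samples, P_samples):     # step (4-2): traverse samples
            optimizer.zero_grad()
            P_hat = model(R.unsqueeze(0))          # add a batch dimension
            loss = loss_fn(P_hat, P.unsqueeze(0))
            loss.backward()                        # gradient descent update
            optimizer.step()
    return model
```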
(5) Use the final millimeter wave radar detection target imaging model obtained in step (4) to image the millimeter wave radar detection target.

Claims (1)

1. A millimeter wave radar detection target imaging method based on a neural network is characterized by comprising the following steps:
(1) collecting millimeter wave radar data and preprocessing it to obtain the feature matrix R of the millimeter wave radar, specifically comprising:
(1-1) setting the millimeter wave radar at the origin o of the rectangular coordinate system xyz, the radar carrying N groups of antennas, and denoting by α_i the angle between the detection direction of the i-th antenna group of the millimeter wave radar and the plane xoy of the coordinate system, i = 1, 2, …, N;
(1-2) collecting T frames of raw millimeter wave radar signals Z_i transmitted by the i-th antenna group of the millimeter wave radar;
(1-3) applying a Fourier transform to each frame of the raw signals Z_i of step (1-2) to obtain a matrix F_i, F_i being a K × T matrix, where K is the length of one raw signal frame, and applying a two-dimensional Fourier transform to the matrix F_i to obtain the feature matrix r_i;
(1-4) traversing the N antenna groups of the millimeter wave radar, repeating step (1-3) to obtain the feature matrices of all N groups, and combining all feature matrices into the feature matrix R of the millimeter wave radar, i.e. R = {r_1; r_2; r_3; …; r_N}, R being a K × T × N-dimensional matrix;
(2) setting a data acquisition point and acquiring point cloud data of the detection target to form a three-dimensional point cloud matrix P, specifically comprising:
(2-1) setting a data acquisition point H in the rectangular coordinate system xyz of step (1-1), with coordinates x = 0, y = λ, z = 0, and acquiring point cloud data E of the detection target from point H, E containing the position information of M points, M being set according to the detection accuracy with M > 100; the position information of the ℓ-th of the M points, p_ℓ, is represented by (β_ℓ, φ_ℓ, l_ℓ), ℓ = 1, 2, …, M, where β_ℓ denotes the angle between the line from point H to p_ℓ and the plane xoy of the rectangular coordinate system xyz, and β_ℓ takes any one of the values of the angles α_i, i = 1, 2, …, N; φ_ℓ denotes the angle between the line from H to p_ℓ and the plane yoz of the coordinate system; l_ℓ denotes the distance from point H to p_ℓ;
(2-2) transforming each of the M position tuples (β_ℓ, φ_ℓ, l_ℓ) with the following coordinate transformation to obtain M new position tuples (β_ℓ^new, φ_ℓ^new, l_ℓ^new):
φ_ℓ^new = [equation image in the original]
l_ℓ^new = [equation image in the original]
β_ℓ^new = β_ℓ
(2-3) since β_ℓ^new = β_ℓ, β_ℓ^new can take only N distinct values; according to the different values of β_ℓ^new, dividing the M new position tuples (β_ℓ^new, φ_ℓ^new, l_ℓ^new) of step (2-2) into N groups of (β^new, φ^new, l^new) data, and arranging the data into the three-dimensional point cloud matrix P, i.e. P is an (M/N) × 3 × N matrix;
(3) constructing a convolutional neural network, using it to automatically extract features from the three-dimensional feature matrix R of step (1), and finally outputting the (M/N) × 3 × N-dimensional point cloud data estimate P̂, forming the millimeter wave radar detection target imaging model, specifically comprising:
(3-1) constructing the input layer of the convolutional neural network, the input matrix of the network being the K × T × N three-dimensional feature matrix R of step (1), the input-layer convolutional network having 100 convolution kernels, the g-th convolution kernel of the input layer being W_g0, g = 1, 2, 3, …, 100, the size of each kernel being 3 × 3 × N, the convolution stride being 1, the activation function of the input-layer convolutional network being the RELU function, and the output matrix D^(0) of the input-layer convolutional network having size K × T × 100, the operation of each convolution kernel being:
D_g = RELU(R * W_g0)
where * is the convolution operator, the convolution kernel W_g0 is a parameter to be trained, and D_g is the output after convolution with each kernel, g = 1, 2, …, 100; all D_g are combined into D^(0);
(3-2) constructing the intermediate layers of the convolutional neural network, which comprise Q convolutional layers, the input matrix of the q-th layer being the output matrix D^(q-1) of the (q-1)-th layer, q = 2, …, Q, and the input matrix of the 1st intermediate layer being the output matrix D^(0) of the input layer of the convolutional neural network; each layer has 100 convolution kernels, the g-th kernel of the q-th layer being denoted W_gq, with kernel size 3 × 3 × 100 and convolution stride 1; the activation function of every layer is the RELU function, and the output matrix D^(q) of the q-th layer has size K × T × 100; the operation of each convolution kernel of the Q-layer convolutional network is:
D_g^(q) = RELU(D^(q-1) * W_gq)
where * is the convolution operator, the convolution kernel W_gq is a parameter to be trained, and D_g^(q) is the output after convolution with each kernel, g = 1, 2, …, 100; all D_g^(q) are combined in order into D^(q);
(3-3) constructing the output layer of the convolutional neural network, which is a single convolutional layer, its input matrix being the output matrix D^(Q) of the last intermediate layer of the convolutional neural network; the output-layer convolutional network has 3M convolution kernels W_e, e = 1, 2, …, 3M, each of size K × T × 100, the operation of each convolution kernel being:
D_e = D^(Q) * W_e
where * is the convolution operator, the convolution kernel W_e is a parameter to be trained, and D_e is the result of the convolution with each kernel, e = 1, 2, …, 3M; all D_e are combined into a three-dimensional matrix of size (M/N) × 3 × N, which is the millimeter wave radar detection target imaging model, denoted P̂;
(4) repeating steps (1) and (2) s times to obtain s sample pairs of the feature matrix R and the point cloud matrix P, and iteratively training the convolutional neural network of step (3) with the gradient descent method to obtain the millimeter wave radar detection target imaging model P̂, specifically comprising:
(4-1) repeating steps (1)-(2) s times to obtain s sample pairs of the feature matrix R and the point cloud matrix P;
(4-2) traversing the s sample pairs of the feature matrix R and the point cloud matrix P collected in step (4-1) with the gradient descent method, repeating step (3), and training on the training set of step (4-1) all parameters to be trained of the model P̂ of step (3), i.e. W_g0, W_gq and W_e, obtaining the millimeter wave radar detection target imaging model P̂ corresponding to the trained parameters W_g0, W_gq and W_e;
(4-3) repeating step (4-2) η times, with 50 ≤ η ≤ 100, the millimeter wave radar detection target imaging model corresponding to the η-th set of trained parameters W_g0, W_gq and W_e being the final millimeter wave radar array imaging model P̂;
(5) using the final millimeter wave radar detection target imaging model obtained in step (4) to image the millimeter wave radar detection target.
CN201910574031.0A 2019-06-28 2019-06-28 Millimeter wave radar detection target imaging method based on neural network Active CN110275163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910574031.0A CN110275163B (en) 2019-06-28 2019-06-28 Millimeter wave radar detection target imaging method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910574031.0A CN110275163B (en) 2019-06-28 2019-06-28 Millimeter wave radar detection target imaging method based on neural network

Publications (2)

Publication Number Publication Date
CN110275163A CN110275163A (en) 2019-09-24
CN110275163B true CN110275163B (en) 2020-11-27

Family

ID=67962562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910574031.0A Active CN110275163B (en) 2019-06-28 2019-06-28 Millimeter wave radar detection target imaging method based on neural network

Country Status (1)

Country Link
CN (1) CN110275163B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353581B (en) * 2020-02-12 2024-01-26 北京百度网讯科技有限公司 Lightweight model acquisition method and device, electronic equipment and storage medium
CN111833395B (en) * 2020-06-04 2022-11-29 西安电子科技大学 Direction-finding system single target positioning method and device based on neural network model
CN111752754B (en) * 2020-06-05 2022-10-18 清华大学 Method for recovering radar image data in memory

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371105A (en) * 2016-08-16 2017-02-01 长春理工大学 Vehicle targets recognizing method, apparatus and vehicle using single-line laser radar
CN107229918A (en) * 2017-05-26 2017-10-03 西安电子科技大学 A kind of SAR image object detection method based on full convolutional neural networks
WO2018125928A1 (en) * 2016-12-29 2018-07-05 DeepScale, Inc. Multi-channel sensor simulation for autonomous control systems
CN108596961A (en) * 2018-04-17 2018-09-28 浙江工业大学 Point cloud registration method based on Three dimensional convolution neural network
CN109871787A (en) * 2019-01-30 2019-06-11 浙江吉利汽车研究院有限公司 A kind of obstacle detection method and device
CN109902702A (en) * 2018-07-26 2019-06-18 华为技术有限公司 The method and apparatus of target detection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0916300D0 (en) * 2009-09-17 2009-10-28 Univ Manchester Metropolitan Remote detection of bladed objects
CN205679762U (en) * 2016-02-24 2016-11-09 闻鼓通信科技股份有限公司 Millimetre-wave radar device for detecting concealed dangerous goods

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371105A (en) * 2016-08-16 2017-02-01 长春理工大学 Vehicle targets recognizing method, apparatus and vehicle using single-line laser radar
WO2018125928A1 (en) * 2016-12-29 2018-07-05 DeepScale, Inc. Multi-channel sensor simulation for autonomous control systems
CN107229918A (en) * 2017-05-26 2017-10-03 西安电子科技大学 A kind of SAR image object detection method based on full convolutional neural networks
CN108596961A (en) * 2018-04-17 2018-09-28 浙江工业大学 Point cloud registration method based on Three dimensional convolution neural network
CN109902702A (en) * 2018-07-26 2019-06-18 华为技术有限公司 The method and apparatus of target detection
CN109871787A (en) * 2019-01-30 2019-06-11 浙江吉利汽车研究院有限公司 A kind of obstacle detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SAR image despeckling with a multilayer perceptron neural network; Xiao Tang et al.; International Journal of Digital Earth; 2018-03-08; entire document *
New method for wideband millimeter wave radar target recognition using time-delay neural networks; Xiao Huaitie et al.; Journal of Infrared and Millimeter Waves; 2001-12-31; Vol. 20, No. 6; entire document *

Also Published As

Publication number Publication date
CN110275163A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN110275163B (en) Millimeter wave radar detection target imaging method based on neural network
CN111077523A (en) Inverse synthetic aperture radar imaging method based on generation countermeasure network
CN110197112B (en) Beam domain Root-MUSIC method based on covariance correction
CN111639746B (en) GNSS-R sea surface wind speed inversion method and system based on CNN neural network
CN111983619B (en) Underwater acoustic target forward scattering acoustic disturbance positioning method based on transfer learning
CN104076360B (en) The sparse target imaging method of two-dimensional SAR based on compressed sensing
CN109597021B (en) Direction-of-arrival estimation method and device
CN108398659B (en) Direction-of-arrival estimation method combining matrix beam and root finding MUSIC
CN114355290B (en) Sound source three-dimensional imaging method and system based on stereo array
CN108919229B (en) Matrix reconstruction imaging method based on convolution inverse projection
CN108983187B (en) Online radar target identification method based on EWC
CN111610488B (en) Random array angle of arrival estimation method based on deep learning
CN110208736B (en) Non-circular signal uniform array direction-of-arrival angle estimation method based on fourth-order cumulant
CN112800599A (en) Non-grid DOA estimation method based on ADMM under array element mismatch condition
CN111352075B (en) Underwater multi-sound-source positioning method and system based on deep learning
CN111951204A (en) Sea surface wind speed inversion method for Tiangong No. two detection data based on deep learning
CN109557503B (en) MIMO (multiple input multiple output) co-prime array DOA (direction of arrival) estimation method based on correlation matrix reconstruction decorrelation
CN109061551B (en) Grid-free sparse spectrum estimation method based on polynomial root finding
CN116400724A (en) Intelligent inspection method for unmanned aerial vehicle of power transmission line
CN115201818A (en) Optimized vehicle-mounted point cloud imaging radar two-dimensional area array antenna signal processing method
CN113608192B (en) Ground penetrating radar far field positioning method and device and computer readable storage medium
CN113671485B (en) ADMM-based two-dimensional DOA estimation method for meter wave area array radar
CN113359196B (en) Multi-target vital sign detection method based on subspace method and DBF
CN111931596B (en) Group target grouping method based on algebraic graph theory
CN112684445B (en) MIMO-ISAR three-dimensional imaging method based on MD-ADMM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210329

Address after: Room 301, block C, Yingying building, 99 Tuanjie Road, yanchuangyuan, Jiangbei new district, Nanjing, Jiangsu, 211800

Patentee after: Qingsi Microelectronics (Nanjing) Co.,Ltd.

Address before: 100084 No. 1 Tsinghua Yuan, Beijing, Haidian District

Patentee before: TSINGHUA University