CN112255625A - One-dimensional linear array direction finding method based on deep learning under two-dimensional angle dependence error - Google Patents

One-dimensional linear array direction finding method based on deep learning under two-dimensional angle dependence error

Info

Publication number
CN112255625A
Authority
CN
China
Prior art keywords
array
azimuth
deep learning
dimensional
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010903250.1A
Other languages
Chinese (zh)
Other versions
CN112255625B (en)
Inventor
潘玉剑
姚敏
高晓欣
王�锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010903250.1A priority Critical patent/CN112255625B/en
Publication of CN112255625A publication Critical patent/CN112255625A/en
Application granted granted Critical
Publication of CN112255625B publication Critical patent/CN112255625B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50Systems of measurement based on relative movement of target
    • G01S13/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S13/62Sense-of-movement determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a deep-learning-based one-dimensional linear array direction finding method under two-dimensional angle-dependent errors. The method exploits the ability of deep learning to approximate complex nonlinear functions and uses machine learning to solve the calibration problem of two-dimensional angle-dependent array errors. To handle the dependence of the array error on both azimuth and pitch simultaneously, two-dimensional calibration data are collected, i.e., array steering vectors are measured over azimuth at several different pitch angles. The measured data are expanded by local array manifold (array flow pattern) interpolation to reduce the overfitting risk of the deep learning model, and deep learning is performed on the data with the lowest signal-to-noise ratio so that the model adapts to noisy signals. The method improves the direction-finding accuracy of a one-dimensional linear array with two-dimensional angle-dependent array errors, reduces the residual array error, and corrects the dependence of the array error on both azimuth and pitch, so that the direction-finding method maintains good performance at different pitch angles.

Description

One-dimensional linear array direction finding method based on deep learning under two-dimensional angle dependence error
Technical Field
The invention belongs to the field of array direction finding, and in particular relates to direction finding with receiver sensor arrays such as radar, communication, sonar and microphone arrays in the presence of array errors, and more particularly to a deep-learning-based one-dimensional linear array direction finding method suitable for azimuth- and pitch-dependent (two-dimensional angle-dependent) array errors.
Background
Sensor arrays are widely used in radar, communication, sonar and microphone systems. The premise of direction finding with a sensor array is that the response of the array, i.e., the array steering vector, is accurately known. In the ideal case without array errors, the responses of the individual sensors are identical and mutually independent, the sensor positions are accurately known, and the array steering vector has an exact analytical expression. In practical applications, however, this is not the case: three types of array errors exist, namely amplitude-phase errors, mutual coupling and element position errors, and they are further aggravated by the radome (array cover) material, with the result that the array error varies with angle. A one-dimensional linear array can estimate only the azimuth of a target, not its pitch angle, yet it cannot be guaranteed that all targets arrive from the same pitch angle. Therefore, the array error in one-dimensional linear array direction finding must account for its dependence on both azimuth and pitch.
For the problem of angle-dependent errors, a common approach is off-line calibration: the array steering vectors are first measured at different angles in an anechoic chamber, and array error calibration and direction finding are then carried out using these measured steering vectors. At present there are mainly three off-line calibration methods: the exhaustive search method, the amplitude-phase compensation method and the global array interpolation method (see M. Viberg, M. Lanne, A. Lundgren, "Calibration in Array Processing," Chapter 3 in Classical and Modern Direction-of-Arrival Estimation, Academic Press, 2009, pp. 93-124). Of these three, only the exhaustive search method and the global array interpolation method are able to correct angle-dependent array errors. However, when the array error depends on both azimuth and pitch, the global array interpolation method leaves a large residual array error because of the limited capability of the linear least-squares fitting it employs. The exhaustive search method requires a two-dimensional traversal of all measured array steering vectors and interpolation for off-grid targets, so its computational complexity and storage requirements are high.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a deep-learning-based one-dimensional linear array direction finding method under two-dimensional angle-dependent errors, which aims to solve the problems of large residual array error, high computational complexity and large storage requirements of existing calibration methods.
The deep-learning-based one-dimensional linear array direction finding method under two-dimensional angle-dependent errors specifically comprises the following steps:
Step 1: place an M-element one-dimensional linear array on a servo platform in an anechoic chamber, fix a radiation source in the far field of the array, and collect two-dimensional calibration data. Set the system parameters so that the signal-to-noise ratio of the array output baseband signal is as close as possible to the maximum value of the dynamic range. Set an azimuth grid point set Ω = {θ_1, θ_2, …, θ_L} within the azimuth field of view of the array and a pitch grid point set Φ = {φ_1, φ_2, …, φ_I} within the pitch field of view, where L is the number of azimuth grid points and I is the number of pitch grid points. Rotate the servo platform so that the array pitch angle equals φ_i, scan the azimuth over the grid point set Ω, and record at each grid point the M-dimensional array output baseband signal ȳ(θ, φ_i), where θ ∈ Ω and i = 1, 2, …, I.
Step 2: add zero-mean white Gaussian noise to each chamber-measured baseband signal ȳ(θ, φ_i) by the Monte Carlo method. Perform Q Monte Carlo experiments on ȳ(θ, φ_i) to obtain the signals ỹ_q(θ, φ_i) = ȳ(θ, φ_i) + ε_q, q = 1, 2, …, Q, where ε_q is zero-mean white Gaussian noise with variance σ²_{l,i}, and σ²_{l,i} denotes the variance of the noise to be added to the data of the l-th azimuth grid point at the i-th pitch grid point. The noise power is chosen so that the signal-to-noise ratio of ỹ_q(θ_l, φ_i) equals the minimum value of the target signal-to-noise-ratio dynamic range encountered in practical applications. The signal-to-noise ratio of ỹ_q(θ_l, φ_i) is computed as

SNR = 10 log10( ‖ȳ(θ_l, φ_i)‖₂² / (M σ²_{l,i}) ),

where SNR is in dB and ‖ȳ(θ_l, φ_i)‖₂ denotes the 2-norm of ȳ(θ_l, φ_i).
Step 3: compute the array steering vectors. For the azimuth grid points measured in the chamber at each pitch angle, the array steering vectors are computed directly; for azimuth grid points not measured in the chamber at each pitch angle, local array manifold interpolation is applied. Let θ̃ denote the angle of a refined azimuth grid point.

① If θ̃ ∈ Ω, i.e., the azimuth lies on a measured grid point, the steering vector is computed as

â(θ̃, φ_i) = ỹ_q(θ̃, φ_i) / [ỹ_q(θ̃, φ_i)]_1,

where [ỹ_q(θ̃, φ_i)]_1 is the first element of ỹ_q(θ̃, φ_i).

② If θ̃ ∉ Ω, i.e., the azimuth lies off the measured grid, assume θ̃ ∈ (θ_l, θ_{l+1}); the steering vector is then computed as

â(θ̃, φ_i) = T_i a(θ̃),

where a(θ̃) is the ideal analytical steering vector, which depends on the array configuration and the azimuth but not on the pitch angle, and T_i is the local array manifold interpolation matrix.

The least-squares estimate T̂_i of T_i is computed as follows. Let Ω' ⊂ Ω be an azimuth sub-grid formed by M' consecutive azimuth grid points that contains θ_l and θ_{l+1}. If θ_l and θ_{l+1} are not at the edge of the grid set, there are M' − 1 possible choices of Ω'. For each choice, an interpolation matrix can be computed by least squares as

T̂_i = Â(Ω', φ_i) A⁺(Ω'),

where (·)⁺ denotes the matrix pseudo-inverse, and Â(Ω', φ_i) and A(Ω') are array manifold matrices whose columns are, respectively, the steering vectors computed from the measured data and the ideal steering vectors on the azimuth sub-grid Ω'. Therefore, if P grid points are refined by interpolation between every two consecutive chamber-measured azimuth grid points, then P(M' − 1) steering vectors are interpolated between every two measured grid points, since M' − 1 steering vectors can be computed for each refined grid point. Taking into account the Monte Carlo noise processing of step 2 and the edge effect of the grid set, the number of array steering vectors obtained from the chamber-measured grid and the interpolation-refined grid is (L + (L − M' + 1)(M' − 1)P)·I·Q.
Step 4: for each array steering vector, extract the phase differences in complex form and construct the features of the deep learning training set. The phase differences in complex form are extracted as follows:

① compute the covariance matrix R = â(θ̃, φ_i) â^H(θ̃, φ_i), where (·)^H denotes the conjugate transpose of a vector;

② extract all elements of R strictly below the main diagonal (excluding the diagonal) to form an N-dimensional column vector β', where N = M(M − 1)/2;

③ compute the phase differences in complex form as β = β' ./ abs(β'), where ./ denotes element-wise division and abs(·) the element-wise magnitude.

The phase differences are then converted to real values and used as the feature γ of the deep learning training set: γ = [Re^T(β); Im^T(β)]^T, where Re(·), Im(·) and (·)^T denote the real part, the imaginary part and the transpose, respectively.
Step 5: deep learning network training. Take the real-valued phase difference vector γ as the input feature and the incoming-wave azimuth θ as the output, and train a deep learning neural network f(γ) in regression mode with the back-propagation algorithm. The network is fully connected, with 2N input-layer neurons, J ≥ 3 hidden layers and one output-layer neuron. The cost function for training is the mean squared error of the network output, and a regularization term based on the 2-norm of the network weights is added to prevent overfitting. This yields the trained deep learning network f̂(γ).
Step 6: perform direction finding with the trained deep learning network f̂(γ). Let z be a test baseband signal output by the array. Treating z as a steering vector, compute its complex-form phase difference vector β_z according to step 4, convert it to the real-valued feature γ_z, and feed γ_z into the trained deep learning network f̂(γ) to obtain the incoming-wave azimuth θ_z corresponding to the test signal z.
The invention has the following beneficial effects:
1. By exploiting the ability of deep learning to approximate complex nonlinear functions, machine learning is introduced to calibrate angle-dependent, complicated array errors. This solves the difficulty that conventional array calibration has in correcting such errors, leaves a smaller residual array error after calibration, and yields higher direction-finding accuracy.
2. Data from different pitch angles are used when training the deep learning model, so the dependence of the array error on both azimuth and pitch is corrected and the direction-finding method performs well when applied at different pitch angles.
3. The neural network has only one output, so direction finding can only be performed on a single target at a time. However, multiple targets can usually be separated into several single targets beforehand in the frequency domain, time domain, Doppler domain, etc., so the invention remains applicable in most cases.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a comparison of the azimuth direction-finding error of the present invention and other methods at zero-degree pitch angle under noise-free conditions;
FIG. 3 is a comparison of the root mean square azimuth direction-finding error of the present invention and other methods at different pitch angles under noise-free conditions;
FIG. 4 is a comparison of the root mean square azimuth direction-finding error of the present invention and other methods under different signal-to-noise ratios.
Detailed Description
The invention is further explained below with reference to the drawings;
As shown in FIG. 1, the present invention comprises the following steps:
Step 1: place an M-element one-dimensional linear array on a servo platform in an anechoic chamber, fix a radiation source in the far field of the array, and collect two-dimensional calibration data. Collecting the signals in an anechoic chamber reduces the influence of multipath interference and ensures that the obtained baseband signals are the response to a single target. Set the system parameters so that the signal-to-noise ratio of the array output baseband signal is as close as possible to the maximum value of the dynamic range; this ensures that the collected baseband signals can be regarded as essentially noise-free, so that the Monte Carlo noise addition in step 2 yields signals with an accurately controlled signal-to-noise ratio. Set an azimuth grid point set Ω = {θ_1, θ_2, …, θ_L} within the azimuth field of view of the array and a pitch grid point set Φ = {φ_1, φ_2, …, φ_I} within the pitch field of view, where L is the number of azimuth grid points and I is the number of pitch grid points. Rotate the servo platform so that the array pitch angle equals φ_i, scan the azimuth over the grid point set Ω, and record at each grid point the M-dimensional array output baseband signal ȳ(θ, φ_i), where θ ∈ Ω and i = 1, 2, …, I. Scanning and sampling the azimuth at different pitch angles provides the deep learning training data of step 5 with array steering vectors at different pitch angles and different azimuths, so that the azimuth-dependent and the pitch-dependent array errors can be corrected simultaneously.
① If the array is an ideal one-dimensional linear array, i.e., there is no array error, the signal collected at azimuth θ and pitch angle φ_i is

ȳ(θ, φ_i) = a(θ)·s,

where a(θ) is the ideal array steering vector (the steering vector of an ideal linear array is independent of the pitch angle) and s denotes the complex amplitude of the single far-field source. The analytical expression of a(θ) is a(θ) = exp(j2πμ sin(θ)/λ), where μ = [μ_1, μ_2, …, μ_M]^T is the array element position vector and λ is the signal wavelength.

② If the array has errors, a(θ) must be replaced by ā(θ, φ_i), which is unknown and no longer has an analytical expression. In this case the expression of ȳ(θ, φ_i) becomes

ȳ(θ, φ_i) = ā(θ, φ_i)·s.

It can be seen that the steering vector of a linear array with array errors depends on the pitch angle.
Step 2: add zero-mean white Gaussian noise to each chamber-measured baseband signal ȳ(θ, φ_i) by the Monte Carlo method. Perform Q Monte Carlo experiments on ȳ(θ, φ_i) to obtain the signals ỹ_q(θ, φ_i) = ȳ(θ, φ_i) + ε_q, q = 1, 2, …, Q, where ε_q is zero-mean white Gaussian noise with variance σ²_{l,i}, and σ²_{l,i} denotes the variance of the noise to be added to the data of the l-th azimuth grid point at the i-th pitch grid point. The noise power is chosen so that the signal-to-noise ratio of ỹ_q(θ_l, φ_i) equals the minimum value of the target signal-to-noise-ratio dynamic range encountered in practical applications. The signal-to-noise ratio of ỹ_q(θ_l, φ_i) is computed as

SNR = 10 log10( ‖ȳ(θ_l, φ_i)‖₂² / (M σ²_{l,i}) ),

where SNR is in dB and ‖ȳ(θ_l, φ_i)‖₂ denotes the 2-norm of ȳ(θ_l, φ_i).
Setting the signal-to-noise ratio after noise addition to the minimum value of the target dynamic range encountered in practical applications improves the generalization of the deep learning neural network to noisy signals.

For the error-affected baseband signal ȳ(θ, φ_i), adding different noise levels yields a low-SNR signal y_Lo and a high-SNR signal y_Hi:

y_Lo = ȳ(θ, φ_i) + ε_Lo,   y_Hi = ȳ(θ, φ_i) + ε_Hi,

where the subscripts (·)_Lo and (·)_Hi denote the low-SNR and high-SNR cases, respectively. Since ε_Lo and ε_Hi both follow Gaussian distributions, if the Monte Carlo method is used to generate sufficiently many realizations of ε_Lo, the generated low-SNR signals cover the distribution of the high-SNR signals, so a neural network trained at low SNR also generalizes well to high-SNR signals.
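The following Python sketch illustrates the Monte Carlo noise addition at a prescribed target SNR, using the SNR definition of step 2 (the function name, the use of complex circular Gaussian noise and the per-element power convention are assumptions made for illustration):

import numpy as np

def add_noise_snr(y, snr_db, Q, rng=None):
    # Generate Q noisy Monte Carlo copies of the noise-free chamber snapshot y
    # so that SNR = ||y||_2^2 / (M * sigma^2), in dB, equals snr_db.
    rng = np.random.default_rng() if rng is None else rng
    M = y.size
    sig_power = np.linalg.norm(y) ** 2 / M            # average per-element signal power
    noise_var = sig_power / (10 ** (snr_db / 10.0))   # sigma^2 per element
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal((Q, M))
                                      + 1j * rng.standard_normal((Q, M)))
    return y[None, :] + noise                         # shape (Q, M): Q noisy snapshots

Training data would use snr_db equal to the minimum of the operational SNR range (15 dB in the embodiment below), so that the trained network also generalizes to higher-SNR test signals.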
Step 3: compute the array steering vectors. For the azimuth grid points measured in the chamber at each pitch angle, the array steering vectors are computed directly; for azimuth grid points not measured in the chamber at each pitch angle, local array manifold interpolation is applied. Let θ̃ denote the angle of a refined azimuth grid point.

① If θ̃ ∈ Ω, i.e., the azimuth lies on a measured grid point, the steering vector is computed as

â(θ̃, φ_i) = ỹ_q(θ̃, φ_i) / [ỹ_q(θ̃, φ_i)]_1,

where [ỹ_q(θ̃, φ_i)]_1 is the first element of ỹ_q(θ̃, φ_i).

② If θ̃ ∉ Ω, i.e., the azimuth lies off the measured grid, assume θ̃ ∈ (θ_l, θ_{l+1}); the steering vector is then computed as

â(θ̃, φ_i) = T_i a(θ̃),

where a(θ̃) is the ideal analytical steering vector, which depends on the array configuration and the azimuth but not on the pitch angle, and T_i is the local array manifold interpolation matrix.

The least-squares estimate T̂_i of T_i is computed as follows. Let Ω' ⊂ Ω be an azimuth sub-grid formed by M' consecutive azimuth grid points that contains θ_l and θ_{l+1}. If θ_l and θ_{l+1} are not at the edge of the grid set, there are M' − 1 possible choices of Ω'. For each choice, an interpolation matrix can be computed by least squares as

T̂_i = Â(Ω', φ_i) A⁺(Ω'),

where (·)⁺ denotes the matrix pseudo-inverse, and Â(Ω', φ_i) and A(Ω') are array manifold matrices whose columns are, respectively, the steering vectors computed from the measured data and the ideal steering vectors on the azimuth sub-grid Ω'. Therefore, if P grid points are refined by interpolation between every two consecutive chamber-measured azimuth grid points, then P(M' − 1) steering vectors are interpolated between every two measured grid points, since M' − 1 steering vectors can be computed for each refined grid point. Taking into account the Monte Carlo noise processing of step 2 and the edge effect of the grid set, the number of array steering vectors obtained from the chamber-measured grid and the interpolation-refined grid is (L + (L − M' + 1)(M' − 1)P)·I·Q.
Step 4: for each array steering vector, extract the phase differences in complex form and construct the features of the deep learning training set. The phase differences in complex form are extracted as follows:

① compute the covariance matrix R = â(θ̃, φ_i) â^H(θ̃, φ_i), where (·)^H denotes the conjugate transpose of a vector;

② extract all elements of R strictly below the main diagonal (excluding the diagonal) to form an N-dimensional column vector β', where N = M(M − 1)/2;

③ a phase-jump phenomenon occurs when the phase difference φ of an element pair approaches ±π. This problem is avoided by converting the phase into complex form:

β = exp(jφ) = exp(j2πd sin(θ)/λ),

where d is the baseline length between the element pair. After the complex phase difference is obtained, its magnitude is normalized. The elements of R below (and excluding) the main diagonal correspond to the phase differences and amplitude differences between different array elements, and the magnitude is normalized by

β = β' ./ abs(β'),

where ./ denotes element-wise division and abs(·) the element-wise magnitude.

Since a deep learning network can only accept real-valued inputs, the phase differences are converted to real values and used as the feature γ of the deep learning training set: γ = [Re^T(β); Im^T(β)]^T, where Re(·), Im(·) and (·)^T denote the real part, the imaginary part and the transpose, respectively.
The reasons for choosing the phase difference as the training-set feature are:
1. According to the interferometer principle, the phase difference φ of an element pair is related to the incoming-wave angle θ by

φ = 2πd sin(θ)/λ.

As can be seen from this formula, the incoming-wave angle is related only to the phase difference and not to the amplitude information.
2. The phase differences computed from the array steering vector and from the array output baseband signal are identical, which makes the feature more flexible in practical applications.
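The feature construction of step 4 can be sketched in a few lines of Python (the function name is illustrative; the same routine serves both for steering vectors during training and for test snapshots in step 6):

import numpy as np

def phase_difference_features(a):
    # Build the real-valued feature gamma from a steering vector (or baseband
    # snapshot) a: covariance R = a a^H, take the N = M(M-1)/2 elements strictly
    # below the diagonal, normalize their magnitude, then stack real and imaginary parts.
    a = np.asarray(a)
    R = np.outer(a, a.conj())
    rows, cols = np.tril_indices(a.size, k=-1)
    beta_raw = R[rows, cols]
    beta = beta_raw / np.abs(beta_raw)               # unit-magnitude complex phase differences
    return np.concatenate([beta.real, beta.imag])    # gamma, length 2N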
Step 5: deep learning network training. Take the real-valued phase difference vector γ as the input feature; the input is a matrix of size 2N × (L + (L − M' + 1)(M' − 1)P)·I·Q, where the rows are the feature dimension and the columns are the data-sample dimension. Take the incoming-wave azimuth θ as the output and train a deep learning neural network f(γ) in regression mode with the back-propagation algorithm. The network is fully connected, with 2N input-layer neurons, J ≥ 3 hidden layers and one output-layer neuron. The cost function for training is the mean squared error of the network output, and a regularization term based on the 2-norm of the network weights is added to prevent overfitting. This yields the trained deep learning network f̂(γ).
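A hedged PyTorch sketch of the regression network and training loop is shown below; the patent does not prescribe a framework, so the library choice, the tensor shapes and the helper names are assumptions, while the layer sizes, loss, optimizer and regularization follow the embodiment described later (5 hidden layers of 32 ReLU neurons, MSE loss, Adam, 2-norm weight regularization):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def build_network(n_features, hidden_layers=5, width=32):
    # Fully connected regression network: 2N inputs -> hidden ReLU layers -> 1 output (azimuth).
    dims = [n_features] + [width] * hidden_layers + [1]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

def train(net, gamma, theta, epochs=1000, batch=14336, lr=1e-3, l2=1e-4):
    # gamma: (num_samples, 2N) float tensor of features; theta: (num_samples, 1) azimuth labels.
    opt = torch.optim.Adam(net.parameters(), lr=lr, weight_decay=l2)  # weight_decay = 2-norm regularization
    loss_fn = nn.MSELoss()
    loader = DataLoader(TensorDataset(gamma, theta), batch_size=batch, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(net(xb), yb)
            loss.backward()
            opt.step()
    return net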
Step 6: perform direction finding with the trained deep learning network f̂(γ). Let z be a test baseband signal output by the array. Treating z as a steering vector, compute its complex-form phase difference vector β_z according to step 4, convert it to the real-valued feature γ_z, and feed γ_z into the trained deep learning network f̂(γ) to obtain the incoming-wave azimuth θ_z corresponding to the test signal z.
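For completeness, a minimal sketch of the inference step is given below, reusing the phase_difference_features routine sketched after step 4 (names again illustrative):

import torch

def estimate_azimuth(net, z):
    # z: complex test baseband snapshot of length M, treated as a steering vector.
    gamma_z = phase_difference_features(z)                        # step-4 feature extraction
    x = torch.tensor(gamma_z, dtype=torch.float32).unsqueeze(0)   # shape (1, 2N)
    with torch.no_grad():
        theta_z = net(x)
    return float(theta_z)                                         # estimated incoming-wave azimuth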
Example one
Step 1: place an 8-element linear array with a radome in a microwave anechoic chamber, place a radiation source in the far field of the array, and set the measurement signal-to-noise ratio to 60 dB. At pitch angles −3°, −2°, …, 3°, scan a uniform azimuth grid within [−40°, 40°] at 0.5° intervals and collect the array output baseband signals. The measurements on the integer azimuth grid at all pitch angles, i.e., [−40°, −39°, …, 40°], are used to construct the training data, and the measurements on the fractional azimuth grid at all pitch angles, i.e., [−39.5°, −38.5°, …, 39.5°], are used to test the calibration performance.
Step 2: add zero-mean white Gaussian noise by the Monte Carlo method to generate noise samples. Perform 100 Monte Carlo experiments on the collected array output baseband signals and set the signal-to-noise ratio of the noise samples to 15 dB.
Step 3: in refining the azimuth grid by local array manifold interpolation and constructing the training data, take L = 81 and P = 9, i.e., 9 refined grid points are uniformly interpolated between adjacent integer grid points, with M = 8 and M' = 4. The final number of training samples is (L + (L − M' + 1)(M' − 1)P)·I·Q = 1,530,900.
Step 4: for each array steering vector, extract the phase differences in complex form; N = M(M − 1)/2 = 28 phase differences are extracted and converted to real values, finally giving 56 features γ.
Step 5: deep learning network training. The neural network has 5 hidden layers with 32 neurons per hidden layer; ReLU is used as the activation function and Adam as the optimizer; the maximum number of epochs is 1000, the batch size is 14336, the initial learning rate is 0.001 and the 2-norm regularization coefficient is 0.0001.
Step 6: perform direction finding with the trained deep learning network.
The simulation results of the method of the present invention were compared with a deep learning method trained only on zero-pitch data, the amplitude-phase compensation method and the global array interpolation method. Under noise-free conditions the comparison metrics are the direction-finding error and the root mean square error of the direction-finding results on the fractional azimuth grid data; under noisy conditions the comparison metric is the root mean square error of the direction-finding results on the fractional azimuth grid data over the several pitch angles. The amplitude-phase compensation method and the global array interpolation method use beamforming to measure the angle.
① The comparison results without noise are shown in FIG. 2 and FIG. 3; the test data in FIG. 2 are data at different azimuths at zero-degree pitch, and the test data in FIG. 3 are data at different azimuths at pitch angles [−3°, −2°, …, 3°]. If the deep learning model is trained only with array steering vectors at zero-degree pitch, it performs best at zero-degree pitch but worst at the other, larger pitch angles. The training data of the method of the invention use array steering vectors at several pitch angles, so the method performs well at all pitch angles and corrects the two-dimensional angle-dependent errors well. The global array interpolation method performs slightly better than the amplitude-phase compensation method, but the proposed method performs better still, with a direction-finding error below 0.1°.
② The comparison results with noise are shown in FIG. 4, where the angle-measurement results are averaged over 500 experiments. The horizontal axis shows the signal-to-noise ratio varying from 15 dB to 50 dB and the vertical axis shows the root mean square angle-measurement error. The root mean square error of the global array interpolation method is slightly better than that of the amplitude-phase compensation method. Moreover, at all signal-to-noise ratios the direction-finding accuracy of the deep learning model trained with data from several pitch angles is better than that of the deep learning model trained only at zero pitch, and better than that of the other two signal-processing-based methods.
The above description is only exemplary of the preferred embodiment and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (2)

1. A deep-learning-based one-dimensional linear array direction finding method under two-dimensional angle-dependent errors, characterized by specifically comprising the following steps:
step 1, placing an M-element one-dimensional linear array on a servo platform in an anechoic chamber, fixing a radiation source in the far field of the array, and collecting two-dimensional calibration data; setting system parameters so that the signal-to-noise ratio of the array output baseband signal is the maximum value of the dynamic range; setting an azimuth grid point set Ω = {θ_1, θ_2, …, θ_L} within the azimuth field of view of the array and a pitch grid point set Φ = {φ_1, φ_2, …, φ_I} within the pitch field of view, where L is the number of azimuth grid points and I is the number of pitch grid points; rotating the servo platform so that the array pitch angle equals φ_i, scanning the azimuth over the grid point set Ω, and recording at each grid point the M-dimensional array output baseband signal ȳ(θ, φ_i), where θ ∈ Ω and i = 1, 2, …, I;
step 2, adding zero-mean white Gaussian noise to each chamber-measured baseband signal ȳ(θ, φ_i) by the Monte Carlo method; performing Q Monte Carlo experiments on ȳ(θ, φ_i) to obtain the signals ỹ_q(θ, φ_i) = ȳ(θ, φ_i) + ε_q, q = 1, 2, …, Q, where ε_q is zero-mean white Gaussian noise with variance σ²_{l,i}, and σ²_{l,i} denotes the variance of the noise to be added to the data of the l-th azimuth grid point at the i-th pitch grid point; the noise power is chosen so that the signal-to-noise ratio of ỹ_q(θ_l, φ_i) equals the minimum value of the target signal-to-noise-ratio dynamic range encountered in practical applications; the signal-to-noise ratio of ỹ_q(θ_l, φ_i) is computed as SNR = 10 log10( ‖ȳ(θ_l, φ_i)‖₂² / (M σ²_{l,i}) ), where SNR is in dB and ‖ȳ(θ_l, φ_i)‖₂ denotes the 2-norm of ȳ(θ_l, φ_i);
step 3, computing the array steering vectors; for the azimuth grid points measured in the chamber at each pitch angle, computing the array steering vectors directly; for azimuth grid points not measured in the chamber at each pitch angle, performing local array manifold interpolation; letting θ̃ denote the angle of a refined azimuth grid point;

① if θ̃ ∈ Ω, i.e., the azimuth lies on a measured grid point, the steering vector is computed as â(θ̃, φ_i) = ỹ_q(θ̃, φ_i) / [ỹ_q(θ̃, φ_i)]_1, where [ỹ_q(θ̃, φ_i)]_1 is the first element of ỹ_q(θ̃, φ_i);

② if θ̃ ∉ Ω, i.e., the azimuth lies off the measured grid, assuming θ̃ ∈ (θ_l, θ_{l+1}), the steering vector is computed as â(θ̃, φ_i) = T_i a(θ̃), where a(θ̃) is the ideal analytical steering vector, which depends on the array configuration and the azimuth but not on the pitch angle, and T_i is the local array manifold interpolation matrix;

the least-squares estimate T̂_i of T_i is computed as follows: let Ω' ⊂ Ω be an azimuth sub-grid formed by M' consecutive azimuth grid points that contains θ_l and θ_{l+1}; if θ_l and θ_{l+1} are not at the edge of the grid set, there are M' − 1 possible choices of Ω'; for each choice, an interpolation matrix is computed by least squares as T̂_i = Â(Ω', φ_i) A⁺(Ω'), where (·)⁺ denotes the matrix pseudo-inverse, and Â(Ω', φ_i) and A(Ω') are array manifold matrices whose columns are, respectively, the steering vectors computed from the measured data and the ideal steering vectors on the azimuth sub-grid Ω'; therefore, P grid points are refined by interpolation between every two consecutive chamber-measured azimuth grid points, and P(M' − 1) steering vectors are interpolated between every two measured grid points, since M' − 1 steering vectors are computed for each refined grid point; taking into account the Monte Carlo noise processing of step 2 and the edge effect of the grid set, the number of array steering vectors computed from the chamber-measured grid and the interpolation-refined grid is (L + (L − M' + 1)(M' − 1)P)·I·Q;
step 4, extracting, for each array steering vector, the phase differences in complex form, and constructing the features of the deep learning training set; the phase differences in complex form are extracted as follows:

① computing the covariance matrix R = â(θ̃, φ_i) â^H(θ̃, φ_i), where (·)^H denotes the conjugate transpose of a vector;

② extracting all elements of R strictly below the main diagonal (excluding the diagonal) to form an N-dimensional column vector β', where N = M(M − 1)/2;

③ computing the phase differences in complex form as β = β' ./ abs(β'), where ./ denotes element-wise division and abs(·) the element-wise magnitude;

the phase differences are then converted to real values and used as the feature γ of the deep learning training set, γ = [Re^T(β); Im^T(β)]^T, where Re(·), Im(·) and (·)^T denote the real part, the imaginary part and the transpose, respectively;
step 5, deep learning network training; taking the real-valued phase difference vector γ as the input feature and the incoming-wave azimuth θ as the output, training a deep learning neural network f(γ) in regression mode with the back-propagation algorithm, the network being fully connected with 2N input-layer neurons, J ≥ 3 hidden layers and one output-layer neuron; the cost function for training is the mean squared error of the network output, and a regularization term based on the 2-norm of the network weights is added to prevent overfitting, yielding the trained deep learning network f̂(γ);
Step 6, utilizing the trained deep learning network
Figure FDA0002660493110000032
Carrying out direction finding; assuming that a baseband signal for test output by the array is z, taking z as a guide vector, and calculating according to the step 4 to obtain a phase difference vector beta under a complex modezIt is real to gammazDeep learning network with well-trained post-input
Figure FDA0002660493110000033
Obtaining the azimuth angle theta of the incoming wave corresponding to the test signal zz
2. The deep-learning-based one-dimensional linear array direction finding method under two-dimensional angle-dependent errors according to claim 1, characterized in that: in step 5, the number of hidden layers of the deep learning network is 5, and the 2-norm regularization coefficient is 0.0001.
CN202010903250.1A 2020-09-01 2020-09-01 One-dimensional linear array direction finding method based on deep learning under two-dimensional angle dependent error Active CN112255625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010903250.1A CN112255625B (en) 2020-09-01 2020-09-01 One-dimensional linear array direction finding method based on deep learning under two-dimensional angle dependent error

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010903250.1A CN112255625B (en) 2020-09-01 2020-09-01 One-dimensional linear array direction finding method based on deep learning under two-dimensional angle dependent error

Publications (2)

Publication Number Publication Date
CN112255625A true CN112255625A (en) 2021-01-22
CN112255625B CN112255625B (en) 2023-09-22

Family

ID=74223752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010903250.1A Active CN112255625B (en) 2020-09-01 2020-09-01 One-dimensional linear array direction finding method based on deep learning under two-dimensional angle dependent error

Country Status (1)

Country Link
CN (1) CN112255625B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255171A (en) * 2021-07-09 2021-08-13 中国人民解放军国防科技大学 Direction finding error correction method and device based on transfer learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110025563A1 (en) * 2007-06-08 2011-02-03 Thales Method for measuring incoming angles of coherent sources using space smoothing on any sensor network
CN104535971A (en) * 2014-12-08 2015-04-22 广西大学 Clutter suppression method and device based on space-time interpolation
CN105044688A (en) * 2015-08-24 2015-11-11 西安电子科技大学 Radar robust space-time adaption processing method based on iterative subspace tracking algorithm
CN109212526A (en) * 2018-10-17 2019-01-15 哈尔滨工业大学 Distributive array target angle measurement method for high-frequency ground wave radar
CN111487478A (en) * 2020-03-27 2020-08-04 杭州电子科技大学 Angle-dependent complex array error calibration method based on deep neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110025563A1 (en) * 2007-06-08 2011-02-03 Thales Method for measuring incoming angles of coherent sources using space smoothing on any sensor network
CN104535971A (en) * 2014-12-08 2015-04-22 广西大学 Clutter suppression method and device based on space-time interpolation
CN105044688A (en) * 2015-08-24 2015-11-11 西安电子科技大学 Radar robust space-time adaption processing method based on iterative subspace tracking algorithm
CN109212526A (en) * 2018-10-17 2019-01-15 哈尔滨工业大学 Distributive array target angle measurement method for high-frequency ground wave radar
CN111487478A (en) * 2020-03-27 2020-08-04 杭州电子科技大学 Angle-dependent complex array error calibration method based on deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Cunxu et al.: "Azimuth-dependent array error calibration algorithm based on spatial sparsity", Journal of Electronics & Information Technology, pages 2219-2224 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255171A (en) * 2021-07-09 2021-08-13 中国人民解放军国防科技大学 Direction finding error correction method and device based on transfer learning

Also Published As

Publication number Publication date
CN112255625B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN106788653B (en) Adaptive beam forming method based on covariance matrix reconstruction
CN111487478B (en) Angle-dependent complex array error calibration method based on deep neural network
CN106707250B (en) Radar array Adaptive beamformer method based on mutual coupling calibration
CN111046591B (en) Joint estimation method for sensor amplitude-phase error and target arrival angle
Sun et al. A postmatched-filtering image-domain subspace method for channel mismatch estimation of multiple azimuth channels SAR
CN106842135B (en) Adaptive beamformer method based on interference plus noise covariance matrix reconstruct
CN110196417B (en) Bistatic MIMO radar angle estimation method based on emission energy concentration
CN116430303A (en) Broadband planar array multi-beam forming method and amplitude comparison angle measurement method
CN112255625A (en) One-dimensional linear array direction finding method based on deep learning under two-dimensional angle dependence error
CN113466782B (en) Mutual coupling correction DOA estimation method based on Deep Learning (DL)
Guo et al. Off-grid space alternating sparse Bayesian learning
Ma et al. A novel ESPRIT-based algorithm for DOA estimation with distributed subarray antenna
CN111610488A (en) Method for estimating wave arrival angle of any array based on deep learning
Li et al. Direction of arrival estimation of array defects based on deep neural network
Marinho et al. Robust nonlinear array interpolation for direction of arrival estimation of highly correlated signals
Hamici Elements failure robust compensation in 2D phased arrays for DOA estimation with M-ary PSK signals
CN115808659A (en) Robust beam forming method and system based on low-complexity uncertain set integration
CN115980721A (en) Array self-correcting method for error-free covariance matrix separation
CN115248413A (en) Off-grid signal direction-of-arrival estimation method suitable for non-uniform linear array
CN111366891B (en) Pseudo covariance matrix-based uniform circular array single snapshot direction finding method
CN109061564B (en) Simplified near-field positioning method based on high-order cumulant
Rajani et al. Direction of arrival estimation by using artificial neural networks
CN109633635B (en) Meter wave radar height measurement method based on structured recursive least squares
Bourennane et al. Propagator methods for finding wideband source parameters
CN114184999A (en) Generating model processing method of cross-coupling small-aperture array

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant