CN113050077A - MIMO radar waveform optimization method based on iterative optimization network - Google Patents

MIMO radar waveform optimization method based on iterative optimization network

Info

Publication number
CN113050077A
CN113050077A (application CN202110293102.7A)
Authority
CN
China
Prior art keywords
network
matrix
iteration
input
convergence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110293102.7A
Other languages
Chinese (zh)
Other versions
CN113050077B (en)
Inventor
王鹏飞
魏志勇
胡进峰
张伟见
李玉枝
邹欣颖
董重
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze River Delta Research Institute of UESTC Huzhou
Original Assignee
Yangtze River Delta Research Institute of UESTC Huzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze River Delta Research Institute of UESTC Huzhou filed Critical Yangtze River Delta Research Institute of UESTC Huzhou
Priority to CN202110293102.7A priority Critical patent/CN113050077B/en
Publication of CN113050077A publication Critical patent/CN113050077A/en
Application granted granted Critical
Publication of CN113050077B publication Critical patent/CN113050077B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Variable-Direction Aerials And Aerial Arrays (AREA)

Abstract

The invention discloses a MIMO radar waveform optimization method based on an iterative optimization network. It relates to the technical field of radar and addresses the problems that existing correlation-weighted-delay methods optimize only the WISL of the waveform, and that existing deep-learning waveform designs cannot converge reliably and are insensitive to the initial value. A normalized random vector or an optimized normalized phase vector is input into a preset network model, and a signal matrix is output; the signal matrix is the MIMO radar waveform. A signal processing function is set as the loss function of the network model and is used to drive it, and the parameters of the network model are optimized by the Adam deep-learning method. The invention allows a more thorough optimization and also solves the problem of invalid iterations.

Description

MIMO radar waveform optimization method based on iterative optimization network
Technical Field
The invention relates to the technical field of radar, in particular to a MIMO radar waveform optimization method based on an iterative optimization network.
Background
Because MIMO radar outperforms phased-array radar, MIMO radar waveforms with good autocorrelation and cross-correlation properties have received wide attention and offer more advantages than conventional radar. On the one hand, when the transmitted waveforms have good correlation properties, the MIMO radar can effectively suppress noise and interference and thereby improve the signal-to-interference-plus-noise ratio. MIMO radar also has clear advantages in target localization, parameter estimation and spatial resolution, and the virtual aperture can be enlarged by filtering at the receiver. At present, MIMO radar waveform optimization is mainly based on phase waveform design.
Among MIMO radar waveform design methods that consider all correlation delays, the document "J. Song, P. Babu, and D. P. Palomar, 'Optimization methods for designing sequences with low autocorrelation sidelobes,' IEEE Trans. Signal Process., vol. 63, no. 15, pp. 3998-4009, Aug. 2015" performs a waveform design that optimizes the autocorrelation performance, but it optimizes only the autocorrelation and places no constraint on the cross-correlation. The document "Hu J, Wei Z, Li Y, et al., 'Designing Unimodular Waveform(s) for MIMO Radar by Deep Learning Method,' IEEE Transactions on Aerospace and Electronic Systems, 2020" introduces a deep learning method that optimizes the autocorrelation and cross-correlation performance of the waveform jointly and can generate a comprehensive waveform; however, the method is insensitive to the initial value and has no corresponding convergence condition.
For special cases, waveform design methods based on weighted correlation delays have also been widely studied. The document "H. He, P. Stoica, and J. Li, 'Designing unimodular sequence sets with good correlations - including an application to MIMO radar,' IEEE Trans. Signal Process., vol. 57, no. 11, pp. 4391-4405, Nov. 2009" first proposed optimization over weighted correlation delays, where the optimization criterion is an approximation of the Weighted Integrated Sidelobe Level (WISL), so the WISL cannot be optimized directly. The subsequent document "Cui G, Yu X, Piezzo M, et al., 'Constant modulus sequence set design with good correlation properties,' Signal Processing, 2017, 139: 75-85" optimizes the WISL directly, further improving performance and optimization time. However, none of these approaches constrains the Weighted Peak Sidelobe Level (WPSL), so the optimized waveforms can exhibit high correlation peaks.
In summary, existing MIMO radar waveform design methods based on weighted correlation delays optimize only the WISL of the waveform, while existing deep-learning waveform design methods cannot converge reliably and are insensitive to the initial value.
Disclosure of Invention
The technical problems to be solved by the invention are as follows: methods based on weighted correlation delays optimize only the WISL of the waveform, and existing deep-learning waveform designs cannot converge and are insensitive to the initial value.
The invention is realized by the following technical scheme:
the MIMO radar waveform optimization method based on the iterative optimization network comprises the steps of inputting a normalized random vector or an optimized normalized phase vector into a set network model, and outputting a signal matrix, wherein the signal matrix is an MIMO radar waveform;
the details are as follows: setting a signal processing function as a loss function of a network model, wherein the signal processing function is used for driving the network model, and parameters of the network model are optimized by an Adam deep learning method;
wherein: inputting a time delay weighting vector and a normalized phase sequence into a constructed signal processing function and a constructed loss function to obtain a loss value, wherein the normalized phase sequence is in a phase matrix form of conversion of a plurality of antenna transmitting waveforms, the constructed signal processing function comprises the steps of converting pulse signals of the antenna transmitting waveforms into a signal real part matrix and a signal imaginary part matrix based on correlation, obtaining a correlation amplitude value by convolution calculation of a convolution network, and constructing a weighting correlation matrix according to the time delay weighting vector to calculate a correlation value;
inputting a normalized random vector and the number of neurons in the construction of a depth residual error network and outputting a normalized phase vector;
and adding convergence conditions into the signal processing function and the depth residual error network to obtain an inner iteration network and construct an outer iteration network, sequentially carrying out iteration convergence judgment on the inner iteration network and the outer iteration network, obtaining a phase matrix converted from a vector after convergence, and calculating to obtain a signal matrix.
The principle is realized by the following calculation steps:
Step 1. Construct the signal processing function
The input is a normalized phase sequence y_n and a delay weight vector γ = [γ_{-N+1}, ..., γ_{N-1}]; the output is the loss value c_n = L(y_n, γ), so a loss function L(·) needs to be constructed.
y_n is first converted to the interval [0, 2π], i.e. φ_n = 2π·y_n, and then into the phase-matrix form Φ = mat(φ_n) ∈ R^{N×M}, where the m-th column φ_m represents the phase of the waveform transmitted by the m-th antenna. Because the pulse signals have the constant-modulus form s_m(n) = e^{jφ_m(n)}, they are converted into a signal real-part matrix P = cos(Φ) and a signal imaginary-part matrix H = sin(Φ).
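For illustration only, the following NumPy sketch shows this conversion from a normalized phase vector to the phase matrix and the real/imaginary signal matrices; the function name phase_to_matrices, the reshape order and the example sizes are assumptions, not taken from the patent.

```python
import numpy as np

def phase_to_matrices(y, M, N):
    """Convert a normalized phase vector y in [0, 1]^(M*N) into the
    N x M phase matrix and the real/imaginary signal matrices."""
    phi = 2.0 * np.pi * y.reshape(N, M)   # scale to [0, 2*pi], one column per antenna
    P = np.cos(phi)                       # signal real-part matrix
    H = np.sin(phi)                       # signal imaginary-part matrix
    S = P + 1j * H                        # complex constant-modulus signal matrix
    return phi, P, H, S

# Example: 4 antennas, 16 sub-pulses, random normalized phases
rng = np.random.default_rng(0)
phi, P, H, S = phase_to_matrices(rng.random(4 * 16), M=4, N=16)
print(S.shape, np.allclose(np.abs(S), 1.0))   # (16, 4) True
```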
For the correlation calculation, signal expansion matrices are constructed: in the aperiodic case a real-part expansion matrix P_AP and an imaginary-part expansion matrix Q_AP are constructed, and in the periodic case the expansion matrices P_P and Q_P are constructed. Here Q_{-N} denotes the matrix obtained by deleting the N-th row of the imaginary-part matrix H; similarly, Q_{-1} denotes the matrix obtained by deleting its 1st row. The same construction process is used for P_AP. In the following, Q̃ and P̃ denote the imaginary-part and real-part expansion matrices chosen according to the periodic or aperiodic case: in the periodic case Q̃ = Q_P and P̃ = P_P, and in the aperiodic case Q̃ = Q_AP and P̃ = P_AP.
The convolution computation is performed by means of a convolutional network, because the correlation can be decomposed into an imaginary part and a real part that are computed separately. Here s_m^R(n) = cos(φ_m(n)) is the real part of s_m(n) and s_m^I(n) = sin(φ_m(n)) is its imaginary part. The convolutional network computes the partial correlations by convolution, e.g. r^{RR} = P̃ ⊛ P, where ⊛ denotes the convolution computation; r^{II}, r^{RI} and r^{IR} are obtained in the same way. The magnitude of the correlation is therefore obtained as
|r| = sqrt((r^{RR} + r^{II})² + (r^{IR} - r^{RI})²).
The correlation can then be weighted according to the delay weight vector γ, giving the weighted correlation
r̄ = γ ⊙ |r|,
where ⊙ indicates element-wise multiplication. A weighted autocorrelation matrix C is then constructed from the weighted autocorrelations of all M waveforms, and a weighted cross-correlation matrix X is constructed from the weighted cross-correlations of all waveform pairs. The loss function is then constructed as a weighted combination of the integrated and peak sidelobe levels obtained from C and X, and the loss value is calculated, where Σ(·) and max(·) denote summing all elements of a matrix and taking the maximum over all elements, respectively, and n = 1, 2, ..., N, giving the loss value c_n.
Step 2. Construct the deep residual network
The input of the deep residual network is a normalized random vector and the number of neurons d; the output is a normalized phase vector. The deep residual network consists of several residual blocks plus input and output fully connected layers, where each residual block consists of two fully connected layers combined with an identity mapping; the network realizes a forward mapping parameterized by W and B, which are the parameters of the deep residual network. The mathematical form of a fully connected layer is
p_i = x_{i-1}·W_i + b_i,
where x_{i-1} denotes the input of the i-th fully connected layer and p_i its output. Passing p_i through the sigmoid activation function sigmoid(p) = 1/(1 + e^{-p}) gives x_i = sigmoid(p_i).
An identity mapping is constructed across every two fully connected layers. Let x_{i-1} be the input of a residual block and d the number of neurons in each layer of the block; applying the fully-connected computation twice gives p_i and p_{i+1}, and finally x_{i+1} = sigmoid(p_{i+1} + x_{i-1}).
The deep residual network consists of 10 residual blocks plus input and output layers; the input and output layers mainly perform dimension conversion, and the number of neurons in the input and output layers and in the residual blocks is d = 128.
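A minimal PyTorch sketch of such a deep residual network is given below; the class names, the use of a sigmoid on the input/output layers and the single-sample usage are assumptions, while the two fully connected layers per block, the identity mapping, the sigmoid activation, the 10 blocks and d = 128 follow the description above.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two fully connected layers with sigmoid activations and an identity (skip) connection."""
    def __init__(self, d=128):
        super().__init__()
        self.fc1 = nn.Linear(d, d)
        self.fc2 = nn.Linear(d, d)

    def forward(self, x):
        h = torch.sigmoid(self.fc1(x))          # x_i = sigmoid(p_i)
        return torch.sigmoid(self.fc2(h) + x)   # x_{i+1} = sigmoid(p_{i+1} + x_{i-1})

class PhaseResNet(nn.Module):
    """Input/output fully connected layers for dimension conversion plus 10 residual blocks;
    the output is kept in [0, 1] as a normalized (incremental) phase vector."""
    def __init__(self, in_dim, out_dim, d=128, num_blocks=10):
        super().__init__()
        self.fc_in = nn.Linear(in_dim, d)
        self.blocks = nn.Sequential(*[ResidualBlock(d) for _ in range(num_blocks)])
        self.fc_out = nn.Linear(d, out_dim)

    def forward(self, z):
        x = torch.sigmoid(self.fc_in(z))
        x = self.blocks(x)
        return torch.sigmoid(self.fc_out(x))

# Example: M = 10 antennas, N = 64 sub-pulses
net = PhaseResNet(in_dim=10 * 64, out_dim=10 * 64)
delta_y = net(torch.rand(1, 10 * 64))
print(delta_y.shape)   # torch.Size([1, 640])
```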
Step 3. Construct the inner-iteration network
The inner-iteration network consists of the deep residual network, the signal processing function and a convergence condition. The input is the input normalized phase sequence y_0, the fusion factors ξ_1 and ξ_2, the delay weight vector γ, the maximum number of iterations N_max, the minimum number of iterations N_min, the current iteration number n = 0, the inner-iteration convergence factor θ_2, the convergence interval E, the number of residual blocks R_N, the number of neurons K of the network and the learning rate κ of the Adam algorithm.
The initial loss value is obtained as
c_0 = L(y_0, γ).
An incremental normalized phase sequence Δy_n is obtained through the residual network, and the normalized phase sequence y_n is constructed by adaptively adding the input phase and the incremental phase with the two adaptive adjustment factors:
y_n = ξ_1·y_0 + ξ_2·Δy_n.
The loss value is obtained through the signal processing function:
c_n = L(y_n, γ),
where n = 1, 2, ..., N, giving the loss value c_n of the n-th inner loop. The Adam algorithm is used to minimize c_n by optimizing W and B in the deep residual network module. The optimal phase sequence is saved according to the loss value:
y' = argmin_{y_n} c_n, n = 0, 1, ..., N.
The adaptation factors are then updated according to the loss values, where c_0 and c' are the loss values corresponding to y_0 and y' respectively, y' being the optimal phase of the previous round of the inner loop, i.e. c_0 = L(y_0, γ) and c' = L(y', γ), with L the loss function.
Finally, convergence is judged. First the maximum number of iterations N_max, the minimum number of iterations N_min, the inner-iteration convergence factor θ_2 and the convergence judgment interval E are determined. Define j_pre = 0 and j_now = 0 as the accumulated convergence amounts of two consecutive convergence intervals E. The inner loop is exited when either of the following two conditions is met.
Condition one:
judge whether the number of iterations has reached its limit; if n = N_max, exit the inner iteration.
Condition two:
1) j_now = j_now + c_n;
2) when n is an integer multiple of E, compare j_pre and j_now;
3) if the relative change between j_pre and j_now is smaller than θ_2 and n > N_min, jump out of the inner loop; otherwise set j_pre = j_now, j_now = 0, n = n + 1, and iterate again.
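The stopping logic of the inner iteration can be sketched as below. The relative-change test on j_pre and j_now is one plausible reading of the convergence criterion (the patent does not spell out the exact inequality here), and all names are illustrative.

```python
def inner_should_stop(n, loss, state, E=100, theta2=1e-3, n_min=1000, n_max=5000):
    """Sketch of the two inner-iteration stopping conditions.  `state` holds the
    accumulated losses j_pre and j_now of two consecutive convergence intervals."""
    if n >= n_max:                                   # condition one: iteration limit reached
        return True
    state["j_now"] += loss                           # 1) accumulate the loss of this iteration
    if n > 0 and n % E == 0:                         # 2) end of a convergence interval
        j_pre, j_now = state["j_pre"], state["j_now"]
        converged = (j_pre > 0
                     and abs(j_pre - j_now) / j_pre < theta2
                     and n > n_min)                  # 3) small relative change and n > N_min
        state["j_pre"], state["j_now"] = j_now, 0.0  # otherwise roll the interval forward
        if converged:
            return True
    return False

# Example usage inside an inner loop
state = {"j_pre": 0.0, "j_now": 0.0}
for n in range(1, 6000):
    loss = 1.0 / (1.0 + n)          # stand-in for c_n = L(y_n, gamma)
    if inner_should_stop(n, loss, state):
        print("inner loop stopped at iteration", n)
        break
```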
Step 4. Construct the outer-iteration network
The input is an initial normalized phase sequence y_0, the fusion factors ξ_1 and ξ_2, and the outer-iteration convergence factor θ_1. The output is the required waveform matrix S.
First, an input normalized phase sequence y_0 is generated; this phase sequence can be generated randomly or taken from an optimized input, where m = 1, ..., M and n = 1, ..., N, and y_m(n) ∈ [0, 1] represents the n-th sub-pulse phase transmitted by the m-th antenna. The normalized phase sequence is then fed into the loss function to obtain the loss value
c_0 = L(y_0, γ),
where L(·) is the signal processing function. In addition, the fusion factors ξ_1 and ξ_2 are constructed and used as the initial fit in the inner loop. The fusion factors are initialized according to the input: if a random sequence is the input, ξ_1 = 0 and ξ_2 = 1; if an optimized waveform phase sequence is the input, ξ_1 = 0.9 and ξ_2 = 0.1.
This is then passed to the inner iteration:
y', c' = F(y_0, ξ_1, ξ_2),
where y' is the optimal phase sequence obtained by the inner loop, c' is its corresponding loss value, and F(·) denotes the inner iteration. Whether the outer iteration has converged is then judged; one of two convergence conditions, both of which compare the loss values c_0 and c' of successive outer rounds against the outer-loop convergence factor θ_1, needs to be met.
If neither of the two outer-loop convergence conditions is met, the parameters are updated and a new round of the loop is started: y_0 = y' and c_0 = c' are updated, and ξ_1 and ξ_2 are updated correspondingly: if a random sequence is the input, ξ_1 = 0.5 and ξ_2 = 0.5; if an optimized waveform phase sequence is the input, ξ_1 = 0.9 and ξ_2 = 0.1.
If either condition is satisfied, the algorithm has converged and y' is output as the phase sequence. A normalized phase matrix Y = mat(y') is then generated, where mat(·) converts the vector into a phase matrix, and the signal matrix S = e^{j2πY} is obtained. This signal matrix S is the desired waveform.
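A compact sketch of the outer iteration is shown below, assuming a relative-improvement test against θ_1 as the convergence check and treating the inner iteration F(·) as a placeholder callable; these assumptions and all names are illustrative only.

```python
import numpy as np

def outer_iteration(y0, loss_fn, inner_iteration, optimized_input=False,
                    theta1=1e-3, max_rounds=20):
    """Sketch of the outer iteration: initialize the fusion factors from the type of
    input, call the inner iteration, and stop when the loss stops improving."""
    xi1, xi2 = (0.9, 0.1) if optimized_input else (0.0, 1.0)   # initial fusion factors
    c0 = loss_fn(y0)
    for _ in range(max_rounds):
        y_best, c_best = inner_iteration(y0, xi1, xi2)          # y', c' = F(y0, xi1, xi2)
        if abs(c0 - c_best) / max(c0, 1e-12) < theta1:          # assumed convergence test
            break
        y0, c0 = y_best, c_best                                 # replace the initial value
        xi1, xi2 = (0.9, 0.1) if optimized_input else (0.5, 0.5)
    return y_best, c_best

def phases_to_waveform(y, M, N):
    """mat(.) followed by S = exp(j * 2*pi * Y)."""
    return np.exp(2j * np.pi * np.asarray(y).reshape(N, M))

# Toy usage with a dummy inner iteration that just rescales the input
dummy_loss = lambda y: float(np.sum(np.asarray(y) ** 2))
dummy_inner = lambda y, a, b: (0.9 * np.asarray(y), dummy_loss(0.9 * np.asarray(y)))
y_opt, c_opt = outer_iteration(np.random.default_rng(2).random(8), dummy_loss, dummy_inner)
print(phases_to_waveform(y_opt, M=2, N=4).shape)   # (4, 2)
```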
The input is a normalized random vector or an optimized normalized phase vector, and the output is the MIMO radar waveform designed by the method. The signal processing function is designed as the loss function of the network model in order to drive it, and the parameters of the network model are optimized by the Adam deep-learning method.
The invention has the following advantages and beneficial effects:
1) With the weighted comprehensive optimization function, the problem of an excessively high WPSL is solved;
the invention proposes a weighted comprehensive optimization function and optimizes it with an iterative optimization network. The waveforms generated by this optimization have excellent performance, and the WPSL is effectively suppressed. In addition, the invention is currently the only optimization method that can optimize the WISL and the WPSL simultaneously.
2) The invention solves the problem that existing deep learning methods are insensitive to the input;
the invention can take the normalized phase sequence of an already optimized waveform as input, thereby accelerating convergence. The experimental results show that, with the normalized phase sequence of an optimized waveform as input, the optimization finishes in half the time needed for a randomly generated normalized phase sequence, while still achieving excellent performance.
3) The invention solves the problem that existing deep learning methods cannot converge effectively;
existing deep learning methods can only control the optimization by setting the number of iterations. The invention provides convergence criteria for both the inner and the outer iteration, so the method converges quickly while the optimization effect is guaranteed. On the one hand this allows a more thorough optimization; on the other hand it solves the problem of invalid iterations.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a diagram of an outer iteration structure in an embodiment of the present invention;
FIG. 2 is a diagram of a depth residual error network model according to an embodiment of the present invention;
FIG. 3 is a diagram of a residual block in an embodiment of the present invention;
FIG. 4 is a graph of the WPSL performance curves of the present invention and the two existing methods at different sequence lengths;
FIG. 5 is a graph of the WISL performance curves of the present invention and the two existing methods at different sequence lengths;
FIG. 6 is a graph of the optimization curves of the present invention for different initial inputs;
FIG. 7 is a graph of the optimization time of the present invention and the two existing methods at different sequence lengths.
Detailed Description
Hereinafter, the terms "comprising" or "may include" used in various embodiments of the present invention indicate the presence of the disclosed function, operation or element and do not exclude the addition of one or more further functions, operations or elements. Furthermore, as used in various embodiments of the present invention, the terms "comprising", "having" and their derivatives are intended only to denote the particular feature, number, step, operation, element, component or combination thereof, and should not be construed as excluding the existence or addition of one or more other features, numbers, steps, operations, elements, components or combinations thereof.
In various embodiments of the invention, the expression "or" at least one of a or/and B "includes any or all combinations of the words listed simultaneously. For example, the expression "a or B" or "at least one of a or/and B" may include a, may include B, or may include both a and B.
Expressions (such as "first", "second", and the like) used in various embodiments of the present invention may modify various constituent elements in various embodiments, but may not limit the respective constituent elements. For example, the above description does not limit the order and/or importance of the elements described. The foregoing description is for the purpose of distinguishing one element from another. For example, the first user device and the second user device indicate different user devices, although both are user devices. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of various embodiments of the present invention.
It should be noted that: if it is described that one constituent element is "connected" to another constituent element, the first constituent element may be directly connected to the second constituent element, and the third constituent element may be "connected" between the first constituent element and the second constituent element. In contrast, when one constituent element is "directly connected" to another constituent element, it is understood that there is no third constituent element between the first constituent element and the second constituent element.
The terminology used in the various embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments of the invention. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not used as limitations of the present invention.
Example 1:
The MIMO radar consists of M antennas, each of which transmits N sub-pulse waveforms. To ensure efficient energy utilization, the signal is defined to have constant modulus, so each sub-pulse can be expressed as
s_m(n) = e^{j·y_m(n)},   (1)
where m = 1, ..., M and n = 1, ..., N, and it must be ensured that y_m(n) ∈ [0, 2π]. Define s_m = [s_m(1), ..., s_m(N)]^T as the waveform sequence transmitted by the m-th antenna; equation (1) represents the n-th sub-pulse signal transmitted by the m-th antenna. All sub-pulse waveforms transmitted by all antennas can therefore be represented as a matrix S = [s_1, ..., s_M]. Since there are M signals, each transmitting N sub-pulses, the matrix has N rows and M columns; each column represents the waveform transmitted by one antenna and each element represents one sub-pulse. In the same way, the phase matrix corresponding to the signal matrix is Y = [y_1, ..., y_M] ∈ R^{N×M}.
The orthogonality of the waveforms is usually characterized by the autocorrelation and the cross-correlation. The aperiodic cross-correlation sidelobe of waveform s_m and waveform s_l at delay k can be expressed as
r_{m,l}(k) = Σ_{n=1}^{N-k} s_m(n+k)·s_l^*(n),   (2)
where m, l = 1, ..., M and -N+1 ≤ k ≤ N-1, (·)^* denotes the complex conjugate, and r_{m,l}(-k) = r_{l,m}^*(k) for negative delays. When m = l, equation (2) becomes the aperiodic autocorrelation. The periodic cross-correlation of waveform s_m and waveform s_l at delay k can likewise be defined as
c_{m,l}(k) = Σ_{n=1}^{N} s_m((n+k) mod N)·s_l^*(n),   (3)
where m, l = 1, ..., M and -N+1 ≤ k ≤ N-1; again, when m = l it becomes the periodic autocorrelation. When k = 0 the autocorrelation equals the energy of the signal, and when k ≠ 0 the autocorrelation values are the sidelobes of the signal.
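The aperiodic and periodic correlations of equations (2) and (3) can be evaluated directly as in the following NumPy sketch (function names and the toy check are illustrative):

```python
import numpy as np

def aperiodic_xcorr(sm, sl, k):
    """Aperiodic cross-correlation r_{m,l}(k) of equation (2) for one delay k."""
    N = len(sm)
    if k >= 0:
        return np.sum(sm[k:N] * np.conj(sl[:N - k]))
    return np.conj(aperiodic_xcorr(sl, sm, -k))      # r_{m,l}(-k) = r_{l,m}*(k)

def periodic_xcorr(sm, sl, k):
    """Periodic cross-correlation c_{m,l}(k) of equation (3) for one delay k."""
    return np.sum(np.roll(sm, -k) * np.conj(sl))     # cyclic shift by k

# Example: the k = 0 autocorrelation of a constant-modulus waveform equals its energy N
rng = np.random.default_rng(3)
s = np.exp(2j * np.pi * rng.random(16))
print(np.isclose(aperiodic_xcorr(s, s, 0), 16.0), np.isclose(periodic_xcorr(s, s, 0), 16.0))
```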
For the correlations of interest to the present invention, a delay weight γ = [γ_{-N+1}, ..., γ_k, ..., γ_{N-1}] is designed and the correlations are weighted accordingly:
r̄_{m,l}(k) = γ_k·r_{m,l}(k),   (4)
where k ∈ [-N+1, N-1], m = 1, 2, ..., M and l = 1, 2, ..., M. An evaluation criterion, the Weighted Peak Sidelobe Level (WPSL), can be defined as
WPSL = max_{m,l,k} |r̄_{m,l}(k)|  (with k ≠ 0 when m = l).   (5)
Because this criterion is rather broad, the present invention further refines the optimization problem into two criteria, the Weighted Autocorrelation Peak Sidelobe Level (WAPSL) and the Weighted Cross-correlation Peak Sidelobe Level (WCPSL):
WAPSL = max_{m, k≠0} |r̄_{m,m}(k)|,   (6)
WCPSL = max_{m≠l, k} |r̄_{m,l}(k)|.   (7)
A common optimization criterion is the Weighted Integrated Sidelobe Level (WISL):
WISL = Σ_{m, k≠0} |r̄_{m,m}(k)|² + Σ_{m≠l, k} |r̄_{m,l}(k)|².   (8)
Likewise, the WISL is split into the Weighted Autocorrelation Integrated Sidelobe Level (WAISL) and the Weighted Cross-correlation Integrated Sidelobe Level (WCISL):
WAISL = Σ_m Σ_{k≠0} |r̄_{m,m}(k)|²,   (9)
WCISL = Σ_{m≠l} Σ_k |r̄_{m,l}(k)|².   (10)
When the aperiodic correlations of equation (2) are used, the above quantities describe the aperiodic problem; taking the periodic auto- and cross-correlations of equation (3) in the same way describes the periodic problem.
The invention adopts a Weighted Comprehensive Optimization Function (WCOF) to perform a comprehensive optimization of the waveform. Because the orders of magnitude of WPSL, WAISL and WCISL are not comparable, the algorithm would otherwise concentrate on optimizing WCISL and ignore the other attributes; the terms are therefore normalized to the magnitude of WCISL. The WCOF is constructed as a weighted combination of the above criteria,   (11)
where the weight vector is [l_1, l_2, l_3, l_4, l_5]; by taking different values of the weight vector, the WCOF can be converted into different optimization problems.
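Because the exact WCOF expression is not reproduced above, the following sketch only illustrates the idea of combining the criteria with a weight vector [l_1, ..., l_5]; taking WPSL = max(WAPSL, WCPSL) and omitting the magnitude normalization to WCISL are assumptions.

```python
def wcof(metrics, l=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Illustrative weighted comprehensive optimization function: a weighted
    combination of WPSL, WAPSL, WCPSL, WAISL and WCISL with weights [l1..l5]."""
    wpsl = max(metrics["WAPSL"], metrics["WCPSL"])
    return (l[0] * wpsl + l[1] * metrics["WAPSL"] + l[2] * metrics["WCPSL"]
            + l[3] * metrics["WAISL"] + l[4] * metrics["WCISL"])

# Different weight vectors turn the WCOF into different optimization problems, e.g.
# l = (0, 0, 0, 1, 1) keeps only the integrated sidelobe terms (a WISL-type problem).
print(wcof({"WAPSL": 0.8, "WCPSL": 1.1, "WAISL": 40.0, "WCISL": 95.0}))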
In terms of working principle, this embodiment provides a MIMO radar waveform optimization and design method based on an iterative optimization network, in which the network model consists of a double iteration. More specifically:
the present invention proposes an iterative optimization network to perform the problem optimization. The iterative optimization network uses deep learning to solve this high-order non-convex optimization problem; deep learning is inherently a non-convex model and can therefore fit a non-convex optimization problem well. In addition, the iterative optimization network solves the problem that existing deep learning methods are insensitive to the initial value: it links the input with the output and performs the optimization starting from the input initial value. A convergence method tailored to deep learning is also provided; designed for the volatility of neural networks, it allows the optimized waveforms to converge effectively.
The iterative optimization network consists of two sub-algorithms, the outer iteration and the inner iteration. The function of the outer iteration is to replace the initial value and to judge the termination condition of the algorithm. The function of the inner iteration is to fix the initial value and to optimize the waveform using a neural network.
Outer iteration algorithm:
A traditional neural network is not sensitive to the initial value and cannot link output and input together. The overall algorithm is therefore divided into two parts, where the outer-iteration part replaces the initial value so that the initial value keeps approaching the optimal value. First, an initial normalized phase sequence y_0 is generated, where m = 1, ..., M and n = 1, ..., N; it may be generated randomly or taken from an optimized waveform, and y_m(n) ∈ [0, 1] represents the n-th sub-pulse phase transmitted by the m-th antenna. The normalized phase sequence is then fed into the signal processing function to obtain the loss value
c_0 = L(y_0, γ),   (12)
where γ is the delay weight and L is the loss function described later. In addition, the fusion factors ξ_1 and ξ_2 are constructed and used as the initial fit in the inner loop; they are initialized according to the input: if the input is a random sequence, ξ_1 = 0 and ξ_2 = 1, and if the input is an optimized waveform phase sequence, ξ_1 = 0.9 and ξ_2 = 0.1.
This is then passed to the inner iteration:
y', c' = F(y_0, ξ_1, ξ_2),   (14)
where y' is the optimal phase sequence obtained by the inner iteration, c' is its corresponding loss value, and F(·) denotes the inner iteration. Whether the outer loop has converged is then judged; one of two convergence conditions, both of which compare the loss values c_0 and c' of successive outer rounds against the outer-loop convergence factor θ_1, needs to be met.
When either condition is met, the algorithm has converged and y' is output as the phase sequence. A normalized phase matrix Y = mat(y') is then generated, where mat(·) converts the vector into a phase matrix, and the signal matrix S = e^{j2πY} is obtained. This signal matrix is the desired waveform.
If neither of the two outer-loop convergence conditions is met, the parameters are updated and a new round of the loop is started: y_0 = y' and c_0 = c' are updated, and ξ_1 and ξ_2 are updated correspondingly: if the input is a random sequence, ξ_1 = 0.5 and ξ_2 = 0.5, and if the input is an optimized waveform phase sequence, ξ_1 = 0.9 and ξ_2 = 0.1.
The flow chart of the outer iteration algorithm is shown in FIG. 1.
Inner iteration algorithm:
The inner-loop algorithm optimizes the waveform with a deep residual network in combination with the Adam algorithm and has a strong non-convex optimization capability. The main function of the inner iteration is, for a given input waveform, to optimize and generate a better output waveform and to converge automatically.
The deep residual network alleviates the vanishing-gradient and exploding-gradient problems that arise when a traditional neural network becomes too deep, so the depth of the neural network can be increased further. The invention therefore adopts a network model based on a deep residual network.
The whole network consists of a deep residual network module and a loss-function calculation module. A signal processing function is constructed from the signal model and used to calculate the loss value; the parameters of the deep residual network module are then optimized by the Adam algorithm, and the convergence condition is judged for the iterative loop. The signal processing module is also used in the outer loop.
Design of the deep residual network module:
The deep residual network is used to generate the optimized waveform: the input fully connected layer first performs a dimension conversion, several residual blocks follow, and finally the output fully connected layer performs a dimension conversion before the output. The specific structure is shown in FIG. 2.
When the phase sequence y_0 generated by the outer iteration is input into the deep residual network, the relationship between input and output can be expressed by the forward-propagation transfer function P(·) with parameters W and b,   (19)
where W = {W_i | i = 1, ..., 2R_n+2} and b = {b_i | i = 1, ..., 2R_n+2}, W_i and b_i denote the weight matrix and bias of the i-th fully connected layer, and R_n is the number of residual blocks. Each layer uses K = 128 neurons. The goal of (19) is to obtain the optimal transfer function P by optimizing W and b, so that the input sequence can generate the optimal output sequence.
The mathematical expression of a fully connected layer in the network is
p_i = x_{i-1}·W_i + b_i,   (20)
where x_{i-1} denotes the input of the i-th fully connected layer and p_i the output of the i-th fully connected layer. Passing p_i through the sigmoid activation function sigmoid(p) = 1/(1 + e^{-p}) gives x_i = sigmoid(p_i).
The deep residual network adds identity mappings on top of the fully connected layers to suppress the vanishing-gradient and exploding-gradient problems; every two layers of the network are bridged by an identity mapping. The structure of a residual block is shown in FIG. 3.
Let x_{i-1} be the input of the i-th residual block. Applying the fully-connected-layer computation twice, in the same way as above, gives p_i and p_{i+1}, and finally x_{i+1} = sigmoid(p_{i+1} + x_{i-1}).
The adopted deep residual network consists of several residual blocks plus an input layer and an output layer, where the number of neurons of the input layer and of the residual blocks is set to K = 128 and the number of neurons of the output layer is K = M×N. The input is a normalized random vector and the output is an incremental normalized phase vector. The invention uses a residual network consisting of 10 residual blocks.
The signal processing module:
The output generated by the deep residual network is an incremental phase; the desired optimal phase is an adjustment of the input phase. The invention therefore uses two adaptive adjustment factors to adaptively add the input phase and the incremental phase:
y_n = ξ_1·y_0 + ξ_2·Δy_n,
where ξ_1 is the input adaptation factor, ξ_2 is the incremental adaptation factor, and y_n is the phase waveform generated by the n-th optimization of the inner loop. The loss value of this round is then calculated by the signal processing module.
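A one-line sketch of this adaptive fusion in PyTorch is shown below; wrapping the result back into [0, 1] with a remainder is an implementation assumption.

```python
import torch

def fuse_phases(y0, delta_y, xi1, xi2):
    """y_n = xi1 * y_0 + xi2 * delta_y_n, kept as a normalized phase in [0, 1);
    the wrap into [0, 1) is an assumption, not taken from the patent."""
    return torch.remainder(xi1 * y0 + xi2 * delta_y, 1.0)

# Example: with a random input (xi1 = 0, xi2 = 1) the fused phase is the pure network output
y0, delta_y = torch.rand(640), torch.rand(640)
print(torch.equal(fuse_phases(y0, delta_y, 0.0, 1.0), torch.remainder(delta_y, 1.0)))
```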
Since the optimal phase obtained by the preceding processing is in normalized form, it is converted to [0, 2π], i.e. φ_n = 2π·y_n, and, to facilitate the correlation calculation, into the phase-matrix form Φ = mat(φ_n) ∈ R^{N×M}, where the m-th column represents the phase of the waveform transmitted by the m-th antenna. Because a real-valued network model is used and the pulse signals have the form s_m(n) = e^{jφ_m(n)}, they are converted into a signal real-part matrix P = cos(Φ) and a signal imaginary-part matrix H = sin(Φ).
For the correlation calculation, signal expansion matrices are constructed: in the aperiodic case a real-part expansion matrix P_AP and an imaginary-part expansion matrix Q_AP are constructed, and in the periodic case the expansion matrices P_P and Q_P are constructed. Here Q_{-N} denotes the matrix obtained by deleting the N-th row of the imaginary-part matrix H; similarly, Q_{-1} denotes the matrix obtained by deleting its 1st row. The same construction process is used for P_AP. In the following, Q̃ and P̃ denote the imaginary-part and real-part expansion matrices chosen according to the periodic or aperiodic case: in the periodic case Q̃ = Q_P and P̃ = P_P, and in the aperiodic case Q̃ = Q_AP and P̃ = P_AP.
The correlation computation in (2) and (3) is a convolution computation; here the convolution is performed by means of a convolutional network, because the correlation can be decomposed into an imaginary part and a real part that are computed separately. Here s_m^R(n) is the real part of s_m(n) and s_m^I(n) is its imaginary part. The convolutional network computes the partial correlations by convolution, e.g. r^{RR} = P̃ ⊛ P, where ⊛ denotes the convolution computation; r^{II}, r^{RI} and r^{IR} are obtained in the same way. The magnitude of the correlation is therefore obtained as
|r| = sqrt((r^{RR} + r^{II})² + (r^{IR} - r^{RI})²).
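The correlation-by-convolution step can be written with torch.nn.functional.conv1d, which computes a cross-correlation (no kernel flip), so the result stays differentiable for the Adam updates of the network parameters. The zero-padding scheme and the small numerical constant in the square root are assumptions.

```python
import torch
import torch.nn.functional as F

def xcorr_full(a, b):
    """All lags of sum_n a[n+k] * b[n] for 1-D real tensors a, b of length N,
    computed with conv1d (conv1d performs cross-correlation, no kernel flip)."""
    N = a.shape[0]
    a_pad = F.pad(a.view(1, 1, N), (N - 1, N - 1))       # zero-pad to cover k = -N+1..N-1
    return F.conv1d(a_pad, b.view(1, 1, N)).view(-1)     # length 2N - 1

def correlation_magnitude(Pm, Hm, Pl, Hl):
    """|r_{m,l}(k)| from the real/imaginary partial correlations (differentiable)."""
    rRR, rII = xcorr_full(Pm, Pl), xcorr_full(Hm, Hl)
    rIR, rRI = xcorr_full(Hm, Pl), xcorr_full(Pm, Hl)
    return torch.sqrt((rRR + rII) ** 2 + (rIR - rRI) ** 2 + 1e-12)

# Quick check: the k = 0 autocorrelation of a constant-modulus waveform equals N
N = 16
phi = 2 * torch.pi * torch.rand(N)
mag = correlation_magnitude(torch.cos(phi), torch.sin(phi), torch.cos(phi), torch.sin(phi))
print(torch.isclose(mag[N - 1], torch.tensor(float(N))))   # tensor(True)
```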
Next, the delay weighting coefficient vector γ is applied, so the weighted correlation is obtained as
r̄ = γ ⊙ |r|,
where ⊙ indicates element-wise multiplication. A weighted autocorrelation matrix C is then constructed from the weighted autocorrelations of all M waveforms, and a weighted cross-correlation matrix X is constructed from the weighted cross-correlations of all waveform pairs. The optimization criteria are constructed from these two correlation matrices:
WAISL = ΣC,   (32)
WCISL = 2×ΣX,   (33)
WAPSL = max(C),   (34)
WCPSL = max(X),   (35)
where Σ(·) and max(·) denote summing all elements of a matrix and taking the maximum over all elements, respectively.
The loss function of equation (12) is then constructed as a weighted combination of these criteria, i.e. the WCOF of equation (11), and the loss value is calculated, where n = 1, 2, ..., N, giving the loss value c_n of the n-th inner loop. The Adam algorithm is used to minimize c_n by optimizing W and b in the deep residual network module. The optimal phase sequence is saved according to the loss value:
y' = argmin_{y_n} c_n, n = 0, 1, ..., N.   (37)
After the loss value has been calculated, the adaptation factors are updated according to the loss values, where the loss values c_0 and c' correspond to y_0 and y' respectively, y' being the optimal phase of the current round of the inner iteration, i.e. c_0 = L(y_0, γ) and c' = L(y', γ), with L the loss function.
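One round of this inner optimization can be sketched as follows; the tiny stand-in network and placeholder loss are assumptions used only to show how the incremental phase, the adaptive fusion, the loss and the Adam update of W and B fit together.

```python
import torch
import torch.nn as nn

# Minimal stand-ins for the pieces described above (illustrative shapes only)
net = nn.Sequential(nn.Linear(640, 128), nn.Sigmoid(), nn.Linear(128, 640), nn.Sigmoid())
optimizer = torch.optim.Adam(net.parameters(), lr=5e-4)   # learning rate from the embodiment

def loss_fn(y_n):
    """Placeholder for L(y_n, gamma); a differentiable sidelobe loss goes here."""
    return torch.mean((y_n - 0.5) ** 2)

y0 = torch.rand(640)            # input normalized phase sequence
xi1, xi2 = 0.0, 1.0             # fusion factors for a random input
best_y, best_c = None, float("inf")

for n in range(1, 201):         # a short inner loop for illustration
    delta_y = net(y0)                               # incremental phase from the residual net
    y_n = xi1 * y0 + xi2 * delta_y                  # adaptive fusion of input and increment
    c_n = loss_fn(y_n)                              # c_n = L(y_n, gamma)
    optimizer.zero_grad()
    c_n.backward()                                  # Adam optimizes W and B
    optimizer.step()
    if c_n.item() < best_c:                         # keep the best phase sequence y'
        best_c, best_y = c_n.item(), y_n.detach().clone()

print(best_c)
```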
Determination of inner-iteration convergence:
Finally, convergence is judged. First the maximum number of iterations N_max, the minimum number of iterations N_min, the inner-loop convergence factor θ_2 and the convergence judgment interval E are determined. Define j_pre = 0 and j_now = 0 as the accumulated convergence amounts of two consecutive convergence intervals E. The inner loop is exited when either of the following two conditions is met.
Condition one:
judge whether the number of iterations has reached its limit; if n = N_max, exit the inner loop.
Condition two:
1) j_now = j_now + c_n;
2) when n is an integer multiple of E, compare j_pre and j_now;
3) if the relative change between j_pre and j_now is smaller than θ_2 and n > N_min, jump out of the inner loop; otherwise set j_pre = j_now, j_now = 0, and continue the loop.
Example 2
Two versions of the method of the invention are compared with existing scheme one, the scheme disclosed in "H. He, P. Stoica, and J. Li, 'Designing unimodular sequence sets with good correlations - including an application to MIMO radar,' IEEE Trans. Signal Process., vol. 57, no. 11, pp. 4391-4405, Nov. 2009", and with existing scheme two, the scheme disclosed in "Cui G, Yu X, Piezzo M, et al., 'Constant modulus sequence set design with good correlation properties,' Signal Processing, 2017, 139: 75-85".
In this embodiment, the convergence factor of the outer iteration is θ_1 = 0.001 and the default input phase y_0 is a random normalized phase sequence. The convergence factor of the inner iteration is θ_2 = 0.001, the maximum number of inner-loop iterations is N_max = 5000, the minimum number of iterations is N_min = 1000, the convergence interval is E = 100, and the default weight vector is l_1 = l_2 = l_3 = l_4 = l_5 = 1. The learning rate of the Adam deep-learning algorithm is 0.0005, and the deep residual network is composed of 10 residual blocks.
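For convenience, the hyperparameters of this embodiment can be collected in a single configuration, for example as below; the key names are illustrative and the delay-weight vector γ is set separately for each experiment.

```python
# Hyperparameters of this embodiment gathered in one place (key names are illustrative)
config = {
    "theta1": 1e-3,               # outer-iteration convergence factor
    "theta2": 1e-3,               # inner-iteration convergence factor
    "N_max": 5000,                # maximum number of inner-loop iterations
    "N_min": 1000,                # minimum number of inner-loop iterations
    "E": 100,                     # convergence judgment interval
    "weights": [1, 1, 1, 1, 1],   # default weight vector l1..l5
    "adam_lr": 5e-4,              # Adam learning rate
    "num_residual_blocks": 10,
    "neurons": 128,
}
print(config)
```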
(a) Performance comparison over multiple sequence lengths
The maximum number of iterations of existing method one is set to 10000 and that of existing method two to 500. The number of signals is set to M = 10 and the sequence lengths to N = [64, 128, 256, 512, 1024, 2048]. Because the optimization times of existing methods one and two are excessively long, some of the long-sequence waveforms cannot be optimized within an acceptable time. The delay weight vector γ is then set.
The performance comparison results are shown in FIGS. 4 and 5: the method of the invention outperforms existing methods one and two and can still generate waveforms with excellent performance for long sequences. FIG. 4 shows the WPSL performance, which is a significant improvement over the two existing methods. FIG. 5 shows the WISL performance: because the method of the invention performs a weighted optimization over several performance indices, its WISL for short sequences is slightly worse than that of the two existing methods, but the method of the invention gradually gains the advantage as the sequence length increases. It can be seen that the method of the invention further improves the WPSL while maintaining the WISL.
(b) Influence of the initial value on the optimization
The number of signals is set to M = 10 and the sequence length to N = 128. A randomly initialized sequence and a sequence optimized by an existing method are used as the initial inputs, respectively. Since the existing method, which optimizes the WISL, does not normalize the magnitudes, the weight vector is set so that the optimization targets are the same, which makes the experiment more rigorous. The random input is compared with the optimized sequence of the existing method. The results are shown in FIG. 6: using the optimized sequence as input greatly shortens the optimization time and requires fewer iterations than the random input, while the performance remains excellent.
(c) Analysis of the optimization time
The maximum number of iterations of existing method one is set to 10000 and that of existing method two to 500. The number of signals is set to M = 10 and the sequence lengths to N = [64, 128, 256, 512, 1024, 2048]. FIG. 7 shows the time-complexity comparison of the four methods: the optimization time of the method of the invention does not increase significantly with the sequence length, while its performance remains excellent. The other methods cannot guarantee the optimization for long sequences because their running times become too long.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except combinations where mutually exclusive features and/or steps are present.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (5)

1. A MIMO radar waveform optimization method based on an iterative optimization network, characterized in that a normalized random vector or an optimized normalized phase vector is input into a preset network model and a signal matrix is output, the signal matrix being the MIMO radar waveform;
specifically: a signal processing function is set as the loss function of the network model and is used to drive the network model, and the parameters of the network model are optimized by the Adam deep-learning method;
wherein: a delay weight vector and a normalized phase sequence are input into the constructed signal processing function, i.e. the loss function, to obtain a loss value, the normalized phase sequence being converted into the phase-matrix form of the waveforms transmitted by the multiple antennas; constructing the signal processing function comprises converting the pulse signals of the antenna transmit waveforms into a signal real-part matrix and a signal imaginary-part matrix for the correlation computation, obtaining the correlation magnitudes by convolution in a convolutional network, and constructing weighted correlation matrices according to the delay weight vector to compute the correlation values;
a normalized random vector and the number of neurons are input when constructing the deep residual network, which outputs a normalized phase vector;
convergence conditions are added to the signal processing function and the deep residual network to obtain an inner-iteration network and to construct an outer-iteration network; iterative convergence judgments are performed on the inner-iteration network and the outer-iteration network in turn; after convergence, the phase matrix converted from the vector is obtained and the signal matrix is calculated.
2. The MIMO radar waveform optimization method based on the iterative optimization network of claim 1, characterized in that the detailed steps are as follows:
Step 1. Construct the signal processing function.
The input is a normalized phase sequence y_n and a delay weight vector γ = [γ_{-N+1}, ..., γ_{N-1}]; the output is the loss value c_n = L(y_n, γ), so a loss function L(·) is constructed; y_n is converted to [0, 2π], i.e. φ_n = 2π·y_n, and then into the phase-matrix form Φ = mat(φ_n) ∈ R^{N×M}, where the m-th column represents the phase of the waveform transmitted by the m-th antenna; because the pulse signals have the form s_m(n) = e^{jφ_m(n)}, they are converted into a signal real-part matrix P = cos(Φ) and a signal imaginary-part matrix H = sin(Φ).
Signal expansion matrices are constructed for the correlation calculation: in the aperiodic case a real-part expansion matrix P_AP and an imaginary-part expansion matrix Q_AP are constructed, and in the periodic case the expansion matrices P_P and Q_P are constructed, where Q_{-N} denotes the matrix obtained by deleting the N-th row of the imaginary-part matrix H and, similarly, Q_{-1} denotes the matrix obtained by deleting its 1st row, the same construction process being used for P_AP; in the following, Q̃ and P̃ denote the imaginary-part and real-part expansion matrices according to the periodic or aperiodic case: in the periodic case Q̃ = Q_P and P̃ = P_P, and in the aperiodic case Q̃ = Q_AP and P̃ = P_AP.
The convolution computation is performed by means of a convolutional network, since the correlation can be decomposed into an imaginary part and a real part that are computed separately, where s_m^R(n) is the real part of s_m(n) and s_m^I(n) is its imaginary part; the convolutional network computes the partial correlations by convolution, e.g. r^{RR} = P̃ ⊛ P, where ⊛ denotes the convolution computation, and r^{II}, r^{RI} and r^{IR} are obtained in the same way, so the magnitude of the correlation is obtained as |r| = sqrt((r^{RR} + r^{II})² + (r^{IR} - r^{RI})²); the weighted correlation is then obtained from the delay weight vector γ as r̄ = γ ⊙ |r|, where ⊙ indicates element-wise multiplication; a weighted autocorrelation matrix C and a weighted cross-correlation matrix X are then constructed, and the loss function is constructed and the loss value calculated as a weighted combination of the integrated and peak sidelobe levels obtained from C and X, where Σ(·) and max(·) denote summing all elements of a matrix and taking the maximum over all elements, respectively, and n = 1, 2, ..., N, giving the loss value c_n.
Step 2. Construct the deep residual network.
The input of the deep residual network is a normalized random vector and the number of neurons d; the output is a normalized phase vector. The deep residual network consists of several residual blocks plus input and output fully connected layers, each residual block consisting of two fully connected layers combined with an identity mapping; the network realizes a forward mapping parameterized by W and B, which are the parameters of the deep residual network. The mathematical form of a fully connected layer is p_i = x_{i-1}·W_i + b_i, where x_{i-1} denotes the input of the i-th fully connected layer and p_i its output; passing p_i through the sigmoid activation function gives x_i = sigmoid(p_i). An identity mapping is constructed across every two fully connected layers: let x_{i-1} be the input of a residual block and d the number of neurons in each layer of the block; applying the fully-connected computation twice gives p_i and p_{i+1}, and finally x_{i+1} = sigmoid(p_{i+1} + x_{i-1});
Step 3. Construct the inner-iteration network.
The inner-iteration network consists of the deep residual network, the signal processing function and a convergence condition; the input is the input normalized phase sequence y_0, the fusion factors ξ_1 and ξ_2, the delay weight vector γ, the maximum number of iterations N_max, the minimum number of iterations N_min, the current iteration number n = 0, the inner-iteration convergence factor θ_2, the convergence interval E, the number of residual blocks R_N, the number of neurons K of the network and the learning rate κ of the Adam algorithm.
The initial loss value c_0 = L(y_0, γ) is obtained; an incremental normalized phase sequence Δy_n is obtained through the residual network, and the normalized phase sequence y_n is constructed by adaptively adding the input phase and the incremental phase with the two adaptive adjustment factors, y_n = ξ_1·y_0 + ξ_2·Δy_n; the loss value is obtained through the signal processing function, c_n = L(y_n, γ), where n = 1, 2, ..., N, giving the loss value c_n of the n-th inner loop; the Adam algorithm is used to minimize c_n by optimizing W and B in the deep residual network module, and the optimal phase sequence is saved according to the loss value, y' = argmin_{y_n} c_n, n = 0, 1, ..., N; the adaptation factors are updated according to the loss values, where the loss values c_0 and c' correspond to y_0 and y' respectively, y' being the optimal phase of the previous round of the inner loop, i.e. c_0 = L(y_0, γ) and c' = L(y', γ), with L the loss function;
finally, convergence is judged: the maximum number of iterations N_max, the minimum number of iterations N_min, the inner-iteration convergence factor θ_2 and the convergence judgment interval E are determined first, and j_pre = 0 and j_now = 0 are defined as the accumulated convergence amounts of two consecutive convergence intervals E; the inner loop is exited when the convergence condition of the inner-iteration network is met;
step 3, constructing external iteration network
Input as an initial normalized phase sequence
Figure FDA0002983185050000052
Fusion factor xi1And xi2Outer iteration convergence factor θ1The output is the required waveform matrix S,
first generating an input normalized phase sequence
Figure FDA0002983185050000053
The phase sequence can be generated randomly or optimized by inputting, wherein M1m(n)∈[0,1]Represents the nth sub-pulse sequence transmitted by the mth antenna, and then inputs the normalized phase sequence into the loss function to obtain the loss value.
$c_0 = L(y_0, \gamma)$

where L(·) is the signal processing function. Fusion factors $\xi_1$ and $\xi_2$ are additionally constructed for the initial fitting in the inner loop and are initialized according to the input: if a random sequence is input, $\xi_1 = 0$ and $\xi_2 = 1$; if an optimized waveform phase sequence is taken as input, $\xi_1 = 0.9$ and $\xi_2 = 0.1$.
These are then input into the inner iteration as follows:

$y', c' = F(y_0, \xi_1, \xi_2)$

where $y'$ is the optimal phase sequence obtained by the inner loop, $c'$ is the corresponding loss value, and F(·) denotes the inner iteration. It is then judged whether the outer iteration has converged, where one of the following two convergence conditions must be met:
[formula image: two outer-loop convergence conditions on $c_0$, $c'$ and $\theta_1$]
where $\theta_1$ is the outer-loop convergence factor. If neither outer-loop convergence condition is satisfied, the parameters are updated and a new round of the loop is performed: $y_0 = y'$ and $c_0 = c'$ are updated, and $\xi_1, \xi_2$ are updated correspondingly; if a random sequence was taken as input, $\xi_1 = 0.5$ and $\xi_2 = 0.5$; if an optimized waveform phase sequence was taken as input, $\xi_1 = 0.9$ and $\xi_2 = 0.1$.
If either condition is satisfied, the algorithm has converged and $y'$ is output as the phase sequence; the normalized phase matrix $\mathrm{mat}(y')$ is then generated, where mat(·) converts the vector into a phase matrix, and the signal matrix S is obtained from the phase matrix:

[formula image: construction of the signal matrix S from the phase matrix]

This signal matrix S is the desired waveform.
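Below is a schematic Python sketch of the outer-iteration control flow of this claim. Here inner_iteration and signal_processing_loss are hypothetical placeholders for the modules described above, the relative-change test against θ1 is an assumed reading of the convergence condition shown only as an image, and the final mapping from normalized phase to the complex waveform S (exp(j·2π·phase)) is likewise an assumption rather than the claim's stated formula.

```python
import numpy as np

def outer_iteration(y0, gamma, theta1, inner_iteration, signal_processing_loss,
                    from_random=True, m_antennas=4, n_pulses=16, max_rounds=50):
    """Schematic outer loop: re-seed the inner loop and test termination.

    inner_iteration(y0, gamma, xi1, xi2) -> (best_phase, best_loss) and
    signal_processing_loss(y, gamma) -> scalar are placeholders for the claim's modules.
    len(y0) is assumed to equal m_antennas * n_pulses.
    """
    xi1, xi2 = (0.0, 1.0) if from_random else (0.9, 0.1)     # initial fusion factors
    c0 = signal_processing_loss(y0, gamma)
    y_best, c_best = y0, c0
    for _ in range(max_rounds):
        y_best, c_best = inner_iteration(y0, gamma, xi1, xi2)
        # assumed outer convergence test: relative loss change falls below theta1
        if c0 > 0 and abs(c0 - c_best) / c0 < theta1:
            break
        y0, c0 = y_best, c_best                              # replace the initial value
        xi1, xi2 = (0.5, 0.5) if from_random else (0.9, 0.1) # updated fusion factors
    phase = np.reshape(y_best, (m_antennas, n_pulses))       # mat(.): vector -> M x N phase matrix
    S = np.exp(1j * 2 * np.pi * phase)                       # assumed constant-modulus waveform
    return S
```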
3. The MIMO radar waveform optimization method based on the iterative optimization network of claim 2, wherein the deep residual network is composed of 10 residual blocks and input and output layers, the input layer and the output layer mainly perform dimension conversion, and the number of neurons in the input and output layers and in the residual blocks is d = 128.
4. The MIMO radar waveform optimization method based on the iterative optimization network of claim 2, wherein the convergence condition of the inner iteration network is condition one or condition two;
Condition one:
judging whether the number of iterations has reached its limit: if $n = N_{max}$, the inner iteration exits;
Condition two:
1) $j_{now} = j_{now} + c_n$;
2) when n = E, $j_{pre} = j_{now}$ and $j_{now} = 0$;
when n = zE (z = 2, 3, ...),

[formula image]

and if the judgment

[formula image: comparison of $j_{now}$ and $j_{pre}$ against the inner-iteration convergence factor]

holds and $n > N_{min}$, the inner loop is exited; otherwise $j_{pre} = j_{now}$, $j_{now} = 0$, n = n + 1, and the iteration continues.
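As an illustration of the two exit conditions above, here is a minimal Python sketch of the check; the exact judgment formula appears only as an image in the claim, so the relative-change comparison of j_now against j_pre using θ2 is an assumed reading, and the state dictionary is simply a convenient way to carry the accumulators.

```python
def inner_convergence(n, c_n, state, n_max, n_min, interval_e, theta2):
    """Schematic check of the inner-loop exit conditions of claim 4.

    state is a dict holding the accumulators "j_pre" and "j_now"; returns True to exit.
    The comparison of j_now against j_pre is a relative-change test, an assumed
    reading of the judgment formula shown only as an image in the claim.
    """
    if n == n_max:                        # condition one: iteration limit reached
        return True
    state["j_now"] += c_n                 # condition two, step 1: accumulate the loss
    if n == interval_e:                   # first full interval: only roll the accumulators
        state["j_pre"], state["j_now"] = state["j_now"], 0.0
        return False
    if n > interval_e and n % interval_e == 0:           # n = zE, z = 2, 3, ...
        rel = abs(state["j_now"] - state["j_pre"]) / max(state["j_pre"], 1e-12)
        if rel < theta2 and n > n_min:    # judged converged within the interval
            return True
        state["j_pre"], state["j_now"] = state["j_now"], 0.0
    return False
```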
5. The MIMO radar waveform optimization method based on the iterative optimization network as claimed in claim 2, wherein the method comprises a double-iteration network model, the network model being an iterative optimization network composed of an outer iteration sub-algorithm and an inner iteration sub-algorithm, wherein the outer iteration sub-algorithm replaces the initial value and judges the algorithm termination condition, and the inner iteration sub-algorithm performs waveform optimization with a fixed initial value using a neural network.
CN202110293102.7A 2021-03-18 2021-03-18 MIMO radar waveform optimization method based on iterative optimization network Active CN113050077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110293102.7A CN113050077B (en) 2021-03-18 2021-03-18 MIMO radar waveform optimization method based on iterative optimization network

Publications (2)

Publication Number Publication Date
CN113050077A true CN113050077A (en) 2021-06-29
CN113050077B CN113050077B (en) 2022-07-01

Family

ID=76513758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110293102.7A Active CN113050077B (en) 2021-03-18 2021-03-18 MIMO radar waveform optimization method based on iterative optimization network

Country Status (1)

Country Link
CN (1) CN113050077B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492556A (en) * 2018-10-28 2019-03-19 北京化工大学 Synthetic aperture radar target identification method towards the study of small sample residual error
CN109375186A (en) * 2018-11-22 2019-02-22 中国人民解放军海军航空大学 Radar target identification method based on the multiple dimensioned one-dimensional convolutional neural networks of depth residual error
WO2020220191A1 (en) * 2019-04-29 2020-11-05 Huawei Technologies Co., Ltd. Method and apparatus for training and applying a neural network
CN111220958A (en) * 2019-12-10 2020-06-02 西安宁远电子电工技术有限公司 Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network
CN111060902A (en) * 2019-12-30 2020-04-24 电子科技大学 MIMO radar waveform design method based on deep learning
CN111693976A (en) * 2020-06-08 2020-09-22 电子科技大学 MIMO radar beam forming method based on residual error network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Charles E. Thornton et al.: "Deep Reinforcement Learning Control for Radar Detection and Tracking in Congested Spectral Environments", IEEE Transactions on Cognitive Communications and Networking *
Zhang Ning: "Individual identification of emitters based on residual neural network", Aerospace Electronic Warfare *
Yang Yuhao et al.: "High-resolution range profile target recognition based on convolutional neural network", Modern Radar *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505480A (en) * 2021-07-08 2021-10-15 北京华大九天科技股份有限公司 Method for improving transient simulation convergence of transmission line
CN113791404A (en) * 2021-09-15 2021-12-14 电子科技大学长三角研究院(衢州) Radar ambiguity-resolving and shielding method based on orthogonal frequency division signals
CN113791404B (en) * 2021-09-15 2024-05-07 电子科技大学长三角研究院(衢州) Radar defuzzification and shielding method based on orthogonal frequency division signals

Also Published As

Publication number Publication date
CN113050077B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN113050077B (en) MIMO radar waveform optimization method based on iterative optimization network
CN111060902B (en) MIMO radar waveform design method based on deep learning
Hu et al. Designing unimodular waveform (s) for MIMO radar by deep learning method
TW544622B (en) Adaptive filter and method for adaptive filtering
KR100229094B1 (en) Signal processing method of array antenna using eigenvector corresponding to maximum eigen value
Jing et al. Designing unimodular sequence with low peak of sidelobe level of local ambiguity function
US20200019839A1 (en) Methods and apparatus for spiking neural network computing based on threshold accumulation
Basit et al. Transmit beamspace design for FDA–MIMO radar with alternating direction method of multipliers
Zhao et al. The structure optimization of radial basis probabilistic neural networks based on genetic algorithms
CN111553513A (en) Medium-and-long-term runoff prediction method based on quadratic decomposition and echo state network
CN113472409B (en) Hybrid pre-coding method based on PAST algorithm in millimeter wave large-scale MIMO system
Zhang et al. Linear unequally spaced array synthesis for sidelobe suppression with different aperture constraints using whale optimization algorithm
CN116882149A (en) Antenna array synthesis method based on hybrid differential drosophila optimization algorithm
CN114839604A (en) Orthogonal waveform design method and system for MIMO radar
CN114167347B (en) Amplitude-phase error correction and direction finding method for mutual mass array in impact noise environment
CN112242860B (en) Beam forming method and device for self-adaptive antenna grouping and large-scale MIMO system
CN113238220A (en) MIMO radar orthogonal phase coding design method based on SCAN algorithm
Suksmono et al. Intelligent beamforming by using a complex-valued neural network
Lin et al. Periodic binary waveform design for MIMO radar
Zhong et al. Learned Complex Circle Manifold Network for MIMO Radar Waveform Design
Erdim et al. Covariance Matrix Tapered Beamformer That Is Universal Over Notch Width
CN115859017A (en) Dimension reduction robust self-adaptive beam forming method based on grouping circulation optimization
Lv et al. MIMO Radar Transmit Beampattern Design Based on Neural Network Under Similarity and Constant Modulus Constraints
Al Ka'bi A Proposed Algorithm for Synthesizing the Radiation Pattern of Antenna Arrays
Sun et al. Polyphase Orthogonal Waveform Design for MIMO Radar Based on Improved HHO Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant