CN116068520A - Cognitive radar joint modulation recognition and parameter estimation method based on Transformer - Google Patents
Cognitive radar joint modulation recognition and parameter estimation method based on Transformer
- Publication number: CN116068520A (application CN202310246346.9A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/417—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/418—Theoretical aspects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a Transformer-based method for joint modulation recognition and parameter estimation in cognitive radar, which can simultaneously perform automatic modulation type recognition and variable-structure modulation parameter estimation for radar working modes defined by different combinations of modulation types and modulation parameters of each control parameter, thereby recognizing the cognitive radar working mode. Based on the idea of multi-output multi-structure learning, the method combines a recurrent neural network with a Transformer structure and exploits the automatic feature learning and characterization capability of deep networks to effectively extract inter-pulse temporal features. Through a special label sequence design, modulation parameters of both scalar and vector structures can be estimated at the same time, the association between modulation type labels and modulation parameter labels can be fully exploited, and recognition performance is improved. Variable-length output of the label sequence is achieved by having the Transformer output the label sequence recursively.
Description
Technical Field
The invention relates to the technical field of radar electronic reconnaissance, and in particular to a Transformer-based method for joint modulation recognition and parameter estimation in cognitive radar.
Background
Cognitive radar is a complex sensor with a variety of dynamically changing working modes; it can adjust its transmit and receive chains in real time according to changes in the environment and the target, realizing flexible working modes so as to fully exploit the radar's performance potential and meet preset radar performance indices. The working mode of a cognitive radar can be defined as a radar pulse sequence of finite length in which each pulse carries several control parameters, such as pulse repetition interval (PRI), radio frequency (RF), pulse width (PW), and intra-pulse waveform (modulation on pulse, MOP). Each control parameter has a corresponding modulation type, each modulation type has a specific parameter space from which concrete modulation parameter values are selected, and different combinations of modulation types and modulation parameters define different radar working modes. By continuously sensing the environment, the cognitive radar can optimize a preset objective function and flexibly switch between different modulation types for each control parameter, realizing a modulation-type-level radar working state; furthermore, for a specific modulation type, the cognitive radar can optimize the concrete modulation parameter values within that type's parameter space, realizing a modulation-parameter-level radar working state with the same modulation type but different modulation parameters. Effective recognition of the cognitive radar working state therefore requires both recognition of the different modulation types of the control parameters and estimation of the modulation parameters within the same modulation type.
This flexible switching of radar working states at both the type level and the parameter level poses great challenges to the task of cognitive radar working state recognition.
In conventional radar working state recognition, traditional methods usually treat modulation type recognition and modulation parameter estimation as two unrelated tasks. Early automatic modulation type recognition often used mathematical statistics: handcrafted features were designed from prior knowledge, and intercepted pulse sequences were classified after feature extraction. However, traditional methods often bring high computational complexity and cannot meet the timeliness requirements of practical information processing; moreover, different signal types require differently designed features, so these methods cannot adapt to complex and changing electromagnetic environments. In addition, the models used in traditional methods assume ideal data, whereas the pulse sequences received by a real system are affected by three typical non-ideal factors, namely parameter measurement errors, missing pulses, and spurious pulse interference, which degrade the ability of traditional methods to recognize the cognitive radar working state. As for modulation parameter estimation, traditional methods can usually only determine discrete values of each state-defining parameter from a statistical perspective, but cannot give the modulation parameters corresponding to the modulation type of each state-defining parameter.
Although conventional methods can achieve good performance on the two tasks separately, they find it difficult to mine and exploit the information shared between them. A deep learning model is a neural network with multiple nonlinear mapping layers; it can abstract the input sequence layer by layer, extract features, uncover deeper latent regularities, and is strongly robust to non-ideal conditions such as noise, spurious pulses, and missing pulses. Within deep learning, multi-output multi-structure deep neural networks can solve several tasks simultaneously, and their outputs can have variable structure, a capability widely used in many fields. On the one hand, multi-output learning lets one network complete several tasks at once and share information across tasks, improving network performance; on the other hand, multi-output learning enables variable-structure outputs, adapting to the variable and flexible numbers of modulation parameters that different modulation types carry in actual pulse sequences of radar working states.
Disclosure of Invention
The invention provides a Transformer-based method for joint modulation recognition and parameter estimation in cognitive radar. For a received cognitive radar pulse signal with multiple control parameters, rich modulation types, and flexible parameter values, the method simultaneously realizes automatic modulation type recognition and variable-structure modulation parameter estimation, completing the recognition of working states defined by different combinations of modulation types and modulation parameters.
A Transformer-based cognitive radar joint modulation recognition and parameter estimation method comprises the following steps:
S1. Constructing a data set for training:
S11. When an input signal for training is an intra-pulse waveform signal, an intra-pulse modulation type is selected from a fixed set of modulation types together with the corresponding modulation parameters, yielding a waveform signal sample described by its modulation type and corresponding modulation parameters. Multiple waveform signal samples with different combinations of modulation types and modulation parameters are obtained and labeled with the modulation type and modulation parameters describing each signal, forming a data set for training.
S12. When the input signals for training are PDW (pulse description word) signals, each PDW signal is described by definition parameters, and each pulse in a PDW signal is described by an M-dimensional vector giving the concrete values of the M definition parameters for that pulse, where M denotes the number of definition parameters. For each definition parameter, a corresponding modulation type is selected from a fixed set of modulation types and the corresponding modulation parameters are selected from the corresponding parameter space, so that a PDW signal sample is described by the combination of modulation type and modulation parameters of each definition parameter. Multiple PDW signal samples with different combinations of modulation types and modulation parameters are obtained and labeled with the definition parameters, modulation types, and modulation parameter combinations describing each signal, forming a data set for training.
S2. Constructing a multi-output multi-structure label set:
S21. Designing labels for the signals. For an intra-pulse waveform signal, the modulation type and modulation parameter labels of the signal are characterized as a sequence, with the modulation type first and the modulation parameters after it, and identifiers are set to characterize the relations among the labels in the label sequence, yielding the label sequence of the waveform signal. For a PDW signal, the modulation type and modulation parameter labels corresponding to its several definition parameters are characterized as a sequence: the modulation type and modulation parameter labels belonging to the same definition parameter are placed in adjacent positions, with the modulation type first and the modulation parameters after it, and identifiers are set to characterize the relations among the labels in the label sequence, yielding the label sequence of the PDW signal.
S22. Quantization-encoding the obtained label sequence: for the modulation parameter labels in the label sequence, continuous values are encoded into discrete values at a certain quantization interval; the modulation type labels and the identifiers in the label sequence are directly encoded into specified discrete values; the result is a label sequence represented by discrete values after quantization encoding.
S3. Constructing a deep multi-output multi-structure neural network JMRRE-MOMS, which comprises a data mapping module and a depth feature extraction module, specifically as follows:
S31. The data mapping module comprises the data mapping of signals and the data mapping of labels:
the data mapping of the signal is implemented based on an LSTM layer, yielding the mapped feature vector X_project;
the data mapping of the label is implemented by an embedding layer and a positional encoding layer: the label sequence is mapped into a low-dimensional feature space by the embedding layer, temporal position information is then added to the embedded label sequence by the positional encoding layer, completing the data mapping of the label and yielding the label mapping result Y_project;
S32. The depth feature extraction module uses a Transformer structure, relying on an encoder-decoder framework, specifically:
X_project first enters the encoder, where feature extraction proceeds sequentially through a self-attention layer, a feedforward layer, a residual connection layer, and a normalization layer, specifically:
X_project enters the self-attention layer; the first step of computing self-attention features is to generate three vectors from each encoder input vector: a query vector q, a key vector k, and a value vector v;
the second step of the self-attention computation is to compute a score indicating how much attention is paid to other time steps when encoding the current time step i; the score is obtained as the dot product of the query vector of the current time step with the key vector of another time step j: s_ij = q_i · k_j, where q_i denotes the query vector of the current time step i and k_j denotes the key vector of the other time step j;
the third step of computing self-attention is to divide each score by the square root of the key-vector dimension, √d_k, and normalize the scores over all positions through a softmax layer, so that the resulting weights are positive and sum to 1;
the fourth step of computing self-attention is to weight each value vector by its softmax weight and sum them: z_i = Σ_j softmax_j(s_ij / √d_k) · v_j, where v_j denotes the value vector of time step j;
Z is then fed into the feedforward layer for feature extraction; the feedforward layer comprises two linear transformation layers with a ReLU activation function, yielding the feedforward layer output;
residual connection and normalization are then applied to the feedforward layer output, yielding the output Z* of the first encoder;
Z* then passes sequentially through the cascaded encoders, each of which has the same structure comprising a self-attention layer, a feedforward layer, a residual connection layer, and a normalization layer, obtaining the query vector q and key vector k of the last encoder;
the decoder structure comprises two groups of self-attention layers with residual connection and normalization layers; Y_project enters the decoder's first set of self-attention and residual connection and normalization layers, then enters the second set, where the query vector q and key vector k of the second self-attention layer are computed in the last encoder;
the first element y_1 of the label sequence y is, after label mapping, fed into the cascaded encoders, and the second element y_2 of the label sequence is obtained by prediction; then (y_1, y_2), after label mapping, is fed through for prediction of the third element y_3, and so on until the last element y_L of the label sequence is computed;
S4. The data set obtained in step S1 is input into the deep multi-output multi-structure neural network JMRRE-MOMS, and a Softmax layer outputs the probability distribution sequence of the labels; a cross-entropy loss function is constructed using the true label sequence y and the predicted label probability distribution sequence ŷ; the deep multi-output multi-structure neural network is trained based on this loss function;
S5. Testing the deep multi-output multi-structure neural network:
the input radar signal x to be tested is arranged into the sample format of the data set and input into the multi-output-learning network JMRRE-MOMS; the data x to be tested enters the first encoder, and the start identifier <BOLS>, as the first element of the label sequence, enters the decoder, which predicts the second element of the label sequence; the predicted element is combined with the previous elements and re-enters the decoder to predict the next element; the prediction of the label sequence proceeds in this recursive manner until the end identifier <EOLS> is detected, marking the end of the prediction of the label sequence ŷ for the data x to be tested, after which the modulation type recognition result and the modulation parameter estimation result are obtained.
Preferably, in S21, identifiers marking the beginning and the end of the sequence are placed at the beginning and end of the label sequence, respectively, and separators are placed between different labels.
Preferably, in S22, when the obtained label sequence is quantization-encoded, for the modulation parameter labels in the label sequence, the continuous values y_continuous are encoded into discrete values at a set quantization interval: a quantization interval and an upper quantization limit D_up are set, and the parameter labels are mapped from the continuous space into the discrete space.
Preferably, in S31, the positional encoding layer encodes the positions of the input label sequence as
PE_(l,2i) = sin(l / 10000^(2i/d_model)),  PE_(l,2i+1) = cos(l / 10000^(2i/d_model)),
where PE_(l,2i) denotes the encoding of even positions and PE_(l,2i+1) denotes the encoding of odd positions; together they constitute the positional encoding result PE.
Preferably, in S32, the steps of computing self-attention are performed in matrix form:
Z = softmax(Q·K^T / √d_k) · V
where Q = (q_1, q_2, …, q_T)^T, K = (k_1, k_2, …, k_T)^T, V = (v_1, v_2, …, v_T)^T, and Z = (z_1, z_2, …, z_T)^T.
Preferably, the output of the feedforward layer is expressed as:
FC(z_i) = max(0, z_i·W_1 + b_1)·W_2 + b_2
where z_i denotes a row vector of Z, W_1 and W_2 are two trainable parameter matrices, and b_1 and b_2 are trainable bias parameters.
The invention has the beneficial effects that:
the invention provides a cognitive radar joint modulation recognition and parameter estimation method based on a transducer, which can simultaneously realize automatic modulation type recognition and modulation parameter estimation of a variable structure for a received cognitive radar pulse signal sequence with a plurality of control parameters, abundant modulation types and flexible parameter values, and specifically comprises the following steps:
(1) The method combines a multi-output multi-structure deep neural network with a Transformer model; exploiting the automatic feature learning and characterization capability of deep networks, it can effectively extract inter-pulse and intra-pulse temporal features and complete the automatic modulation type recognition and modulation parameter estimation tasks even under severe non-ideal conditions;
(2) The method realizes modulation parameter estimation with variable-length, variable-structure output, completing the estimation of different numbers and structures of modulation parameters for different modulation types;
(3) The proposed JMRRE-MOMS method is not limited to PDW-type input; it can complete the same modulation type recognition and modulation parameter estimation tasks on input signals in intra-pulse waveform form;
(4) The proposed method for joint modulation type recognition and modulation parameter estimation of cognitive radar signal sequences can provide technical support for subsequent analysis and reasoning about the cognitive radar working state.
Drawings
Fig. 1 is a diagram illustrating an exemplary modulation of PDW signals for a simulated cognitive radar operating condition in accordance with the present invention.
Fig. 2 is a hierarchical structure diagram of a joint modulation type recognition and parameter estimation network constructed in accordance with the present invention.
Fig. 3 is an internal hierarchical structure diagram of the encoder and decoder.
Fig. 4 is an exemplary diagram of a joint modulation type identification and parameter estimation network test.
Fig. 5 is a diagram illustrating an example of a method for setting a tag for a PDW signal of a cognitive radar.
Fig. 6 is a schematic flow chart of a method for constructing a multi-output label set according to the present invention.
Detailed Description
The invention provides a Transformer-based cognitive radar joint modulation recognition and parameter estimation method, comprising the following steps:
s1, constructing a data set for training:
S11. If the input signal is an intra-pulse waveform signal, the description is as follows:
the input pulse signal sample is a pulse waveform signal at a specific signal-to-noise ratio; a corresponding modulation type is selected for the waveform signal from a fixed set of modulation types, along with the corresponding modulation parameters, so that one input signal sample is described by a combination of its modulation type and corresponding modulation parameters. Each input sample is an intra-pulse modulated waveform and is labeled with the modulation type and modulation parameters describing the signal, forming a data set for training.
S12, if the input signal is a PDW signal, the description is as follows:
the input pulse signal sample is a PDW sequence, and is described by M-dimensional definition parameters such as PRI, RF, PW and MOP (again, for simplicity, these four definition parameters are described later as examples). In the PDW sequence, each pulse is described by an M-dimensional vector describing the specific value of the M-dimensional definition parameter corresponding to the pulse. The PDW sequence containing L pulses is an mxl matrix. Similarly, each definition parameter selects a corresponding modulation type from a fixed number of modulation type sets, and selects a corresponding modulation type, so that a sample of the PDW signal is described by the modulation type corresponding to each definition parameter and the modulation parameter combination; samples of a plurality of PDW signals having different modulation types and combinations of modulation parameters are obtained and labeled to describe the defined parameters and modulation types and combinations of modulation parameters of the signals, forming a data set for training.
S2, constructing a multi-output label set:
S21. Designing labels for the signals. When a signal is labeled, its modulation type and modulation parameter labels are characterized as a sequence: the labels are arranged in order with the modulation type first and the modulation parameters after it, and identifiers are set to characterize the relations among the labels in the label sequence, yielding the label sequence of the signal, which is characterized as follows:
y = (<BOLS>, y_c, <ITV>, y_e,1, <ITV>, y_e,2, …, y_e,k, <EOLS>)
where y_c denotes the modulation type recognition label, y_e,i denote the modulation parameter estimation labels, <BOLS> marks the beginning of the sequence, <EOLS> marks the end of the sequence, and <ITV> separates different labels.
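The label-sequence layout above can be sketched as follows; the identifier tokens <BOLS>, <ITV>, and <EOLS> come from the patent, while the modulation type name and parameter values are hypothetical examples:

```python
# Sketch of the layout y = (<BOLS>, y_c, <ITV>, y_e1, <ITV>, y_e2, ..., <EOLS>).
def build_label_sequence(mod_type, params):
    """Modulation type first, then its parameters, with <ITV> separators."""
    y = ["<BOLS>", mod_type]
    for p in params:
        y += ["<ITV>", p]
    y.append("<EOLS>")
    return y

# Hypothetical example: a linearly stepped RF with a start value and a step.
y = build_label_sequence("RF_LINEAR", [9000.0, 10.0])
print(y)
# ['<BOLS>', 'RF_LINEAR', '<ITV>', 9000.0, '<ITV>', 10.0, '<EOLS>']
```

Because the number of parameters varies with the modulation type, the resulting sequence naturally has variable length, which is what the recursive decoder output accommodates.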
S22. Quantization-encoding the obtained label sequence. For the modulation parameter labels in the label sequence, the continuous values y_continuous are encoded into discrete values at a certain quantization interval: a quantization interval and an upper quantization limit D_up are set, and the parameter labels are mapped from the continuous space into the discrete space.
The modulation type labels and the identifiers in the label sequence are directly encoded into specified discrete values, yielding a label sequence characterized by discrete values. Because the label sequence contains both the modulation type and the modulation parameter labels, the two tasks of modulation type recognition and modulation parameter estimation are completed simultaneously, and the modulation parameters can have a variable structure.
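The patent's exact quantization formula is not reproduced on this page; the sketch below therefore assumes a simple rounding scheme clipped at the upper limit D_up, with all numeric values hypothetical:

```python
def quantize(y_continuous, d, D_up):
    """Encode a continuous parameter label into a discrete value.
    Assumed scheme (not the patent's exact formula): round to the nearest
    multiple of the quantization interval d, then clip at the upper limit D_up."""
    q = round(y_continuous / d)
    return min(q, D_up)

# Identifiers and modulation type labels map directly to reserved discrete values.
SPECIAL = {"<BOLS>": 0, "<EOLS>": 1, "<ITV>": 2}

print(quantize(9030.0, d=10.0, D_up=1000))  # 903
print(SPECIAL["<ITV>"])                     # 2
```

The upper limit keeps the discrete label vocabulary finite, which is what allows the decoder's Softmax layer to output a probability distribution over it.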
S3. Constructing and training a deep multi-output multi-structure neural network (JMRRE-MOMS) for joint modulation type recognition and modulation parameter estimation. JMRRE-MOMS comprises a data mapping module and a depth feature extraction module. The data mapping module comprises the data mapping of signals, implemented by an LSTM layer, and the data mapping of labels, implemented by an embedding layer and a positional encoding layer. The depth feature extraction module uses a Transformer structure relying on an encoder-decoder framework, in which each encoder and decoder comprises a self-attention layer, a feedforward layer, a residual connection layer, and a normalization layer connected in series:
S31. Data mapping module
The data mapping module includes a data mapping for signals and a data mapping for tags.
The data mapping of the signal is implemented based on an LSTM layer. For an input PDW sequence containing L pulses, the LSTM layer output H = (h_1, h_2, …, h_L) is computed, where h_t = LSTM(x_t, h_(t−1)); here L denotes the length of the input feature sequence, x_t denotes the features at time t, and LSTM denotes the LSTM function.
The mapped feature vector is computed from the LSTM layer output H:
X_project = W_H·H + b_H
where W_H denotes a trainable weight parameter and b_H denotes a trainable bias parameter.
The data mapping of the labels is implemented based on an embedding layer and a positional encoding layer, which map the input into a low-dimensional feature vector space and give a preliminary characterization of the temporal features. The embedding layer maps the quantized discrete label sequence y_discretized into a low-dimensional vector space of dimension d_model through an embedding matrix W_E, characterized as follows:
Y_emb = y_discretized · W_E
The positional encoding layer position-encodes the input label sequence to characterize its position information:
PE_(l,2i) = sin(l / 10000^(2i/d_model)),  PE_(l,2i+1) = cos(l / 10000^(2i/d_model))
where PE_(l,2i) denotes the encoding of even positions and PE_(l,2i+1) denotes the encoding of odd positions; together they form the positional encoding result PE, which is then added to the embedding result to obtain the output of the data mapping layer:
Y_project = Y_emb + PE
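A minimal numpy sketch of the label data mapping, assuming the standard sinusoidal positional encoding form (the page's own PE formulas are garbled, so the sin/cos layout is an assumption) and a random embedding matrix in place of the trained W_E:

```python
import numpy as np

def embed_with_position(y_discretized, W_E):
    """Y_project = Y_emb + PE: embedding lookup plus sinusoidal positional
    encoding (standard Transformer form, assumed here)."""
    Y_emb = W_E[y_discretized]               # embedding lookup, shape (L, d_model)
    L, d_model = Y_emb.shape
    pos = np.arange(L)[:, None]              # position l
    i = np.arange(0, d_model, 2)[None, :]    # dimension index 2i
    PE = np.zeros((L, d_model))
    PE[:, 0::2] = np.sin(pos / 10000 ** (i / d_model))  # even dimensions
    PE[:, 1::2] = np.cos(pos / 10000 ** (i / d_model))  # odd dimensions
    return Y_emb + PE

rng = np.random.default_rng(0)
W_E = rng.normal(size=(50, 16))              # 50 discrete labels, d_model = 16
Y_project = embed_with_position(np.array([0, 42, 2, 7, 1]), W_E)
print(Y_project.shape)  # (5, 16)
```

Adding PE rather than concatenating it keeps d_model fixed, so the same encoder weights handle label sequences of any length.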
S32. Depth feature extraction module
The depth feature extraction module uses a Transformer structure, relying on an encoder-decoder framework.
X_project first enters the encoder, where feature extraction proceeds sequentially through a self-attention layer, a feedforward layer, a residual connection, and a normalization layer, specifically:
Self-attention layer of the encoder: X_project enters the self-attention layer. The first step in computing self-attention features is to generate three vectors from the input vector of each encoder: a query vector q, a key vector k, and a value vector v. The three vectors are created by multiplying the result of the data mapping layer with three parameter matrices:
q = x_e · W_Q
k = x_e · W_K
v = x_e · W_V
where x_e is the e-th row vector of X_project, and W_Q, W_K, and W_V are three trainable weight matrices;
The second step of the self-attention computation is to compute a score, which determines how much attention is paid to other time steps when encoding the current time step i; the score is obtained as the dot product of the query vector of the current time step with the key vector of another time step j:
s_ij = q_i · k_j
where q_i denotes the query vector of the current time step i and k_j denotes the key vector of the other time step j;
The third step in computing self-attention is to divide each score by the square root of the key-vector dimension, √d_k, and normalize the scores over all positions through a softmax layer, so that the resulting weights are positive and sum to 1, i.e. the following operation is performed:
α_ij = softmax_j(s_ij / √d_k)
The fourth step in computing self-attention is to weight each value vector by its softmax weight and sum them:
z_i = Σ_j α_ij · v_j
where v_j denotes the value vector of time step j;
For fast computation, the above self-attention steps are often performed in matrix form:
Z = softmax(Q·K^T / √d_k) · V
where Q = (q_1, q_2, …, q_T)^T, K = (k_1, k_2, …, k_T)^T, V = (v_1, v_2, …, v_T)^T, and Z = (z_1, z_2, …, z_T)^T;
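The matrix form above can be sketched directly in numpy; the projection matrices here are random stand-ins for the trained W_Q, W_K, W_V:

```python
import numpy as np

def self_attention(X, W_Q, W_K, W_V):
    """Z = softmax(Q K^T / sqrt(d_k)) V, the matrix form of the four steps above."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # dot-product scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V                                     # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))                   # T = 6 time steps, model dim 8
W_Q, W_K, W_V = (rng.normal(size=(8, 4)) for _ in range(3))
Z = self_attention(X, W_Q, W_K, W_V)
print(Z.shape)  # (6, 4)
```

Subtracting the row-wise maximum before exponentiating is a standard numerical-stability trick; it leaves the softmax weights unchanged.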
Z is then fed into the feedforward layer for feature extraction; the feedforward layer comprises two linear transformation layers and is expressed as:
FC(z_i) = max(0, z_i·W_1 + b_1)·W_2 + b_2
where z_i denotes a row vector of Z, W_1 and W_2 are two trainable parameter matrices, and b_1 and b_2 are trainable bias parameters.
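The feedforward expression above is two affine maps with a ReLU between them; a minimal sketch with random stand-in weights:

```python
import numpy as np

def feed_forward(Z, W1, b1, W2, b2):
    """FC(z) = max(0, z W1 + b1) W2 + b2, applied row-wise to Z."""
    return np.maximum(0.0, Z @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 4))                 # T = 6 rows, model dim 4
out = feed_forward(Z,
                   rng.normal(size=(4, 16)), np.zeros(16),   # expand
                   rng.normal(size=(16, 4)), np.zeros(4))    # project back
print(out.shape)  # (6, 4)
```

The output dimension matches the input dimension, which is what lets the residual connection x + Sublayer(x) of the next step be formed.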
Residual connection and normalization are then applied to the output of the feedforward layer as LayerNorm(x + Sublayer(x)), where Sublayer(·) denotes the output of the self-attention layer or the feedforward layer, yielding the output Z* of the first encoder.
Z* then passes sequentially through the cascaded encoders, each of which has the same structure comprising a self-attention layer, a feedforward layer, a residual connection layer, and a normalization layer, obtaining the query vector q and key vector k of the last encoder.
The decoder is similar in structure to the encoder but includes two sets of self-attention and residual connection and normalization layers, Y project The first set of self-attention and residual connections and normalization layers entering the decoder, the calculation method of each network layer is the same as in the encoder, except that the query vector q and key vector k of the second self-attention layer are calculated in the last encoder.
The first element y_1 of the tag sequence y is label-mapped to Y_project and enters the cascaded decoder, which predicts the second element y_2 of the tag sequence; then (y_1, y_2) is label-mapped and the decoder predicts the third element y_3, and so on, until the last element y_L of the tag is computed. Each element of the tag is thus predicted jointly from the input and the first m-1 elements: y_m = f(x, y_1, ..., y_{m-1}).
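The recursive element-by-element prediction can be sketched independently of the network itself (the trained decoder is replaced here by a toy callable, so the tag values are purely illustrative):

```python
def greedy_decode(predict_next, bols=56, eols=57, max_len=20):
    """Recursive tag prediction: start from <BOLS>, repeatedly feed the
    partial sequence back in until <EOLS> is produced or max_len is hit.
    `predict_next` stands in for the trained decoder plus Softmax.
    """
    y = [bols]
    while y[-1] != eols and len(y) < max_len:
        y.append(predict_next(y))
    return y

# Toy stand-in for the decoder: replays a fixed target sequence, so each
# "prediction" is just the next element of that sequence.
target = [56, 51, 58, 20, 58, 10, 50, 25, 35, 57]
decoded = greedy_decode(lambda prefix: target[len(prefix)])
```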
S4. Training of the deep multi-output multi-structure neural network
A loss function for training the deep multi-output multi-structure neural network is constructed. The data set obtained in step S1 is input into the deep multi-output multi-structure neural network (JMRRE-MOMS), and the Softmax layer outputs a probability-distribution sequence for the tag. Using the true tag sequence y and the predicted tag probability-distribution sequence ŷ, the cross-entropy loss function is constructed: Loss = -(1/L) · Σ_{l=1}^{L} log ŷ_l(y_l), where ŷ_l(y_l) is the predicted probability of the true tag at position l.
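The cross-entropy between the Softmax probability sequence and the true tag sequence can be sketched as follows (a minimal illustration; the tiny vocabulary and probability values are invented for demonstration):

```python
import numpy as np

def sequence_cross_entropy(probs, y):
    """Mean negative log-probability of the true tag ids y (length L) under
    the per-position Softmax distributions probs (shape L x vocab)."""
    idx = np.arange(len(y))
    return float(-np.mean(np.log(probs[idx, y] + 1e-12)))

# Invented toy example: vocabulary of 4 tags, sequence of length 3.
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.10, 0.80, 0.05, 0.05],
                  [0.25, 0.25, 0.25, 0.25]])
y = np.array([0, 1, 3])
loss = sequence_cross_entropy(probs, y)   # -(ln 0.7 + ln 0.8 + ln 0.25) / 3
```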
S5. Testing of the deep multi-output multi-structure neural network
The complex radar signal to be tested is arranged into the same sample format as the data set and input into the deep multi-output multi-structure neural network JMRRE-MOMS for joint modulation-type identification and modulation-parameter estimation based on multi-output learning. The data x to be tested enters the encoder, and the start identifier <BOLS> enters the decoder as the first element of the tag sequence; the second element of the tag sequence is predicted, the predicted element is combined with the previous element, and the decoder predicts the next element again:
The prediction of the tag sequence is completed in this recursive manner until the end identifier <EOLS> is detected, marking the end of the predicted tag sequence ŷ for the data x under test. The recognition result of the modulation type and the estimation result of the modulation parameters are then obtained; in this way, tag prediction of variable-length, variable-structure sequences is realized.
Examples:
S1. First, a sequence-sample data set for model training is generated from recorded or simulated data:
S11. According to the corresponding domain expert knowledge, data set D1 with 10 different data lengths is generated by cleaning, extraction or simulation. The data set comprises 10 data subsets, corresponding to the 10 different data lengths [50, 75, 100, 125, 150, 200, 250, 300, 350, 400]. Each training sample in a data subset corresponds to one combination of modulation types; the samples are PDW signals, inter-pulse modulated through the PRI state-defining parameter. For example, P_i = (p_1, p_2, ..., p_n) ∈ R^{1×n} is the i-th sample in data set D1, where n is the sequence length and each sampling point contains the PRI data feature; the four PRI modulation types are shown in FIG. 1;
S12. Data set D2, containing different numbers of training samples, is generated by simulation according to the corresponding domain expert knowledge. D2 consists of 13 data subsets, defined as 13 scenes whose numbers of training samples decrease from scene 1 to scene 13: scene 1 contains 48000 training samples, scene 2 is reduced to 44000, scene 3 to 40000, and so on, decreasing by 4000 samples per scene; as the number of samples decreases, feature extraction becomes more difficult. The data set is obtained by screening the data subset of data length 100 in D1.
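A sample generator for data set D1 might look like the sketch below. The four PRI modulation types of FIG. 1 are not named in the text, so a jittered PRI is assumed purely for illustration, as are the mean PRI and jitter ratio:

```python
import numpy as np

def make_pri_sample(n=100, pri_mean=1000.0, jitter=0.05, seed=0):
    """Hypothetical PDW sample: a length-n PRI sequence under jittered
    modulation. The four PRI types of FIG. 1 are not named in the text,
    and pri_mean (microseconds) and the jitter ratio are illustrative only."""
    rng = np.random.default_rng(seed)
    return pri_mean * (1.0 + jitter * rng.uniform(-1.0, 1.0, size=n))

# Data set D1 uses the sequence lengths below; one subset per length.
lengths = [50, 75, 100, 125, 150, 200, 250, 300, 350, 400]
sample = make_pri_sample(n=lengths[2])   # a length-100 sample, as used for D2
```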
S21. Labels are designed for the PDW data set generated in S1: the modulation types and the modulation-parameter labels corresponding to the several defining parameters of a signal are represented in sequence form, with the modulation type and the modulation-parameter labels of the same defining parameter arranged adjacently and the modulation type placed first, and identifiers are set to represent the relations among the labels in the tag sequence; the label design for each modulation type is shown in FIG. 5. Taking the tag sequence of a 4-point Spread modulation type as an example,
y= (< BOLS >, spread, < ITV >,4, < ITV >,2,10,5,7, < EOLS >)
S22. The obtained tag sequence is quantization-encoded. For the modulation-parameter labels in the tag sequence, continuous values are encoded into discrete values at a certain quantization interval; with the quantization interval set to d = 0.2 and the upper quantization limit set to b = 10, the parameter labels are mapped from the continuous space into the discrete space.
for the modulation type tag and the identifier in the tag sequence, the tag sequence is directly encoded into a specified discrete value to obtain a quantized encoded tag sequence, as shown in fig. 6:
y=(56,51,58,20,58,10,50,25,35,57)
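A quantizer consistent with this example can be sketched as follows. With d = 0.2, the parameter values 4, 2, 10, 5 and 7 of the Spread tag sequence map to 20, 10, 50, 25 and 35, matching the parameter entries of the encoded sequence above; the codes for the modulation type and identifiers (56, 51, 58, 57) come from a separate codebook the text does not specify, so the `offset` hook below is only a hypothetical extension point:

```python
def quantize_label(value, d=0.2, b=10.0, offset=0):
    """Hypothetical parameter-label quantizer: clip the continuous value to
    the upper limit b, then map it to a discrete index at interval d.
    Round-to-nearest and the optional codebook offset are assumptions."""
    v = min(max(float(value), 0.0), b)
    return offset + int(round(v / d))

# Parameter values from the Spread tag sequence of S21.
codes = [quantize_label(p) for p in (4, 2, 10, 5, 7)]
```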
S3. The deep multi-output multi-structure neural network JMRRE-MOMS for joint modulation-type identification and modulation-parameter estimation is constructed and trained; JMRRE-MOMS comprises a data mapping module and a depth feature extraction module, as shown in FIG. 2.
S31. The data mapping module comprises data mapping for signals and data mapping for labels. The data mapping of signals is implemented by an LSTM layer, which receives the input signal X and extracts its local temporal features to obtain the mapped X_project. The data mapping of labels is implemented by an embedding layer and a position-coding layer: the tag sequence is mapped to a low-dimensional feature space by the embedding layer, and the position-coding layer then adds temporal position information to the embedded tag sequence to obtain the mapped Y_project.
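The label-mapping branch (embedding plus position coding) can be sketched in NumPy. The standard sinusoidal position coding of the Transformer is assumed here, since the text does not give the formula, and the embedding table is random for illustration; the LSTM branch for the signal is omitted:

```python
import numpy as np

def positional_encoding(L, d_model):
    """Standard sinusoidal position coding (assumed, as the text omits the
    formula): sine on even dimensions, cosine on odd dimensions."""
    pos = np.arange(L)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((L, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def map_labels(y, E):
    # embedding-layer lookup, then add temporal position information
    return E[np.array(y)] + positional_encoding(len(y), E.shape[1])

rng = np.random.default_rng(2)
vocab, d_model = 60, 16                      # illustrative sizes
E = rng.standard_normal((vocab, d_model))    # random embedding table
Y_project = map_labels([56, 51, 58, 20], E)
```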
S32. The depth feature extraction module is constructed. It uses a Transformer structure based on the encoder-decoder framework shown in FIG. 3: X_project enters the encoder and Y_project enters the decoder, where the query vector q and key vector k obtained in the encoder enter the second self-attention layer of the decoder.
Finally, the decoder outputs the predicted tag sequence recursively. First, 56 is mapped to Y_project as the first element and enters the decoder, which predicts the second element, 51; the two are then combined into (56, 51), mapped to Y_project, and fed into the decoder again to predict the third element, 58. That is, the combination of the current tag-sequence element output by the decoder and the previously predicted elements re-enters the decoder as input,
until the end identifier 57 is detected, marking the end of the predicted tag sequence ŷ for the data x under test. The recognition result of the modulation type and the estimation result of the modulation parameters are then obtained; in this way, tag prediction of variable-length, variable-structure sequences is realized, as shown in FIG. 4.
In summary, the above is merely an example of the present invention based on selected fixed modulation types and modulation parameters, and is not intended to limit the scope of the invention. The cognitive-radar working states based on different combinations of modulation types over multiple PDW parameters and different modulation-parameter definitions, the tag-sequence design and quantization-coding method for the working-state identification task, and the multi-output multi-structure neural network model based on the Transformer structure are the core points of the invention. Methods for joint modulation-type identification and modulation-parameter estimation of cognitive radar signals formed by corresponding modification, replacement, improvement and similar operations all fall within the protection scope of the invention.
Claims (6)
1. A Transformer-based cognitive radar joint modulation recognition and parameter estimation method, characterized by comprising the following steps:
S1. Constructing a data set for training:
s11, when an input signal for training is an intra-pulse waveform signal, selecting an intra-pulse modulation type from a fixed number of modulation type sets, selecting corresponding modulation parameters, and obtaining a sample of the waveform signal, wherein the sample is described by the modulation type and the corresponding modulation parameters; obtaining a plurality of samples of waveform signals having different modulation types and modulation parameter combinations, and labeling modulation types and modulation parameters describing the signals to form a data set for training;
s12, when the input signals used for training are PDW signals, each PDW signal is described by a definition parameter, each pulse in the PDW signals is described by an M-dimensional vector, the vector describes the specific value of the M-dimensional definition parameter corresponding to the pulse, and M represents the number of the definition parameters; each definition parameter selects a corresponding modulation type from a fixed number of modulation type sets, and selects a corresponding modulation parameter from a corresponding parameter space, and then a sample of a PDW signal is described by the modulation type and modulation parameter combination corresponding to each definition parameter; obtaining a plurality of samples of PDW signals with different modulation types and modulation parameter combinations, and labeling definition parameters, modulation types and modulation parameter combinations describing the signals to form a data set for training;
S2. Constructing a label set with multiple outputs and multiple structures:
S21. Designing labels for the signals. When a signal is labeled, for an intra-pulse waveform signal, the modulation type and modulation-parameter labels of the signal are represented in sequence form, with the modulation type placed first and the modulation parameters after it, and identifiers are set to represent the relations among the labels, yielding the tag sequence of the waveform signal; for a PDW signal, the modulation types and modulation-parameter labels corresponding to the several defining parameters of the signal are represented in sequence form, with the modulation type and modulation-parameter labels of the same defining parameter arranged adjacently, the modulation type first and the modulation parameters after it, and identifiers are set to represent the relations among the labels, yielding the tag sequence of the PDW signal;
s22, carrying out quantization coding on the obtained tag sequence: when the signal is quantized and encoded, continuous values are encoded into discrete values at a certain quantization interval for modulation parameter tags in a tag sequence; for the modulation type label and the identifier in the label sequence, directly encoding the modulation type label and the identifier into a specified discrete value; obtaining a tag sequence represented by a discrete value after quantization coding;
S3. Constructing a deep multi-output multi-structure neural network JMRRE-MOMS, wherein the JMRRE-MOMS comprises a data mapping module and a depth feature extraction module, specifically as follows:
S31. The data mapping module comprises data mapping of signals and data mapping of labels, specifically as follows:
the data mapping of the signals is realized based on an LSTM layer, obtaining the mapped feature vector X_project;
the data mapping of the labels is realized by an embedding layer and a position-coding layer: the tag sequence is mapped to a low-dimensional feature space by the embedding layer, temporal position information is then added to the embedded tag sequence by the position-coding layer, and the mapping result Y_project of the labels is obtained;
S32. The depth feature extraction module uses a Transformer structure based on an encoder-decoder framework, specifically as follows:
X_project first enters the encoder, where feature extraction is carried out sequentially through a self-attention layer, a feed-forward layer, a residual connection layer and a normalization layer, specifically as follows:
X_project enters the self-attention layer; the first step of computing self-attention features is to generate three vectors from each encoder input vector, namely a query vector q, a key vector k and a value vector v;
the second step of self-attention calculation is to calculate a score, which indicates the attention degree of other time steps when encoding the current time step i, and the score is obtained by performing dot product on the query vector of the current time step and the key vector of the other time step j:
where q_i denotes the query vector of the current time step i and k_j denotes the key vector of another time step j;
the third step in calculating self-attention is to divide the score by the square root of the key-vector dimension, sqrt(d_k), and normalize the scores of all positions through the softmax layer; the resulting scores are all positive and sum to 1;
the fourth step in calculating self-attention is to multiply each value vector by its softmax score and sum the results:
where v_i denotes the value vector of the current time step i;
then, Z is sent into a feedforward layer for feature extraction, wherein the feedforward layer comprises two linear transformation layers, and a ReLU activation function is used for obtaining an output result of the feedforward layer;
then, residual connection and normalization are applied to the output of the feed-forward layer to obtain the output Z* of the first encoder;
Z* then passes sequentially through the cascaded encoders, each with the same structure (a self-attention layer, a feed-forward layer, a residual connection layer and a normalization layer), to obtain the query vector q and key vector k of the last encoder;
the decoder comprises two sets of self-attention layers with residual connection and normalization layers; Y_project enters the first set of self-attention, residual connection and normalization layers, and the computation of each network layer is the same as in the encoder, except that the query vector q and key vector k of the second self-attention layer are those computed in the last encoder;
wherein the first element y_1 of the tag sequence y is label-mapped and sent into the cascaded decoder, which predicts the second element y_2 of the tag sequence; then (y_1, y_2) is label-mapped and the decoder predicts the third element y_3, and so on, until the last element y_L of the tag is computed;
S4. The data set obtained in step S1 is input into the deep multi-output multi-structure neural network JMRRE-MOMS, and the Softmax layer outputs a probability-distribution sequence for the tag; a cross-entropy loss function is constructed using the true tag sequence y and the predicted tag probability-distribution sequence ŷ; the deep multi-output multi-structure neural network is trained based on this loss function;
S5. Testing the deep multi-output multi-structure neural network:
the radar signal x to be tested is arranged into the same sample format as the data set and input into the multi-output-learning-based network JMRRE-MOMS; the data x to be tested enters the first encoder, and the start identifier <BOLS> enters the decoder as the first element of the tag sequence; the second element of the tag sequence is predicted, the predicted element is combined with the previous element, and the decoder predicts the next element again; the prediction of the tag sequence is completed in this recursive manner until the end identifier <EOLS> is detected, marking the end of the predicted tag sequence ŷ for the data x under test, after which the recognition result of the modulation type and the estimation result of the modulation parameters are obtained.
2. The Transformer-based cognitive radar joint modulation recognition and parameter estimation method according to claim 1, wherein in S21, identifiers marking the beginning and the end of the sequence are provided at the beginning and the end of the tag sequence, respectively, and a separator is provided between different labels.
3. The Transformer-based cognitive radar joint modulation recognition and parameter estimation method according to claim 1, wherein in S22, when the obtained tag sequence is quantization-encoded, the continuous value y_continuous of each modulation-parameter label in the tag sequence is encoded into a discrete value at a set quantization interval: a quantization interval D and an upper quantization limit D_up are set, and the parameter labels are mapped from the continuous space into the discrete space:
4. The Transformer-based cognitive radar joint modulation recognition and parameter estimation method according to claim 1 or 3, wherein in S31, the position-coding layer performs position coding on the input tag sequence as follows:
PE_(l,2i) represents the encoding of even positions and PE_(l,2i+1) represents the encoding of odd positions; together they constitute the position-encoding result PE.
5. The Transformer-based cognitive radar joint modulation recognition and parameter estimation method according to claim 1, wherein in S32, the self-attention computation is performed in matrix form:
where Q = (q_1, q_2, ..., q_T)^T, K = (k_1, k_2, ..., k_T)^T, V = (v_1, v_2, ..., v_T)^T, and Z = (z_1, z_2, ..., z_T)^T.
6. The Transformer-based cognitive radar joint modulation recognition and parameter estimation method according to claim 1, wherein the output result of the feed-forward layer is expressed as:
FC(z i )=max(0,z i W 1 +b 1 )W 2 +b 2
where z_i denotes a row vector of Z, W_1 and W_2 are two trainable parameter matrices, and b_1 and b_2 are trainable bias parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310246346.9A CN116068520A (en) | 2023-03-07 | 2023-03-07 | Cognitive radar joint modulation recognition and parameter estimation method based on transducer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310246346.9A CN116068520A (en) | 2023-03-07 | 2023-03-07 | Cognitive radar joint modulation recognition and parameter estimation method based on transducer |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116068520A true CN116068520A (en) | 2023-05-05 |
Family
ID=86178666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310246346.9A Pending CN116068520A (en) | 2023-03-07 | 2023-03-07 | Cognitive radar joint modulation recognition and parameter estimation method based on transducer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116068520A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116432703A (en) * | 2023-06-12 | 2023-07-14 | 成都大学 | Pulse height estimation method, system and terminal based on composite neural network model |
CN116432703B (en) * | 2023-06-12 | 2023-08-29 | 成都大学 | Pulse height estimation method, system and terminal based on composite neural network model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109165664B (en) | Attribute-missing data set completion and prediction method based on generation of countermeasure network | |
CN111079836B (en) | Process data fault classification method based on pseudo label method and weak supervised learning | |
CN111444790B (en) | Pulse-level intelligent identification method for multifunctional radar working mode sequence | |
CN112185104B (en) | Traffic big data restoration method based on countermeasure autoencoder | |
CN110321401B (en) | Space-time data correlation deep learning method | |
CN109471074B (en) | Radar radiation source identification method based on singular value decomposition and one-dimensional CNN network | |
CN113406588B (en) | Joint modulation type identification and parameter estimation method for cognitive radar signals | |
CN113607325B (en) | Intelligent monitoring method and system for looseness positioning of steel structure bolt group | |
CN116068520A (en) | Cognitive radar joint modulation recognition and parameter estimation method based on transducer | |
CN111340076B (en) | Zero sample identification method for unknown mode of radar target of new system | |
CN113759323B (en) | Signal sorting method and device based on improved K-Means joint convolution self-encoder | |
CN114528755A (en) | Power equipment fault detection model based on attention mechanism combined with GRU | |
CN114239935A (en) | Prediction method for non-uniform track sequence | |
CN116643246A (en) | Deep clustering radar pulse signal sorting method based on inner product distance measurement | |
CN114722950A (en) | Multi-modal multivariate time sequence automatic classification method and device | |
CN114488069A (en) | Radar high-resolution range profile identification method based on graph neural network | |
CN114707635A (en) | Model construction method and device based on network architecture search and storage medium | |
CN114371474A (en) | Intelligent radar signal sorting method and system based on convolution noise reduction self-encoder | |
CN113537240B (en) | Deformation zone intelligent extraction method and system based on radar sequence image | |
CN117195031A (en) | Electromagnetic radiation source individual identification method based on neural network and knowledge-graph dual-channel system | |
CN113030849A (en) | Near-field source positioning method based on self-encoder and parallel network | |
CN113688770A (en) | Long-term wind pressure missing data completion method and device for high-rise building | |
CN113269217A (en) | Radar target classification method based on Fisher criterion | |
CN113516242B (en) | Self-attention mechanism-based through-wall radar human body action recognition method | |
CN116340533B (en) | Satellite-borne electromagnetic spectrum big data intelligent processing system based on knowledge graph |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||