CN114114227B - Radar signal modulation type identification method based on dual-channel heterogeneous fusion network - Google Patents

Radar signal modulation type identification method based on dual-channel heterogeneous fusion network

Info

Publication number
CN114114227B
CN114114227B (application CN202210096729.8A; published as CN114114227A)
Authority
CN
China
Prior art keywords
group
data
radar signal
signal modulation
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210096729.8A
Other languages
Chinese (zh)
Other versions
CN114114227A (en)
Inventor
梁红俊
陈加根
周闪闪
李静
Current Assignee
Anhui Jinghuai Jianrui Electronic Technology Co ltd
Original Assignee
Anhui Jinghuai Jianrui Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Jinghuai Jianrui Electronic Technology Co ltd filed Critical Anhui Jinghuai Jianrui Electronic Technology Co ltd
Priority to CN202210096729.8A
Publication of CN114114227A
Application granted
Publication of CN114114227B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/02 Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/02 Details of systems according to group G01S13/00
    • G01S 7/021 Auxiliary means for detecting or identifying radar signals or the like, e.g. radar jamming signals
    • G01S 7/41 Using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S 7/417 Involving the use of neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 27/00 Modulated-carrier systems
    • H04L 27/0012 Arrangements for identifying the type of modulation

Abstract

The embodiment of the invention provides a radar signal modulation type identification method based on a dual-channel heterogeneous fusion network, comprising the following steps: preprocessing pulse data of various radar signal modulation types to generate fine-feature spectrograms for each modulation type; training on the fine-feature spectrogram of each radar signal modulation type with a dual-channel heterogeneous fusion network model, and extracting and storing each modulation type together with its feature value; and extracting the feature value of the radar signal to be identified and matching it against the stored modulation types, thereby identifying the modulation type of the radar signal to be identified. The embodiment of the invention improves the recognition rate and overcomes the poor generalization of traditional recognition methods.

Description

Radar signal modulation type identification method based on dual-channel heterogeneous fusion network
Technical Field
The embodiment of the invention relates to the technical field of radar signal reconnaissance, in particular to a method for identifying a radar signal modulation type.
Background
The sorting and identification of radar signals mainly rely on the relations and differences between unknown radar signal parameters. A conventional parameter template matching method is generally used to process simple radar signals with few categories and achieves high identification efficiency, but it shows insufficient capability and poor generalization when handling today's increasingly complex radar radiation source identification problems; for radar signals with complex systems, the sorting and identification workload is particularly large.
Deep learning has self-learning and self-adaptive capability under uncertainty, can fit any continuous function, is easy to construct and algorithmically simple, and can learn directly and automatically from raw data to obtain more abstract high-level representations. The learning process needs no human intervention; model parameters are adjusted reasonably by combining prior knowledge with experiments, so that automatic signal identification is achieved. Deep learning can effectively process large amounts of signal data with complex rules that are otherwise hard to recognize, realize multi-target classification, adapt to the diversity and ambiguity of signals, and offers a degree of noise immunity.
Disclosure of Invention
Aiming at the problems of insufficient capacity, poor generalization and the like when radar signals are identified by a conventional parameter template matching method, the embodiment of the invention provides a radar signal modulation type identification method based on a dual-channel heterogeneous fusion network, which comprises the following steps:
preprocessing pulse data of various radar signal modulation types to generate fine characteristic frequency spectrograms of the various radar signal modulation types;
training the fine characteristic frequency spectrogram of each radar signal modulation type by adopting a dual-channel heterogeneous fusion network model, and extracting and storing each radar signal modulation type and its characteristic value;
and extracting the characteristic value of the radar signal modulation type to be identified, and matching the characteristic value with the stored various radar signal modulation types, thereby identifying the modulation type of the radar signal to be identified.
Further, the step of preprocessing the pulse data of various radar signal modulation types includes: extracting pulse data, determining the positions of upper and lower edges of the pulse and segmenting subtle features.
Further, the extracting of pulse data includes: using a pulse search algorithm to find 20 consecutive rising or falling pulse samples and dividing them into group A data and group B data; wherein,
the group A data specifically comprises: the samples greater than 0, divided into the Ax group and recorded as Ax(1), Ax(2), Ax(3) … Ax(i); the corresponding positions form the Ay group, recorded as Ay(1), Ay(2), Ay(3) … Ay(i);
the group B data specifically comprises: the samples less than 0, divided into the Bx group and recorded as Bx(1), Bx(2), Bx(3) … Bx(i); the corresponding positions form the By group, recorded as By(1), By(2), By(3) … By(i).
Further, the step of determining the positions of the upper and lower edges of the pulse comprises:
a. according to the group A data and group B data, curve fitting is performed on each to obtain fitted curves A and B;
according to a quadratic functionp(x)=a 0 +a 1 x+a 2 x 2 And respectively performing piecewise curve fitting on the group A and the group B, wherein,p(x)the function of the second degree is represented,xdata representing Ax groups and Bx groups,a 0 、a 1 anda 2 a constant, a first order coefficient and a second order coefficient respectively representing a quadratic function; substituting the Ax group data and the Bx group data intop(x),Find a0、a1And a2
b. solving for the mean square error of the fitted curves A and B;
according to the formula
Figure 819180DEST_PATH_IMAGE001
The mean square error of the fitted curve a is found, wherein,
Figure 425742DEST_PATH_IMAGE002
is shown asiThe mean square error of the individual data,p(x i )is shown asiThe values of the curve fit of the individual data,y i denotes the second in Ay groupiA piece of data; according to the method, the mean square error of the fitted curve B is obtained;
c. the fitted curve A is differentiated, and the signs of the derivatives over the Ax and Ay groups are examined:
when the derivatives of the Ax group and the Ay group are simultaneously positive or simultaneously negative, the mean square error of fitted curve A is judged qualified and a group A mark is output; otherwise, the processing is aborted and the pulse data are re-extracted;
the fitted curve B is differentiated, and the same operation as for group A is performed on the group B data;
d. judging the consistency of the group A and group B marks:
when the A and B marks agree, the marked segment is taken as the pulse rising edge position;
when the A and B marks do not agree, the marked segment is taken as the pulse falling edge position.
Further, the segmenting of subtle features comprises: generating the spectrum amplitude, searching for the spectrum amplitude maximum, and max-min normalization; wherein,
generating the spectrum amplitude specifically comprises: intercepting effective single-pulse segments according to the determined pulse rising edge and falling edge positions, and applying a Fast Fourier Transform (FFT) to each to generate the spectrum amplitude;
finding the spectrum amplitude maximum specifically comprises: performing absolute-value normalization on the spectrum amplitude, eliminating clutter frequencies, searching for the spectrum amplitude maximum, and extracting 128 frequency points on each side of the maximum position to form a new spectrogram;
the max-min normalization specifically comprises: performing max-min normalization at a ratio of 1/100 on the new spectrogram to obtain the fine-feature spectrogram.
Further, the dual-channel heterogeneous fusion network model is a neural network whose basic unit is a convolution + batch normalization + rectified linear unit (Conv+BN+ReLU, CBR) block;
training the fine-feature spectrograms of the various radar signal modulation types with the dual-channel heterogeneous fusion network model comprises the following steps: feature region separation, backbone and sub-network extraction, feature fusion, and prediction output.
Further, the step of feature region separation comprises:
acquiring a preprocessed spectrogram;
converting the spectrogram into a standard image format of 320x320;
splitting the spectrogram into an upper-part fine feature spectrum and a lower-part fine feature spectrum; wherein,
the upper-part fine feature spectrum comprises: the center-frequency spectral-line part in the upper 4/5 of the spectrogram, from which the middle 220x220 region is selected;
the lower-part fine feature spectrum comprises: the noise floor and other spectral components near the center frequency in the bottom 1/5 of the spectrogram, from which the middle row of 1x320 pixels is selected.
Further, the feature extraction comprises a main (backbone) network and a sub-network; wherein,
the step of the main network comprises: performing primary contour feature extraction and two dimension reductions on the upper-part fine feature spectrum with a convolutional neural network, then performing residual deep feature extraction, merging through fully connected layers, and outputting a one-dimensional feature value of the upper-part fine feature spectrum;
the step of the sub-network comprises: performing primary contour feature extraction on the lower-part fine feature spectrum with a convolutional neural network, then feeding it into a recurrent network for lateral correlation feature extraction, and finally reshaping and outputting a one-dimensional feature value of the lower-part fine feature spectrum.
Further, the feature fusion step includes:
directly carrying out end-to-end splicing fusion on the central frequency one-dimensional characteristic value extracted by the main network and the base frequency one-dimensional characteristic value extracted by the sub-network to obtain a one-dimensional characteristic value of the spectrogram;
and sending the radar signal modulation type and the one-dimensional characteristic value of the spectrogram into a characteristic classifier for storage.
Further, the prediction output includes: the feature classifier matches the feature value extracted during training against the feature values stored in the classifier to judge whether the match succeeds;
if the characteristic values are successfully matched, identifying and outputting the radar signal modulation type of the training;
and if the characteristic value matching is unsuccessful, feeding back the radar signal modulation type and the characteristic value of the training to the dual-channel heterogeneous fusion network model, and adding an unknown radar signal modulation type.
The embodiment of the invention has the following beneficial technical effects: the scheme of the invention can identify the modulation type of the unknown radar signal, and the identification accuracy is improved by 10 percent compared with that of a single-channel common network model.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description in the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart provided by an embodiment of the present invention;
FIG. 2 is a graph of spectral magnitudes after Fourier transform in accordance with an embodiment of the present invention;
FIG. 3 is a spectrum diagram of 128 spectrum points around the maximum value in accordance with an embodiment of the present invention;
FIG. 4 is a graph of maximum and minimum normalized spectra of an embodiment 1/100 of the present invention;
FIG. 5 is a diagram of a two-channel network heterogeneous model architecture according to an embodiment of the present invention;
FIG. 6 is a structural diagram of a CBR according to an embodiment of the present invention;
FIG. 7 is a spectrum distribution diagram of modulation types according to an embodiment of the present invention;
FIG. 8 is an upper-part fine feature spectrum according to an embodiment of the present invention;
FIG. 9 is a lower-part fine feature spectrum according to an embodiment of the present invention;
FIG. 10 is a diagram of a backbone network hierarchy in accordance with an embodiment of the present invention;
FIG. 11 is an LSTM sub-network hierarchy diagram according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present invention and the above-described drawings, are intended to cover a non-exclusive inclusion, such that a process, method, or article that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, or article.
Figure 1 is a main flow chart for identifying the type of modulation of a radar signal according to the present embodiment,
in step S1, the pulse data of various radar signal modulation types are preprocessed to generate fine feature spectrograms of the various radar signal modulation types. The preprocessing step comprises the steps of extracting pulse data, determining the positions of the upper edge and the lower edge of a pulse and segmenting subtle features:
1. The extracting of pulse data includes: using a pulse search algorithm to find 20 consecutive rising or falling pulse samples and dividing them into group A data and group B data, wherein,
the group A data specifically comprises: the samples greater than 0, divided into the Ax group and recorded as Ax(1), Ax(2), Ax(3) … Ax(i); the corresponding positions form the Ay group, recorded as Ay(1), Ay(2), Ay(3) … Ay(i);
the group B data specifically comprises: the samples less than 0, divided into the Bx group and recorded as Bx(1), Bx(2), Bx(3) … Bx(i); the corresponding positions form the By group, recorded as By(1), By(2), By(3) … By(i);
the pulse data are single-pulse data from an intermediate-frequency pulse sequence; if the sequence contains several pulses, each is extracted in turn. The intermediate-frequency pulse sequence comes from the digital receiver of the radar signal.
The data within one pulse are grouped as above; the extraction method for the remaining pulses is the same as for a single pulse and is not repeated.
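The grouping of a run of samples into the A (positive) and B (negative) groups can be sketched in Python; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def group_pulse_samples(samples):
    """Split a run of pulse samples into A (positive) and B (negative) groups.

    Returns (Ax, Ay, Bx, By): Ax/Bx hold the sample values, Ay/By the
    corresponding positions (indices), mirroring the grouping in the text.
    """
    samples = np.asarray(samples, dtype=float)
    positions = np.arange(len(samples))
    pos_mask = samples > 0
    neg_mask = samples < 0
    Ax, Ay = samples[pos_mask], positions[pos_mask]
    Bx, By = samples[neg_mask], positions[neg_mask]
    return Ax, Ay, Bx, By
```

The same routine is applied pulse by pulse when the intermediate-frequency sequence contains several pulses.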
2. The step of determining the position of the upper and lower edges of the pulse comprises:
a. according to the group A data and group B data, curve fitting is performed on each to obtain fitted curves A and B;
according to a quadratic functionp(x)=a 0 +a 1 x+a 2 x 2 And respectively performing piecewise curve fitting on the group A and the group B, wherein,p(x)the function of the second degree is represented,xdata representing Ax and Bx groups,a 0 、a 1 anda 2 a constant, a first order coefficient and a second order coefficient respectively representing a quadratic function; substituting the Ax group data and the Bx group data intop(x),Find a0、a1And a2
b. solving for the mean square error of the fitted curves A and B;
according to the formula
Figure 356789DEST_PATH_IMAGE003
The mean square error of the fitted curve a is found, wherein,
Figure 296406DEST_PATH_IMAGE004
is shown asiThe mean-square error of the individual data,p(x i )is shown asiThe values of the curve fit of the individual data,y i denotes the second in Ay groupiA piece of data; according to the method, the mean square error of the fitted curve B is obtained;
c. the fitted curve A is differentiated, and the signs of the derivatives over the Ax and Ay groups are examined:
when the derivatives of the Ax group and the Ay group are simultaneously positive or simultaneously negative, the mean square error of fitted curve A is judged qualified and a group A mark is output; otherwise, the processing is aborted and the pulse data are re-extracted;
the fitted curve B is differentiated, and the same operation as for group A is performed on the group B data;
d. judging the consistency of the group A and group B marks:
when the A and B marks agree, the marked segment is taken as the pulse rising edge position;
when the A and B marks do not agree, the marked segment is taken as the pulse falling edge position.
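Steps a to c above amount to a quadratic least-squares fit plus a check that the fit's derivative keeps one sign over the group; a minimal NumPy sketch (the numeric bound for a "qualified" mean square error is not given in the text, so none is applied here):

```python
import numpy as np

def fit_and_mark(x, y):
    """Fit p(x) = a0 + a1*x + a2*x^2 to one group and check the edge criterion.

    Returns (coeffs, errors, mark): errors are the per-point squared errors
    sigma_i = (p(x_i) - y_i)^2, and mark is True when the fitted curve's
    derivative keeps one sign over the group (a monotone rising/falling run).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a2, a1, a0 = np.polyfit(x, y, 2)      # polyfit returns highest power first
    p = a0 + a1 * x + a2 * x ** 2
    errors = (p - y) ** 2                 # sigma_i for each data point
    dp = a1 + 2 * a2 * x                  # derivative of the fit at each x
    mark = bool(np.all(dp > 0) or np.all(dp < 0))
    return (a0, a1, a2), errors, mark
```

Running the routine on both groups and comparing the two marks then yields the rising/falling edge decision of step d.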
3. The step of segmenting the subtle features comprises: generating the spectrum amplitude, searching for the maximum, and max-min normalization, wherein,
generating a spectrum amplitude, specifically comprising: intercepting effective single pulse segments according to the determined pulse rising edge position and pulse falling edge position, respectively performing Fast Fourier Transform (FFT), and generating a spectrum amplitude diagram, as shown in fig. 2;
searching for the maximum specifically comprises: performing absolute-value normalization on the spectrum amplitude, eliminating clutter frequencies, searching for the maximum, and extracting 128 frequency points on each side of the maximum position to form a new spectrogram, as shown in fig. 3;
the max-min normalization specifically comprises: performing max-min normalization at a ratio of 1/100 on the new spectrogram to obtain the fine-feature spectrogram shown in FIG. 4.
In step S2, the dual-channel heterogeneous fusion network model trains the fine feature spectrograms of the various radar signal modulation types, and extracts and stores the feature values of the various radar signal modulation types.
The dual-channel heterogeneous fusion network model is a neural network, and as shown in fig. 5, the training and recognition are carried out in two steps:
firstly, data are extracted from the signals to obtain a spectrum distribution diagram, and the features of the upper-half center-frequency region (4/5) and the lower-half noise region (1/5) of the spectrum amplitude are separated; the signals comprise the unmodulated signal (NM), linear frequency modulation signal (LFM), nonlinear frequency modulation signal (NLFM), binary phase-coded signal (BPSK), four-phase coded signal (QPSK) and eight-phase coded signal (8PSK);
secondly, the separated features are sent to the basic units of the dual-channel heterogeneous fusion network model for training, and a feature value and a model are obtained through feature fusion; when a signal to be identified is encountered, the same two-step operation is performed, and the corresponding feature values of each step are matched to uniquely identify the signal.
The basic unit of the dual-channel heterogeneous fusion network model is a convolution + batch normalization + rectified linear unit (Conv+BN+ReLU, CBR) block, as shown in FIG. 6. A convolutional neural network is a feed-forward neural network that contains convolution computations and has a deep structure, and is one of the representative algorithms of deep learning. Batch normalization (BN) is a technique used to improve the performance and stability of artificial neural networks. The rectified linear unit (ReLU) is an activation function commonly used in artificial neural networks, generally referring to the nonlinear functions represented by the ramp function and its variants.
Specifically, the implementation of the dual-channel heterogeneous fusion network model comprises, in order: feature region separation, backbone and sub-network extraction, feature fusion, and prediction output.
1. Feature region separation
(1) Acquiring a preprocessed spectrogram;
(2) Converting the spectrogram into a standard image format of 320x320. The specific process is as follows:
the polylines function of OpenCV is called to draw a 320x320 spectrum distribution diagram from the preprocessed fine-feature spectrogram. The image pixel resolution cannot be less than 320, otherwise the resolution is too low and overlapping details cannot be distinguished; it must also be an integer multiple of 32, so that no information is lost after the network's down-sampling. The OpenCV drawing process filters out fine fluctuations, which amounts to a form of noise reduction; for a spectrum with excessive noise it performs smoothing, further improving the recognition rate and yielding the spectrum distribution diagram of each modulation type shown in fig. 7.
(3) Splitting the spectrogram into an upper portion of the fine feature spectrum and a lower portion of the fine feature spectrum, wherein,
the upper-part fine feature spectrum comprises: the center-frequency spectral-line part in the upper 4/5 of the spectrogram, from which the middle 220x220 region is selected; the upper-part fine feature spectrum is shown in FIG. 8;
the lower-part fine feature spectrum comprises: the noise floor and other spectral components near the center frequency in the bottom 1/5 of the spectrogram, from which the middle row of 1x320 pixels is selected; the lower-part fine feature spectrum is shown in FIG. 9.
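The region split can be sketched with plain array slicing; the exact crop offsets are assumptions where the text only gives the region sizes:

```python
import numpy as np

def split_feature_regions(img):
    """Split a 320x320 spectrogram image into the two network inputs.

    Upper channel: a centered 220x220 crop of the top 4/5 (the
    center-frequency spectral line); lower channel: the middle pixel row
    (1x320) of the bottom 1/5 (noise floor). Centering the crops is one
    plausible reading of the text, not the patent's exact offsets.
    """
    assert img.shape[:2] == (320, 320)
    top = img[:256]                     # upper 4/5 of the image (256 rows)
    r0 = (256 - 220) // 2               # center the 220x220 crop vertically
    c0 = (320 - 220) // 2               # ... and horizontally
    upper = top[r0:r0 + 220, c0:c0 + 220]
    bottom = img[256:]                  # lower 1/5 (64 rows)
    lower = bottom[len(bottom) // 2]    # middle row, shape (320,)
    return upper, lower
```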
2. The feature extraction comprises a main (backbone) network and a sub-network, wherein,
the step of the main network comprises: performing primary contour feature extraction and two dimension reductions on the upper-part fine feature spectrum with a convolutional neural network, performing residual deep feature extraction, merging through fully connected layers, and outputting a one-dimensional feature value of the upper-part fine feature spectrum.
As shown in fig. 10, a specific implementation of the main network for extracting the center-frequency feature is described. The first convolution layer of the upper-part fine feature spectrum uses 8 single-channel 3x3 convolution kernels with stride 2; 8 kernels suffice because the pattern classes have few features, consisting only of simple geometric shapes such as straight lines, angles and arcs. Max-pooling down-sampling with a 2x2 pooling window then reduces the dimension to 110x110; the same convolution and pooling layers follow, further reducing it to 55x55. Behind these sit 3 residual networks with convolution kernel counts of 64->32->16 in sequence, extracting deeper features; finally, fully connected layers merge the result into a one-dimensional feature value.
The step of the sub-network comprises: performing primary contour feature extraction on the lower-part fine feature spectrum with a convolutional neural network, feeding it into a recurrent network for lateral correlation feature extraction, and finally reshaping and outputting a one-dimensional feature value of the lower-part fine feature spectrum.
As shown in fig. 11, a detailed implementation of the sub-network for extracting fundamental-frequency features is described. The lower-part fine feature spectrum has only one convolution layer, with 32 single-channel 1x3 kernels and stride 2; because the relations between fundamental frequencies are variable, more kernels are used to refine the differences. The output is then fed into an LSTM sub-network with 128 hidden units to extract fundamental-frequency correlation features, a dropout layer with rate 50% guards against overfitting, and the one-dimensional feature value is finally reshaped into a uniform format.
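The layer sizes quoted for the two channels can be checked with the standard convolution output-size formula; how the 2x2 pooling interleaves with the strided convolutions is not fully specified in the text, so this trace (strided convolutions alone reproducing the quoted sizes) is one plausible reading:

```python
def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a convolution: floor((size + 2*pad - kernel)/stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Upper channel: two strided 3x3 convolutions reproduce the quoted
# 220 -> 110 -> 55 reduction before the 64->32->16 residual blocks.
s1 = conv_out(220, 3, 2, pad=1)   # first CBR stage
s2 = conv_out(s1, 3, 2, pad=1)    # second CBR stage

# Lower channel: one 1x3 convolution with stride 2 over the 1x320 row,
# feeding the LSTM (128 hidden units) one step per output position.
t = conv_out(320, 3, 2, pad=1)
```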
3. Feature fusion
Directly carrying out head-to-tail splicing fusion on the center frequency one-dimensional characteristic value extracted by the main network and the base frequency one-dimensional characteristic value extracted by the sub network to obtain a one-dimensional characteristic value of the spectrogram;
sending the radar signal modulation type and the one-dimensional characteristic value of the spectrogram into a characteristic classifier for storage;
sequentially calculating, matching, classifying and identifying in the feature classifier, and finally making a decision;
in the model training stage, directly storing the characteristic value and the known modulation type in the characteristic classifier;
and in the stage of identifying the modulation type of the unknown radar signal, the characteristic classifier classifies the characteristic value, matches the characteristic value stored in the characteristic classifier, and outputs the modulation type according to which type the matching belongs to.
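The fusion and matching logic above can be sketched as a toy classifier; the Euclidean nearest-match rule and the threshold are our assumptions, since the text does not specify the classifier's distance measure:

```python
import numpy as np

class FeatureClassifier:
    """Illustrative feature classifier (not the patent's exact classifier).

    Fused feature = head-to-tail concatenation of the backbone (center
    frequency) and sub-network (fundamental frequency) 1-D feature values.
    """

    def __init__(self, threshold=0.5):
        self.store = {}            # modulation type -> stored fused feature
        self.threshold = threshold # assumed match tolerance

    def fuse(self, main_feat, sub_feat):
        return np.concatenate([main_feat, sub_feat])

    def train(self, mod_type, main_feat, sub_feat):
        self.store[mod_type] = self.fuse(main_feat, sub_feat)

    def identify(self, main_feat, sub_feat):
        feat = self.fuse(main_feat, sub_feat)
        best, dist = None, np.inf
        for mod_type, ref in self.store.items():
            d = np.linalg.norm(feat - ref)
            if d < dist:
                best, dist = mod_type, d
        if dist <= self.threshold:
            return best            # successful match
        return None                # unknown type: fed back for retraining
```

A `None` result corresponds to the unsuccessful match of the prediction stage, where the feature value is fed back to the model and an unknown modulation type is added.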
4. Prediction output
The feature classifier matches the feature values extracted in the training with the feature values stored in the feature classifier to judge whether the feature values are successfully matched,
if the characteristic values are successfully matched, identifying and outputting the radar signal modulation type of the training;
and if the characteristic value matching is unsuccessful, feeding back the radar signal modulation type and the characteristic value of the training to the dual-channel heterogeneous fusion network model, and adding an unknown radar signal modulation type.
After training is finished, the samples are tested and the dual-channel heterogeneous fusion network model is output.
In step S3, the feature values of the radar signal modulation type to be identified are extracted and matched against the feature values of the various radar signal modulation types stored in the dual-channel heterogeneous fusion network model, so as to identify the modulation type of the radar signal to be identified.
The feature values of the radar signal modulation type to be identified are extracted according to step S1 and steps 1-3 of step S2;
the feature values are matched according to the method of step 4 in step S2.
In an embodiment of the invention, the dual-channel heterogeneous network, a single-channel plain CNN network and a single-channel LSTM network were each tested on 200 samples for recognition rate. The experimental results show that the dual-channel heterogeneous fusion neural network is more accurate than a convolutional neural network (CNN) processing two-dimensional images alone, and also more accurate than a long short-term memory network (LSTM) processing one-dimensional spectrum data alone, exceeding both by more than 10% on average, since it combines the advantages of the two. The specific data are detailed in Tables 1 to 3:
Table 1: recognition rates of the dual-channel heterogeneous network on 200 test samples (table rendered as an image in the original)
Table 2: recognition rates of the single-channel plain CNN network on 200 test samples (table rendered as an image in the original)
Table 3: recognition rates of the single-channel LSTM network on 200 test samples (table rendered as an image in the original)
In summary, the embodiments of the present invention design a dual-channel heterogeneous fusion network model and store the feature values of the various radar signal modulation types on which the model is trained; the feature value of a radar signal modulation type to be identified is then matched against the feature values stored in the model, thereby identifying the modulation type of the radar signal to be identified. The recognition accuracy of the dual-channel heterogeneous fusion network is on average more than 10% higher than that of the single-channel networks.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, not to limit them. Although the embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and these modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A radar signal modulation type identification method based on a dual-channel heterogeneous fusion network, characterized by comprising the following steps:
S1, preprocessing pulse data of various radar signal modulation types and generating fine feature spectrograms of the various radar signal modulation types;
S2, training the fine feature spectrograms of the various radar signal modulation types with a dual-channel heterogeneous fusion network model, and extracting and storing the various radar signal modulation types and their feature values;
S3, extracting the feature values of the radar signal modulation type to be identified and matching them against the stored feature values of the various radar signal modulation types, so as to identify the radar signal modulation type to be identified;
in step S2, the step of training the fine feature spectrograms of the various radar signal modulation types with the dual-channel heterogeneous fusion network model comprises: feature-region separation, backbone-network extraction, feature fusion, and a prediction output end, wherein the feature-region separation comprises:
acquiring a preprocessed spectrogram;
converting the spectrogram into a standard image format, the standard format being 320×320;
splitting the spectrogram into an upper-part fine feature spectrum and a lower-part fine feature spectrum; wherein:
the upper-part fine feature spectrum comprises: the center-frequency spectral-line part in the upper 4/5 of the spectrogram, from which the middle 220×220 region is selected;
the lower-part fine feature spectrum comprises: the bottom 1/5 of the spectrogram, containing the background noise and other spectral components near the center frequency, from which the middle 1×320 row is selected.
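The region split described above can be sketched with array slicing. The exact row and column offsets of the 220×220 crop and of the 1×320 row are assumptions chosen to be consistent with the stated sizes; the claim fixes only the region proportions.

```python
import numpy as np

spectrogram = np.random.rand(320, 320)      # standard-format spectrogram

# Upper part: top 4/5 of the image, then a centered 220x220 crop.
upper_region = spectrogram[:256]            # 4/5 of 320 rows = 256
r0 = (upper_region.shape[0] - 220) // 2     # assumed: crop is centered
c0 = (320 - 220) // 2
upper_fine = upper_region[r0:r0 + 220, c0:c0 + 220]

# Lower part: bottom 1/5 of the image (64 rows), then its middle 1x320 row.
lower_region = spectrogram[256:]
mid_row = lower_region.shape[0] // 2
lower_fine = lower_region[mid_row:mid_row + 1, :]

print(upper_fine.shape, lower_fine.shape)   # (220, 220) (1, 320)
```

Splitting this way gives the main network a 2-D image around the center frequency while the sub-network receives a 1-D slice dominated by background noise and nearby spectral components.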
2. The method of claim 1, wherein the step of preprocessing the pulse data of each radar signal modulation type comprises: extracting pulse data, determining the positions of the rising and falling edges of the pulse, and segmenting fine features.
3. The method of claim 2, wherein the extracting pulse data comprises: searching, with a pulse search algorithm, for 20 consecutively rising or falling pulse data points and dividing them into group A data and group B data; wherein:
the group A data specifically comprises: the values greater than 0, divided into the Ax group and recorded as Ax(1), Ax(2), Ax(3) … Ax(i); the corresponding positions form the Ay group, recorded as Ay(1), Ay(2), Ay(3) … Ay(i);
the group B data specifically comprises: the values less than 0, divided into the Bx group and recorded as Bx(1), Bx(2), Bx(3) … Bx(i); the corresponding positions form the By group, recorded as By(1), By(2), By(3) … By(i).
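A minimal sketch of this sign-based grouping, assuming the comparison with 0 applies to the sample values (the claim's wording leaves this open): values and their positions are split into the (Ax, Ay) and (Bx, By) groups.

```python
import numpy as np

samples = np.array([0.5, 1.2, -0.3, 2.0, -1.5, -0.2])

ax = samples[samples > 0]        # Ax group: values greater than 0
ay = np.nonzero(samples > 0)[0]  # Ay group: their positions
bx = samples[samples < 0]        # Bx group: values less than 0
by = np.nonzero(samples < 0)[0]  # By group: their positions

print(ax, ay)   # [0.5 1.2 2. ] [0 1 3]
print(bx, by)   # [-0.3 -1.5 -0.2] [2 4 5]
```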
4. The method of claim 3, wherein the step of determining the positions of the rising and falling edges of the pulse comprises:
a. performing curve fitting on the group A data and the group B data respectively to obtain fitted curves A and B;
according to the quadratic function p(x) = a0 + a1x + a2x², piecewise curve fitting is performed on the group A data and the group B data respectively, wherein p(x) denotes the quadratic function, x denotes the Ax-group and Bx-group data, and a0, a1 and a2 denote the constant term, first-order coefficient and second-order coefficient of the quadratic function; the Ax-group and Bx-group data are substituted into p(x) to obtain a0, a1 and a2;
b. solving for the mean square error of the fitted curves A and B:
according to the formula
δi = [p(xi) − yi]²
the mean square error of the fitted curve A is found, wherein δi denotes the mean square error of the ith data point, p(xi) denotes the curve-fitted value of the ith data point, and yi denotes the ith data point in the Ay group; the mean square error of the fitted curve B is obtained in the same way;
c. taking the derivative of the fitted curve A and examining the derivatives over the Ax group and the Ay group:
when the derivatives over the Ax group and the Ay group are all positive or all negative, the mean square error of the fitted curve A is judged qualified and a group A mark is output; otherwise, the data processing is abandoned and the pulse data are re-extracted;
the derivative of the fitted curve B is taken, and the same operation as for group A is performed on the group B data;
d. judging the consistency of the group A and group B marks:
when the A and B marks are consistent, the marked segment is considered to be the pulse rising-edge position;
when the A and B marks are inconsistent, the marked segment is considered to be the pulse falling-edge position.
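Steps a-c above can be sketched as follows. The fit uses ordinary least squares, and the qualification test simply checks that the fitted derivative p'(x) = a1 + 2·a2·x keeps a single sign over the segment (a monotone edge). This is one interpretation of the claim, not a definitive implementation; the qualification threshold is illustrative.

```python
import numpy as np

def fit_and_check(x, y):
    """Fit p(x) = a0 + a1*x + a2*x^2, return per-point squared errors
    and whether the derivative keeps one sign over the segment."""
    a2, a1, a0 = np.polyfit(x, y, 2)      # least-squares quadratic fit
    p = a0 + a1 * x + a2 * x ** 2         # fitted values p(xi)
    sq_err = (p - y) ** 2                 # squared error of each point
    deriv = a1 + 2 * a2 * x               # p'(x) over the segment
    monotone = bool(np.all(deriv > 0) or np.all(deriv < 0))
    return sq_err, monotone

x = np.arange(10, dtype=float)
y = 0.5 * x + 0.01 * x ** 2               # a steadily rising edge
sq_err, monotone = fit_and_check(x, y)
print(monotone)                           # True: derivative positive everywhere
```

A segment passing this check on both groups would be marked; matching marks then indicate a rising edge, mismatched marks a falling edge, per step d.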
5. The method of claim 2, wherein segmenting the fine features comprises: generating a spectrum amplitude, finding the maximum of the spectrum amplitude, and performing maximum-minimum normalization; wherein:
generating the spectrum amplitude specifically comprises: intercepting effective single-pulse segments according to the determined pulse rising-edge and falling-edge positions, and performing a fast Fourier transform on each to generate the spectrum amplitude;
finding the maximum of the spectrum amplitude specifically comprises: performing absolute-value normalization on the spectrum amplitude, eliminating clutter frequencies, finding the maximum of the spectrum amplitude, and extracting 128 frequency points on each side of the maximum position to form a new spectrogram;
the maximum-minimum normalization specifically comprises: performing maximum-minimum normalization at the ratio 1/100 on the new spectrogram to obtain the spectrogram with fine features.
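A hedged sketch of this spectrum processing: FFT of an intercepted single-pulse segment, absolute-value normalization, and extraction of 128 frequency points on each side of the spectral peak. The clutter-elimination step is omitted, and since the meaning of the "ratio 1/100" is not fully specified, a plain maximum-minimum normalization is shown instead. Signal parameters are illustrative.

```python
import numpy as np

fs = 1000.0                                    # assumed sample rate, Hz
t = np.arange(1024) / fs
pulse = np.sin(2 * np.pi * 200.0 * t)          # an intercepted pulse segment

spectrum = np.abs(np.fft.fft(pulse))
spectrum = spectrum / spectrum.max()           # absolute-value normalization

peak = int(np.argmax(spectrum[: len(spectrum) // 2]))   # positive-freq peak
lo, hi = max(peak - 128, 0), peak + 128
window = spectrum[lo:hi]                       # 128 points on each side

# plain maximum-minimum normalization of the windowed spectrum
norm = (window - window.min()) / (window.max() - window.min() + 1e-12)
print(window.shape)                            # (256,)
```

Centering a fixed-width window on the peak makes the resulting 256-point spectrogram translation-invariant with respect to the carrier frequency, which is what lets the network focus on modulation fine structure.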
6. The method according to claim 1, wherein the dual-channel heterogeneous fusion network model in step S2 is a neural network, and its basic units comprise: a convolution unit, a batch normalization unit and a linear rectification unit.
7. The method of claim 6, wherein extracting the backbone network comprises extracting a main network and extracting a sub-network; wherein:
the step of extracting the main network comprises: performing two rounds of primary contour-feature extraction and dimension reduction on the upper-part fine feature spectrum, then performing residual deep-feature extraction, merging the fully connected layers, and outputting the one-dimensional feature value of the upper-part fine feature spectrum;
the step of extracting the sub-network comprises: performing primary contour-feature extraction on the lower-part fine feature spectrum with a convolutional neural network, then feeding the result into a recurrent network for transverse-correlation feature extraction, and finally reshaping and outputting the one-dimensional feature value of the lower-part fine feature spectrum.
8. The method of claim 7, wherein the feature fusing step comprises:
the center-frequency one-dimensional feature value extracted by the main network and the base-frequency one-dimensional feature value extracted by the sub-network are directly spliced end to end to obtain the one-dimensional feature value of the spectrogram;
and the radar signal modulation type and the one-dimensional feature value of the spectrogram are sent to a feature classifier for storage.
9. The method of claim 8, wherein the prediction output comprises: the feature classifier matches the feature value extracted in training against the feature values stored in the classifier to judge whether the match succeeds:
if the match succeeds, the radar signal modulation type of the training is identified and output;
and if the match fails, the radar signal modulation type and feature value of the training are fed back to the dual-channel heterogeneous fusion network model, and an unknown radar signal modulation type is added.
CN202210096729.8A 2022-01-27 2022-01-27 Radar signal modulation type identification method based on dual-channel heterogeneous fusion network Active CN114114227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210096729.8A CN114114227B (en) 2022-01-27 2022-01-27 Radar signal modulation type identification method based on dual-channel heterogeneous fusion network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210096729.8A CN114114227B (en) 2022-01-27 2022-01-27 Radar signal modulation type identification method based on dual-channel heterogeneous fusion network

Publications (2)

Publication Number Publication Date
CN114114227A CN114114227A (en) 2022-03-01
CN114114227B true CN114114227B (en) 2022-05-31

Family

ID=80361982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210096729.8A Active CN114114227B (en) 2022-01-27 2022-01-27 Radar signal modulation type identification method based on dual-channel heterogeneous fusion network

Country Status (1)

Country Link
CN (1) CN114114227B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934597B (en) * 2023-09-14 2023-12-08 Institute of Automation, Chinese Academy of Sciences FDNN model-based magnetic particle imaging spatial resolution improvement method

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073059A (en) * 2010-12-31 2011-05-25 Huazhong University of Science and Technology Digital pileup waveform processing method and system
CN111903240B (en) * 2010-03-18 2014-02-12 The 54th Research Institute of China Electronics Technology Group Corporation Analysis and identification method based on signal fine feature extraction
CN107463966A (en) * 2017-08-17 2017-12-12 University of Electronic Science and Technology of China Radar range profile target identification method based on dual-depth neural network
CN107577999A (en) * 2017-08-22 2018-01-12 Harbin Engineering University Radar emitter signal intra-pulse modulation mode recognition method based on singular value and fractal dimension
CN108872962A (en) * 2018-05-10 2018-11-23 Nanjing University of Aeronautics and Astronautics Laser radar weak signal extraction and decomposition method based on fractional-order Fourier transform
CN110222748A (en) * 2019-05-27 2019-09-10 Southwest Jiaotong University OFDM radar signal recognition method based on 1D-CNN multi-domain feature fusion
CN111382803A (en) * 2020-03-18 2020-07-07 University of Electronic Science and Technology of China Feature fusion method based on deep learning
CN111399002A (en) * 2020-04-09 2020-07-10 Xi'an Jiaotong University GNSS receiver combined interference classification and identification method based on two-stage neural network
CN111474524A (en) * 2020-04-22 2020-07-31 Anhui Huake Intelligent Technology Co., Ltd. Radar interference equipment interference effect monitoring and decision support system
CN111983567A (en) * 2020-07-21 2020-11-24 The 36th Research Institute of China Electronics Technology Group Corporation Radar signal micro-feature analysis system based on digital filtering
CN112560803A (en) * 2021-01-22 2021-03-26 Nanjing University of Aeronautics and Astronautics Radar signal modulation identification method based on time-frequency analysis and machine learning
CN112882009A (en) * 2021-01-12 2021-06-01 Xidian University Radar micro-Doppler target identification method based on amplitude and phase dual-channel network
CN113759318A (en) * 2021-09-28 2021-12-07 Nanjing Guoli Electronic Technology Co., Ltd. Automatic identification method for intra-pulse modulation type of radar signal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2860882B1 (en) * 2003-10-10 2006-02-03 Thales Sa METHOD FOR PRE-DETECTING RESPONSES IN SECONDARY RADAR AND APPLICATION TO DETECTION OF S MODE RESPONSES
EP1777547A1 (en) * 2005-10-24 2007-04-25 Mitsubishi Electric Information Technology Centre Europe B.V. Signal processing and time delay measurement based on combined correlation and differential correlation


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Feature extraction using surrounding-line integral bispectrum for radar emitter signal";Tao-wei Chen等;《2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence)》;20081231;全文 *
"基于包络指纹特征的辐射源分类识别技术";金丽洁等;《现代雷达》;20200131;全文 *
"基于卷积神经网络的雷达辐射源识别";牛浩楠;《现代防御技术》;20210630;全文 *
"基于卷积神经网络的雷达辐射源识别方法研究";鱼轩瑞;《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》;20200215;正文第1-2、15、18、31-32、35-36、46-47页 *

Also Published As

Publication number Publication date
CN114114227A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN110826630B (en) Radar interference signal feature level fusion identification method based on deep convolutional neural network
CN110363182B (en) Deep learning-based lane line detection method
US6018728A (en) Method and apparatus for training a neural network to learn hierarchical representations of objects and to detect and classify objects with uncertain training data
CN109636784B (en) Image saliency target detection method based on maximum neighborhood and super-pixel segmentation
CN107145889B (en) Target identification method based on double CNN network with RoI pooling
CN111126386B (en) Sequence domain adaptation method based on countermeasure learning in scene text recognition
US11468273B2 (en) Systems and methods for detecting and classifying anomalous features in one-dimensional data
CN110826558B (en) Image classification method, computer device, and storage medium
CN114114227B (en) Radar signal modulation type identification method based on dual-channel heterogeneous fusion network
CN112560803A (en) Radar signal modulation identification method based on time-frequency analysis and machine learning
Li et al. Apple grading method based on features fusion of size, shape and color
CN109934088A (en) Sea ship discrimination method based on deep learning
CN116166960A (en) Big data characteristic cleaning method and system for neural network training
CN111814825B (en) Apple detection grading method and system based on genetic algorithm optimization support vector machine
CN114359288A (en) Medical image cerebral aneurysm detection and positioning method based on artificial intelligence
US7620246B2 (en) Method and apparatus for image processing
CN114764801A (en) Weak and small ship target fusion detection method and device based on multi-vision significant features
CN117593193B (en) Sheet metal image enhancement method and system based on machine learning
CN111860488A (en) Method, device, equipment and medium for detecting and identifying bird nest of tower
CN115033721A (en) Image retrieval method based on big data
CN113627481A (en) Multi-model combined unmanned aerial vehicle garbage classification method for smart gardens
CN116246174B (en) Sweet potato variety identification method based on image processing
CN116206208A (en) Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence
CN113688953B (en) Industrial control signal classification method, device and medium based on multilayer GAN network
CN112132104B (en) ISAR ship target image domain enhancement identification method based on loop generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant