CN114818770A - Radar signal identification based on time-frequency image feature fusion - Google Patents

Radar signal identification based on time-frequency image feature fusion

Info

Publication number
CN114818770A
CN114818770A
Authority
CN
China
Prior art keywords
time
radar signal
gradient
image
frequency image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210194049.XA
Other languages
Chinese (zh)
Inventor
李世通
全大英
金小萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Metrology
Original Assignee
China University of Metrology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Metrology filed Critical China University of Metrology
Priority to CN202210194049.XA priority Critical patent/CN114818770A/en
Publication of CN114818770A publication Critical patent/CN114818770A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 Preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/435 Computation of moments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 Feature extraction
    • G06F 2218/10 Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Signal Processing (AREA)
  • Nonlinear Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a radar signal identification method based on time-frequency image feature fusion, which mainly addresses the low radar signal recognition rate of prior-art methods in low signal-to-noise-ratio environments. The method comprises the following steps: (1) performing time-frequency transformation on the radar signal to obtain time-frequency images; (2) preprocessing the time-frequency images; (3) extracting texture features and shape features of the images; (4) identifying the radar signal with a support vector machine. The method extracts texture and shape features from the Choi-Williams time-frequency image and from the contour map of the ambiguity function, fuses them into a new feature vector for radar signal identification, and thereby improves the recognition rate at low signal-to-noise ratio.

Description

Radar signal identification based on time-frequency image feature fusion
Technical Field
The invention belongs to the field of radar signal processing, and particularly relates to a radar signal identification method based on time-frequency image feature fusion in the technical field of signal identification.
Background
Radar signal identification is a key link in electronic reconnaissance systems; its purpose is to extract characteristic parameters from sorted radar signals and to identify the radar emitter signals automatically. With the increasing complexity of the modern battlefield electromagnetic environment and the wide application of novel radar systems, identifying radar signals quickly and effectively has become crucial. The traditional method based on the five conventional characteristic parameters (carrier frequency, pulse width, pulse amplitude, time of arrival and angle of arrival) can no longer identify signals effectively; for radar emitter signals with diverse modulation modes, effective signal features must be extracted to recognize the modulation mode. Intra-pulse characteristic parameters effectively reflect the essential information of a signal and expand the parameter space for signal identification, so research on intra-pulse features has become a hotspot. At present, most existing methods target recognition of the signal modulation type under high signal-to-noise ratio, where they achieve high recognition rates. Under low signal-to-noise ratio, however, few methods are available and recognition rates are low; for complex modulation types in particular, methods are scarce, recognition rates are low, and algorithm robustness is poor.
Among existing techniques, the ambiguity function is an important mathematical tool for studying radar signals: it can be used both to study different radar waveforms and to characterize the differences between radar signals. Time-frequency analysis is an important method for processing non-stationary signals; common time-frequency analysis methods include the short-time Fourier transform (STFT), the Wigner-Ville distribution (WVD) and the Choi-Williams distribution (CWD). The STFT suffers from poor time-frequency concentration, and the WVD produces cross-terms that interfere with the features of the true signal, whereas the CWD offers good time-frequency resolution and effectively suppresses cross-terms. To overcome the shortcomings of existing schemes, the invention exploits the advantages of both the ambiguity function and CWD time-frequency analysis and adopts a low-complexity SVM classifier, so as to identify radar signals more accurately under low signal-to-noise ratio.
Disclosure of Invention
In view of the defects and shortcomings of the prior art, the invention aims to design a radar signal identification method based on time-frequency image feature fusion that solves the problem of low radar signal recognition rates at low signal-to-noise ratio.
The object of the invention is achieved by the following steps:
(1) obtaining a CWD time-frequency image of the radar signal by Choi-Williams time-frequency transformation, producing a three-dimensional plot of the signal's ambiguity function, and then deriving the corresponding ambiguity function contour map;
(2) preprocessing the CWD time-frequency image and the ambiguity function contour map obtained in step (1), comprising the following steps:
(2-1) graying to obtain a grayscale map;
(2-2) filtering the grayscale image to obtain a filtered image, for example using Wiener filtering;
(2-3) scaling the filtered image by bicubic interpolation to a set size, such as 224 × 224;
(3) extracting a characteristic value of a signal, comprising:
extracting texture features of the CWD time-frequency image and the ambiguity function contour map using the gray-gradient co-occurrence matrix; extracting shape features of the CWD time-frequency image and the ambiguity function contour map using pseudo-Zernike moments;
(4) fusing image texture features and shape features;
(5) performing recognition processing with an SVM:
(5-1) adding labels to the corresponding feature vectors;
(5-2) determining the values of a penalty factor C and a kernel function parameter g of the SVM by adopting a grid parameter optimization method;
(5-3) identifying the received signal and outputting the result.
The mathematical expression of the Choi-Williams distribution in step (1) is:

$$CWD_s(t,\omega)=\iint_{-\infty}^{\infty}\sqrt{\frac{\sigma}{4\pi\tau^{2}}}\,\exp\!\left[-\frac{\sigma(t-u)^{2}}{4\tau^{2}}\right]s\!\left(u+\frac{\tau}{2}\right)s^{*}\!\left(u-\frac{\tau}{2}\right)e^{-j\omega\tau}\,du\,d\tau$$

where t denotes time, s(t) the radar signal, * complex conjugation, e the exponential with natural base, j the imaginary unit, and σ the attenuation coefficient, whose value scales the amplitude of the cross-terms.
The mathematical expression of the ambiguity function in step (1) is:

$$A(\tau,f_{d})=\int_{-\infty}^{\infty}s(t)\,s^{*}(t-\tau)\,e^{j2\pi f_{d}t}\,dt$$

where τ denotes the time delay, t time, f_d the Doppler frequency shift, * complex conjugation, e the exponential with natural base, and j the imaginary unit.
The gray-gradient co-occurrence matrix (GLGCM) of step (3) is computed as follows. Denote the grayscale image by f(M,N), where M and N are the numbers of rows and columns of the corresponding two-dimensional matrix; the corresponding GLGCM is computed by:
(1) Calculate the normalized gradient matrix of f(M,N). The gradient matrix g(M,N) of f(M,N) is extracted with a Sobel operator on a 3 × 3 window; the gradient value of the (k,l)-th pixel is

$$g(k,l)=\sqrt{g_{x}^{2}+g_{y}^{2}}$$

$$g_{x}=f(k+1,l-1)+2f(k+1,l)+f(k+1,l+1)-f(k-1,l-1)-2f(k-1,l)-f(k-1,l+1)$$

$$g_{y}=f(k-1,l+1)+2f(k,l+1)+f(k+1,l+1)-f(k-1,l-1)-2f(k,l-1)-f(k+1,l-1)$$

where k = 1,2,…,M and l = 1,2,…,N. The normalized gradient matrix is obtained with the formula

$$G(k,l)=\mathrm{INT}\!\left[g(k,l)\times\frac{N_{g}}{g_{\max}}\right]+1$$

where INT is the rounding operation, g_max is the largest gradient value in g(M,N), and N_g is the desired maximum value after gradient normalization; N_g = 32 is preferred.
(2) Calculate the normalized gray matrix of f(M,N), given by the formula

$$F(k,l)=\mathrm{INT}\!\left[f(k,l)\times\frac{N_{f}}{f_{\max}}\right]+1$$

where f_max is the maximum gray value in f(M,N) and N_f is the desired maximum value after gray-value normalization; N_f = 32 is preferred.
(3) The element H(i,j) of the gray-gradient co-occurrence matrix is the number of pixels for which simultaneously F(k,l) = i (i ∈ [1, N_f]) and G(k,l) = j (j ∈ [1, N_g]) in the normalized gray and gradient matrices. The normalized GLGCM is obtained with the formula

$$\hat{H}(i,j)=\frac{H(i,j)}{\sum_{i=1}^{N_{f}}\sum_{j=1}^{N_{g}}H(i,j)}$$
the pseudo-Zernike moment in the step (3) is an orthogonal complex moment, the order is p, and the pseudo-Zernike moment with the repetition degree of q is defined as:
Figure BDA0003526270910000042
x 2 +y 2 =1
in the formula, p is a positive integer or zero, q is an integer, | q | < p, and f (x, y) is an image function; wherein the content of the first and second substances,
Figure BDA0003526270910000043
expressed in polar coordinates as
Figure BDA0003526270910000044
Figure BDA0003526270910000045
For kinematic rotation invariance and reducing the dynamic range of pseudo-Zernike moments
Figure BDA0003526270910000046
Extracting the texture and shape features of the images thus means: extract the texture features T_CWD and T_AFCL of the two images with the GLGCM, extract their shape features Z_CWD and Z_AFCL with the pseudo-Zernike moments, and combine the two kinds of feature parameters into the feature vector [T_CWD, T_AFCL, Z_CWD, Z_AFCL].
The invention has the following outstanding advantages: 1. effective identification of radar signals of 8 different modulation types based on time-frequency image feature fusion, covering frequency-modulated, phase-modulated and composite signals mixing two modulation modes; 2. an average correct recognition rate for the 8 signals above 80% at signal-to-noise ratios as low as -8 dB; 3. all radar signal parameters are drawn from a dynamic range, so the method is insensitive to parameter variation, generalizes well and meets practical requirements.
Drawings
FIG. 1 is a schematic diagram of a radar signal identification method based on time-frequency image feature fusion according to the present invention.
FIG. 2 is a diagram of an SVM training model used in the practice of the present invention to identify 8 radar signals.
Fig. 3 shows the recognition rate of each of the 8 radar signals as a function of signal-to-noise ratio in an implementation of the invention.
Fig. 4 is a graph of the overall correct recognition rate of 8 radar signals as a function of the signal-to-noise ratio in the practice of the present invention.
Fig. 5 is a graph of a confusion matrix for 8 radar signals in an implementation of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the radar signal identification method based on time-frequency image feature fusion mainly comprises the following steps: first, time-frequency transformation is performed on the radar signal to obtain time-frequency images of signals with different modulation types; the images are then preprocessed to suppress the influence of noise and to reduce the computational load; next, the shape and texture features of the images are extracted and fused; finally, the intra-pulse modulation mode of the radar signal is identified by the SVM and the result is output.
The specific method for processing the radar signal comprises the following steps:
the radar signal time frequency transformation comprises CWD time frequency transformation and a fuzzy function. Wherein the mathematical expression of CWD time-frequency transformation is
$$CWD_s(t,\omega)=\iint_{-\infty}^{\infty}\sqrt{\frac{\sigma}{4\pi\tau^{2}}}\,\exp\!\left[-\frac{\sigma(t-u)^{2}}{4\tau^{2}}\right]s\!\left(u+\frac{\tau}{2}\right)s^{*}\!\left(u-\frac{\tau}{2}\right)e^{-j\omega\tau}\,du\,d\tau$$
where t denotes time, s(t) the radar signal, * complex conjugation, e the exponential with natural base, j the imaginary unit, and σ the attenuation coefficient, whose value scales the amplitude of the cross-terms.
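For concreteness, a compact discrete-time sketch of the CWD follows. It is an illustration under one common convention (instantaneous autocorrelation, Choi-Williams kernel applied in the ambiguity domain, transform back to a time-frequency map), not the patented implementation; normalization constants and axis scaling are simplified.

```python
import numpy as np

def compute_cwd(s, sigma=1.0):
    """Discrete Choi-Williams distribution sketch: instantaneous
    autocorrelation -> FFT over time (ambiguity domain) -> exponential
    kernel exp(-theta^2 tau^2 / sigma) -> back to a (time, frequency) map."""
    s = np.asarray(s, dtype=complex)
    N = len(s)
    lags = np.arange(-N // 2, N // 2)
    n = np.arange(N)
    R = np.zeros((N, N), dtype=complex)            # R[n, m] = s[n+m] conj(s[n-m])
    for col, m in enumerate(lags):
        ok = (n + m >= 0) & (n + m < N) & (n - m >= 0) & (n - m < N)
        R[ok, col] = s[n[ok] + m] * np.conj(s[n[ok] - m])
    A = np.fft.fft(R, axis=0)                      # ambiguity (theta, tau) domain
    theta = 2 * np.pi * np.fft.fftfreq(N)[:, None] # Doppler axis, rad/sample
    A *= np.exp(-(theta * lags[None, :]) ** 2 / sigma)  # Choi-Williams kernel
    R_s = np.fft.ifft(A, axis=0)                   # kernel-smoothed autocorrelation
    return np.abs(np.fft.fftshift(np.fft.fft(R_s, axis=1), axes=1))
```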
The ambiguity function is another time-frequency distribution function and an important mathematical tool for studying radar signals. It can be used not only to study different radar waveforms but also to characterize the differences between radar signals. Its mathematical expression is
$$A(\tau,f_{d})=\int_{-\infty}^{\infty}s(t)\,s^{*}(t-\tau)\,e^{j2\pi f_{d}t}\,dt$$
where τ denotes the time delay, t time, f_d the Doppler frequency shift, * complex conjugation, e the exponential with natural base, and j the imaginary unit.
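A short sketch of how the ambiguity surface can be evaluated numerically is given below; the circular shift used for the delay is an approximation, and the contour map would then be drawn from the returned magnitude, e.g. with matplotlib's contour.

```python
import numpy as np

def ambiguity_surface(s):
    """|A(tau, f_d)|: for each delay tau, the FFT over t of
    s(t) * conj(s(t - tau)) yields one column of the surface."""
    s = np.asarray(s, dtype=complex)
    N = len(s)
    A = np.empty((N, N))
    for col, tau in enumerate(range(-N // 2, N // 2)):
        prod = s * np.conj(np.roll(s, tau))        # circular shift approximates delay
        A[:, col] = np.abs(np.fft.fftshift(np.fft.fft(prod)))
    return A
```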
Owing to noise and time-frequency cross-terms, the signal time-frequency image contains a large amount of interference. Preprocessing the time-frequency image with image processing techniques before feature extraction therefore effectively reduces interference and redundant information and strengthens the feature extraction. During preprocessing, the time-frequency image is converted to grayscale, the grayscale image is Wiener-filtered, and finally the filtered image is scaled to 224 × 224 by bicubic interpolation.
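A minimal sketch of this preprocessing chain using OpenCV and SciPy follows; the 5 × 5 Wiener window is an assumed choice, since the text does not specify one.

```python
import cv2
import numpy as np
from scipy.signal import wiener

def preprocess(img_rgb):
    """Grayscale -> Wiener filtering -> bicubic resize to 224 x 224."""
    gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY).astype(np.float64)
    filtered = wiener(gray, mysize=(5, 5))               # assumed window size
    resized = cv2.resize(filtered, (224, 224), interpolation=cv2.INTER_CUBIC)
    return np.clip(resized, 0, 255).astype(np.uint8)
```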
The method adopts the GLGCM and pseudo-Zernike moments to extract the texture features and shape features of the radar signal, respectively.
1. The texture feature extraction steps are as follows (a code sketch follows this list):
(1) Calculate the normalized gradient matrix of f(M,N). The gradient matrix g(M,N) of f(M,N) is extracted with a Sobel operator on a 3 × 3 window; the gradient value of the (k,l)-th pixel is

$$g(k,l)=\sqrt{g_{x}^{2}+g_{y}^{2}}$$

$$g_{x}=f(k+1,l-1)+2f(k+1,l)+f(k+1,l+1)-f(k-1,l-1)-2f(k-1,l)-f(k-1,l+1)$$

$$g_{y}=f(k-1,l+1)+2f(k,l+1)+f(k+1,l+1)-f(k-1,l-1)-2f(k,l-1)-f(k+1,l-1)$$

where k = 1,2,…,M and l = 1,2,…,N. The normalized gradient matrix is obtained with the formula

$$G(k,l)=\mathrm{INT}\!\left[g(k,l)\times\frac{N_{g}}{g_{\max}}\right]+1$$

where INT is the rounding operation, g_max is the largest gradient value in g(M,N), and N_g is the desired maximum value after gradient normalization; this embodiment takes N_g = 32.
(2) Calculate the normalized gray matrix of f(M,N), given by the formula

$$F(k,l)=\mathrm{INT}\!\left[f(k,l)\times\frac{N_{f}}{f_{\max}}\right]+1$$

where f_max is the maximum gray value in f(M,N) and N_f is the desired maximum value after gray-value normalization; this embodiment takes N_f = 32.
(3) The element H(i,j) of the gray-gradient co-occurrence matrix is the number of pixels for which simultaneously F(k,l) = i (i ∈ [1, N_f]) and G(k,l) = j (j ∈ [1, N_g]) in the normalized gray and gradient matrices. The normalized GLGCM is obtained with the formula

$$\hat{H}(i,j)=\frac{H(i,j)}{\sum_{i=1}^{N_{f}}\sum_{j=1}^{N_{g}}H(i,j)}$$
(4) This embodiment selects 15 GLGCM feature parameters: small-gradient dominance (T_1), large-gradient dominance (T_2), gradient-distribution non-uniformity (T_3), gray-distribution non-uniformity (T_4), energy (T_5), gray mean (T_6), gradient mean (T_7), gray standard deviation (T_8), gradient standard deviation (T_9), correlation (T_10), gray entropy (T_11), gradient entropy (T_12), mixed entropy (T_13), difference moment (T_14) and inverse difference moment (T_15).
(5) The texture feature vectors extracted from the CWD image and from the ambiguity-function contour map are, respectively,

T_CWD = [T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9, T_10, T_11, T_12, T_13, T_14, T_15]

T_AFCL = [T_1, T_2, T_3, T_4, T_5, T_6, T_7, T_8, T_9, T_10, T_11, T_12, T_13, T_14, T_15]
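The sketch below illustrates steps (1) to (3) with NumPy/SciPy. The boundary handling of the quantization (clipping instead of the exact INT[...] + 1 at the maximum value) is a simplification, and small_gradient_dominance shows just one of the 15 features (T_1) by way of example.

```python
import numpy as np
from scipy import ndimage

def glgcm_normalized(gray, Nf=32, Ng=32):
    """Normalized gray-gradient co-occurrence matrix (steps (1)-(3))."""
    f = gray.astype(np.float64)
    gx = ndimage.sobel(f, axis=0)                      # 3x3 Sobel, row direction
    gy = ndimage.sobel(f, axis=1)                      # 3x3 Sobel, column direction
    g = np.sqrt(gx ** 2 + gy ** 2)
    # quantize gray values into [1, Nf] and gradient values into [1, Ng]
    F = np.clip((f * Nf / max(f.max(), 1e-12)).astype(int) + 1, 1, Nf)
    G = np.clip((g * Ng / max(g.max(), 1e-12)).astype(int) + 1, 1, Ng)
    H = np.zeros((Nf, Ng))
    np.add.at(H, (F.ravel() - 1, G.ravel() - 1), 1)    # co-occurrence counts
    return H / H.sum()

def small_gradient_dominance(Hn):
    """Example feature T_1 computed from the normalized GLGCM."""
    j = np.arange(1, Hn.shape[1] + 1)[None, :]
    return (Hn / j ** 2).sum()
```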
2. The shape features of the images use pseudo-Zernike moments (a code sketch follows below). The pseudo-Zernike moment of order p and repetition q is defined as

$$Z_{pq}=\frac{p+1}{\pi}\iint_{x^{2}+y^{2}\le 1}f(x,y)\,V_{pq}^{*}(x,y)\,dx\,dy$$

where p is a positive integer or zero, q is an integer with |q| ≤ p, and f(x,y) is the image function. The basis function V_pq, expressed in polar coordinates, is

$$V_{pq}(r,\theta)=R_{pq}(r)\,e^{jq\theta}$$

with radial polynomial

$$R_{pq}(r)=\sum_{s=0}^{p-|q|}(-1)^{s}\,\frac{(2p+1-s)!}{s!\,(p-|q|-s)!\,(p+|q|+1-s)!}\,r^{p-s}$$

For rotation invariance and to reduce the dynamic range of the pseudo-Zernike moments, the modulus |Z_pq| is used.
The invention selects seven pseudo-Zernike moments, which form a 7-dimensional feature vector.
The shape feature vectors of the CWD image and of the ambiguity-function contour map are therefore, respectively,

Z_CWD = [Z_1, Z_2, …, Z_7]

Z_AFCL = [Z_1, Z_2, …, Z_7]
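A direct pixel-domain sketch of the pseudo-Zernike moment follows; the seven (p, q) orders in the usage comment are an assumed illustrative choice, since the text does not list the selected orders.

```python
import numpy as np
from math import factorial

def pseudo_zernike(f, p, q):
    """|Z_pq| of a square grayscale image f, by direct summation over the
    pixels mapped onto the unit disc (requires |q| <= p)."""
    N = f.shape[0]
    y, x = np.mgrid[-1:1:complex(0, N), -1:1:complex(0, N)]
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    mask = r <= 1.0
    R = np.zeros_like(r)                       # radial polynomial R_pq(r)
    for s in range(p - abs(q) + 1):
        c = ((-1) ** s * factorial(2 * p + 1 - s)
             / (factorial(s) * factorial(p - abs(q) - s)
                * factorial(p + abs(q) + 1 - s)))
        R += c * r ** (p - s)
    V_conj = R * np.exp(-1j * q * theta)       # conjugate basis V_pq*
    Z = (p + 1) / np.pi * np.sum(f[mask] * V_conj[mask]) * (2.0 / N) ** 2
    return np.abs(Z)                           # modulus: rotation invariant

# Hypothetical choice of seven low-order moments for the 7-dim shape vector:
# z = [pseudo_zernike(img, p, q) for p, q in
#      [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2), (3, 0)]]
```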
Combining the shape and texture features of the images yields the feature vector used by the invention:

[T_CWD, T_AFCL, Z_CWD, Z_AFCL]
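Feature-level fusion then reduces to a simple concatenation; a one-line sketch, assuming the vectors t_cwd, t_afcl (15-dimensional) and z_cwd, z_afcl (7-dimensional) were computed as sketched above:

```python
import numpy as np

# [T_CWD, T_AFCL, Z_CWD, Z_AFCL]: 15 + 15 + 7 + 7 = 44-dimensional feature vector
fused = np.concatenate([t_cwd, t_afcl, z_cwd, z_afcl])
```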
A specific implementation of the technical scheme of the invention is verified by experiments:
in the verification experiment, 8 typical radar radiation source signals are selected to establish a database: conventional radar signal (CW), a chirp signal (LFM), a binary coded signal (BPSK), a polyphase coded signal (MPSK), a four-phase frequency coded signal (4FSK), a chirp two-phase coded complex modulated signal (LFM/BPSK), a chirp four-term frequency coded complex modulated signal (LFM/4FSK) and a two-phase coded four-term frequency coded complex modulated signal (BPSK/4 FSK). The radar signal parameters are set as follows: pulse width T6 mus, sampling frequency f s The remaining parameters are shown in the table below, where U (·) is based on the sampling frequency f s Is uniformly distributed, e.g. U (1/8,1/4) indicates a parameter in the range of [ f ] s /8,f s /4]A random number in between.
Table 1: simulation signal parameter settings (the table is reproduced only as an image in the source document).
The radar signal database is built over a signal-to-noise-ratio range of -8 dB to 8 dB in steps of 2 dB. At each signal-to-noise ratio, each signal type produces 250 pairs of CWD and AFCL images, of which 200 are used for the training set and the rest for the test set.
The method adopts a support vector machine with a radial-basis-function (RBF) kernel to identify the radar signals. The support vector machine is a supervised learning algorithm with target outputs, used mainly for data classification in data mining and pattern recognition. In the SVM algorithm, the choice among kernel types has little influence on the result, whereas the penalty factor C and the kernel parameter g play an important role.
The invention uses grid parameter optimization to determine the values of the penalty factor C and the kernel parameter g. Fig. 2 shows the SVM training model. First the parameter ranges of C and g are set: C = 2^i, g = 2^j with (i, j) ∈ R. For each parameter pair (C_i, g_j), the training set is by default divided into 5 parts; each part in turn serves as a validation set while the classifier is trained on the rest, and cycling through all parts yields an average recognition accuracy for that pair. Finally, the (C, g) pair with the best accuracy is selected. In the invention, the range of the exponents of C and g is set to [-10, 10] with a step of 0.2 to keep the model computation efficient; to terminate training effectively, the error threshold is set to 10^-4 and the cross-validation parameter to 5.
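With scikit-learn, this grid search can be sketched as follows; X_train and y_train stand for the fused feature vectors and their labels and are assumed to exist, and tol mirrors the 10^-4 error threshold.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

exponents = np.arange(-10, 10.2, 0.2)              # C = 2^i, g = 2^j, step 0.2
param_grid = {"C": 2.0 ** exponents, "gamma": 2.0 ** exponents}
search = GridSearchCV(
    SVC(kernel="rbf", tol=1e-4),                   # RBF kernel, 1e-4 error threshold
    param_grid,
    cv=5,                                          # 5-fold cross-validation
    n_jobs=-1,
)
search.fit(X_train, y_train)                       # fused features and labels
print(search.best_params_, search.best_score_)
```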
Simulation experiment 1:
the above 8 typical radar radiation source signals were identified, each signal yielding 250 sample data at each signal-to-noise ratio, 80% of which was used for training and the remaining 20% for testing. Therefore, 1600 sample data and 400 sample data of different radar signals under the same signal-to-noise ratio are respectively arranged in the training set and the test set. The range of the signal-to-noise ratio is-8 dB to 8dB, an experiment is performed every 2dB, and the identification rate of each radar modulation signal under different signal-to-noise ratios is shown in figure 3. As can be seen from fig. 3, the recognition accuracy of the recognition method of the present invention for 8 typical radar signals increases with the signal-to-noise ratio. Fig. 4 is a plot of the overall recognition rate of 8 exemplary radar signals. It can be seen from fig. 4 that the recognition method of the present invention has a high recognition accuracy at a low signal-to-noise ratio, the overall recognition rate of 8 signals can reach 81.5% in an environment with a signal-to-noise ratio of-8 dB, and the overall recognition rate of 8 signals can reach 100% in an environment with a signal-to-noise ratio of-2 dB, which indicates that the method of the present invention has a good recognition performance.
Simulation experiment 2:
robustness verification was performed on the above 8 typical radar radiation source signals. The snr range was set to-8 dB to 8dB, stepped by 2dB, with 8 signals at each snr generating 10 groups of data samples for training and 5 groups of data samples for testing. Thus, the training set and the test set have 720 combinations and 360 sets of sample data, respectively, and the confusion matrix of the test results is shown in FIG. 5. The diagonal values of fig. 5 indicate the probability of such signals being correctly identified, and it can be seen from fig. 5 that the identification rate of each signal is above 90%, where the identification rate of CW, BPSK/4FSK signals is 100%. The recognition rate of the 8 signals of the total confusion matrix is 96.39%, so that the method has better generalization capability and robustness under the condition of complex signal to noise ratio.

Claims (5)

1. A radar signal identification method based on time-frequency image feature fusion, characterized by comprising the following steps:
(1) obtaining a CWD time-frequency image of the received signal by Choi-Williams time-frequency transformation, producing a three-dimensional plot of the signal's ambiguity function, and then deriving the corresponding ambiguity function contour map;
(2) preprocessing the CWD time-frequency image and the ambiguity function contour map obtained in step (1), comprising the following steps:
(2-1) graying to obtain a grayscale map;
(2-2) filtering the grayscale image to obtain a filtered image;
(2-3) scaling the filtered image by bicubic interpolation to a set size;
(3) extracting a characteristic value of a signal, comprising:
extracting texture features of the CWD time-frequency image and the ambiguity function contour map using the gray-gradient co-occurrence matrix; extracting shape features of the CWD time-frequency image and the ambiguity function contour map using pseudo-Zernike moments;
(4) fusing image texture features and shape features;
(5) performing recognition processing with an SVM:
(5-1) adding labels to the corresponding feature vectors;
(5-2) determining the values of a penalty factor C and a kernel function parameter g of the SVM by adopting a grid parameter optimization method;
(5-3) identifying the received signal and outputting the result.
2. The radar signal identification method based on time-frequency image feature fusion according to claim 1, wherein the mathematical expression of the Choi-Williams distribution in step (1) is:

$$CWD_s(t,\omega)=\iint_{-\infty}^{\infty}\sqrt{\frac{\sigma}{4\pi\tau^{2}}}\,\exp\!\left[-\frac{\sigma(t-u)^{2}}{4\tau^{2}}\right]s\!\left(u+\frac{\tau}{2}\right)s^{*}\!\left(u-\frac{\tau}{2}\right)e^{-j\omega\tau}\,du\,d\tau$$

where t denotes time, s(t) the radar signal, * complex conjugation, e the exponential with natural base, j the imaginary unit, and σ the attenuation coefficient, whose value scales the amplitude of the cross-terms.
3. The radar signal identification method based on time-frequency image feature fusion according to claim 1, wherein the mathematical expression of the ambiguity function in step (1) is:

$$A(\tau,f_{d})=\int_{-\infty}^{\infty}s(t)\,s^{*}(t-\tau)\,e^{j2\pi f_{d}t}\,dt$$

where τ denotes the time delay, t time, f_d the Doppler frequency shift, * complex conjugation, e the exponential with natural base, and j the imaginary unit.
4. The radar signal identification method based on time-frequency image feature fusion according to claim 1, wherein the gray-gradient co-occurrence matrix (GLGCM) of step (3) is computed as follows: denote the grayscale image by f(M,N), where M and N are the numbers of rows and columns of the corresponding two-dimensional matrix; the corresponding GLGCM is computed by:
(1) calculating the normalized gradient matrix of f(M,N): the gradient matrix g(M,N) of f(M,N) is extracted with a Sobel operator on a 3 × 3 window, the gradient value of the (k,l)-th pixel being

$$g(k,l)=\sqrt{g_{x}^{2}+g_{y}^{2}}$$

$$g_{x}=f(k+1,l-1)+2f(k+1,l)+f(k+1,l+1)-f(k-1,l-1)-2f(k-1,l)-f(k-1,l+1)$$

$$g_{y}=f(k-1,l+1)+2f(k,l+1)+f(k+1,l+1)-f(k-1,l-1)-2f(k,l-1)-f(k+1,l-1)$$

where k = 1,2,…,M and l = 1,2,…,N; the normalized gradient matrix is obtained with the formula

$$G(k,l)=\mathrm{INT}\!\left[g(k,l)\times\frac{N_{g}}{g_{\max}}\right]+1$$

where INT is the rounding operation, g_max is the largest gradient value in g(M,N), and N_g is the desired maximum value after gradient normalization;
(2) calculating the normalized gray matrix of f(M,N), given by the formula

$$F(k,l)=\mathrm{INT}\!\left[f(k,l)\times\frac{N_{f}}{f_{\max}}\right]+1$$

where f_max is the maximum gray value in f(M,N) and N_f is the desired maximum value after gray-value normalization;
(3) the element H(i,j) of the gray-gradient co-occurrence matrix is the number of pixels for which simultaneously F(k,l) = i (i ∈ [1, N_f]) and G(k,l) = j (j ∈ [1, N_g]) in the normalized gray and gradient matrices; the normalized GLGCM is obtained with the formula

$$\hat{H}(i,j)=\frac{H(i,j)}{\sum_{i=1}^{N_{f}}\sum_{j=1}^{N_{g}}H(i,j)}$$
5. The radar signal identification method based on time-frequency image feature fusion according to claim 1, wherein the pseudo-Zernike moments in step (3) are orthogonal complex moments, the pseudo-Zernike moment of order p and repetition q being defined as

$$Z_{pq}=\frac{p+1}{\pi}\iint_{x^{2}+y^{2}\le 1}f(x,y)\,V_{pq}^{*}(x,y)\,dx\,dy$$

where p is a positive integer or zero, q is an integer with |q| ≤ p, and f(x,y) is the image function; the basis function V_pq, expressed in polar coordinates, is

$$V_{pq}(r,\theta)=R_{pq}(r)\,e^{jq\theta}$$

with radial polynomial

$$R_{pq}(r)=\sum_{s=0}^{p-|q|}(-1)^{s}\,\frac{(2p+1-s)!}{s!\,(p-|q|-s)!\,(p+|q|+1-s)!}\,r^{p-s}$$

and the modulus |Z_pq| of the moments is taken as the feature.
CN202210194049.XA 2022-03-01 2022-03-01 Radar signal identification based on time-frequency image feature fusion Pending CN114818770A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210194049.XA CN114818770A (en) 2022-03-01 2022-03-01 Radar signal identification based on time-frequency image feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210194049.XA CN114818770A (en) 2022-03-01 2022-03-01 Radar signal identification based on time-frequency image feature fusion

Publications (1)

Publication Number Publication Date
CN114818770A true CN114818770A (en) 2022-07-29

Family

ID=82528422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210194049.XA Pending CN114818770A (en) 2022-03-01 2022-03-01 Radar signal identification based on time-frequency image feature fusion

Country Status (1)

Country Link
CN (1) CN114818770A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination