CN111951611A - ADS-B weak signal detection device and method based on multi-feature fusion
- Publication number
- CN111951611A (application number CN202010629775.0A)
- Authority
- CN
- China
- Prior art keywords
- signal
- ads
- feature
- fusion
- encoder
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0073—Surveillance aids
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Aviation & Aerospace Engineering (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an ADS-B weak signal detection device and method based on multi-feature fusion. Existing signal detection systems rely mainly on the energy features of the signal and on a conventional threshold-detection mode; the disclosed device and method instead exploit the multi-feature information contained in the received signal, mitigating the loss of detection performance caused by the weakening of target signal state information under strong noise interference.
Description
Technical Field
The invention relates to the technical field of signal processing, in particular to an ADS-B weak signal detection device and method based on multi-feature fusion.
Background
Automatic Dependent Surveillance-Broadcast (ADS-B) is a technical means widely applied in air traffic control. It combines satellite navigation, communication technology, data links, airborne equipment and ground equipment to provide a safer and more efficient means of air surveillance, effectively extends surveillance coverage, and provides controllers and pilots with detailed aircraft state information.
As human electromagnetic activities in space intensify and countries around the world relax control of low-altitude airspace, the spatial electromagnetic environment is becoming increasingly complex and degraded. Because it uses an omnidirectional broadcast mode, the ADS-B system is highly susceptible to environmental and man-made noise. When an ADS-B signal is suppressed by strong noise, the target ADS-B signal is submerged in the noise; even if the receiving system senses the signal, the low signal-to-noise ratio makes it difficult to separate the weak ADS-B target signal effectively from the strong noise, so the aircraft identity and air-situation information cannot be extracted accurately, posing a serious threat to air traffic safety.
Existing ADS-B detection schemes fall mainly into two categories. The first is spatial processing based on array antennas and array theory, in which spatial filtering through beamforming reduces the influence of noise on the signal from the target azimuth. The second is detection based on time-domain information, in which transform-domain processing is applied to the acquired time-sampled sequence and signal energy feature parameters are extracted to realize detection.
At present, ADS-B signal detection relies almost exclusively on individual signal features, mainly time-domain, frequency-domain or spatial-domain intensity features: the signal-to-noise ratio is improved by filtering, accumulation and similar processing, and the target decision is then made against an intensity threshold under the Neyman-Pearson criterion. In a real, complex electromagnetic environment, however, a single feature of the target signal often fluctuates, weakens, suffers interference or even disappears; single-threshold processing then cannot meet the extraction requirements of weak ADS-B signals, so the detection rate is low or detection fails altogether.
The prior art therefore makes only single-feature use of the signal information, has insufficient detection capability for weak ADS-B signals, and has difficulty meeting the requirement of perceiving air-situation information and aircraft identity information under strong noise.
Disclosure of Invention
Aiming at the defects in the prior art, the ADS-B weak signal detection device and method based on multi-feature fusion provided by the invention improve the detection capability of the ADS-B weak signal.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: an ADS-B weak signal detection device based on multi-feature fusion comprises a signal acquisition module, a feature extraction module and a signal detection module which are sequentially connected;
the signal acquisition module is used for acquiring a periodic broadcast ADS-B pulse signal to be detected and dividing the periodic broadcast ADS-B pulse signal into a signal section and a noise section;
the feature extraction module is used for extracting multi-feature components of the signal section and the noise section, standardizing the multi-feature components, realizing feature fusion by serial concatenation, packaging the serially fused feature vectors together with the sample label information into UDP (User Datagram Protocol) data packets, and uploading the packets to an upper computer;
the signal detection module is used for receiving network and classifier model parameters sent by an upper computer, and performing binary classification judgment on an input target signal based on a stack self-encoder and SVM binary classifier model generated by training, so that ADS-B weak signal detection is realized.
Further: the signal acquisition module comprises a radio frequency front-end circuit and an intermediate frequency circuit which are connected in sequence;
the radio frequency front-end circuit is used for receiving, amplifying, mixing and filtering ADS-B signals;
the intermediate frequency circuit is used for sampling and carrying out digital down-conversion processing on the intermediate frequency signal;
the A/D sampling chip of the intermediate frequency circuit is an AD9655 chip.
Further: the feature extraction module comprises a first FPGA chip, and the model of the first FPGA chip is Xilinx Zynq 7100.
Further: the signal detection module comprises a Nor Flash off-chip storage chip and a second FPGA chip; the Nor Flash off-chip storage chip is a Micron N25Q0128A13ES and is used for storing the stack self-encoder network and SVM classifier parameters issued by the upper computer; the second FPGA chip is a Xilinx XC7VX690T and is used for controlling the network interface and receiving control instructions issued by the upper computer, and during operation it loads the target-signal series fusion features sent by the feature extraction module and the network and classifier parameters stored in the Nor Flash off-chip storage chip, realizing judgment and detection of the input signal through forward network operation.
Further: the system further comprises an upper computer high-level semantic feature extraction model and an SVM classifier generation module, wherein the upper computer high-level semantic feature extraction model and the SVM classifier generation module comprise:
the training set generating unit is used for receiving an ADS-B pulse signal series fusion characteristic training set sent by a UDP packet, wherein the training set comprises a plurality of ADS-B signal segment and noise segment training samples, and label information is marked on the training samples;
the model initialization unit is used for carrying out random initialization on model parameters of the stack self-encoder and the SVM two-classifier;
and the model training unit is used for adjusting network parameters of the stack self-encoder by utilizing a series fusion feature training set based on the GPU of the upper computer, realizing the reconstruction of input feature vectors by utilizing low-dimensional high-level semantic features until a preset convergence condition is met, adding an SVM classifier in a top coding layer of the self-encoder, and finely adjusting classification parameters by utilizing label information until the convergence condition is met.
An ADS-B weak signal detection method based on multi-feature fusion comprises the following steps:
s1, acquiring a periodically broadcasted ADS-B pulse signal sample set through a signal acquisition module, and dividing the ADS-B pulse signal into a signal section and a noise section;
s2, extracting multiple characteristic components of the ADS-B pulse signal sample set signal segment and the noise segment through a characteristic extraction module, standardizing the characteristic components, and performing series characteristic fusion to obtain series fusion characteristic vectors;
s3, extracting, through the signal detection module, low-dimensional high-level semantic features of the signal from the series fusion feature vectors by using the stack self-encoder neural network generated by offline training;
s4, judging the target signal by the signal detection module by using the low-dimensional high-level semantic features of the signal based on the SVM two classifiers generated by off-line training, and realizing the detection of the ADS-B weak signal.
Further: the specific steps of dividing the ADS-B pulse signal in step S1 are as follows: the acquired ADS-B pulse signal sample set is filtered, amplified and down-converted to obtain an intermediate-frequency signal; the intermediate-frequency signal is sampled and digitally down-converted to obtain a zero intermediate-frequency signal; and the zero intermediate-frequency signal is divided into a signal section containing both signal and noise and a noise section containing only noise.
Further: the specific steps of step S2 are:
s21, respectively extracting multi-feature components for carrying out multi-dimensional representation on the signal section and the noise section;
the multi-feature components comprise time-domain AR coefficient features, frequency-domain energy convergence point features, time-frequency image Renyi entropy features, time-frequency image pseudo-Zernike moment features and bispectrum features;
The time-domain AR coefficient features are calculated from the AR model of the signal, where r(i), i = 0, 1, ..., p, denotes the signal autocorrelation values and a_i denotes the p-order coefficient features of the time-domain AR model of the ADS-B signal set.
The frequency-domain energy convergence point features are extracted as follows:
A1. Calculate the power spectrum s(ω) of the signal x(n);
A2. Given a scale d, slide a window with a fixed step over the power spectrum sequence s(ω), compute the windowed signal power P(ω_0, d) at each point ω_0, and compute its mean value P_a and maximum value P_m:
P_a = mean{P(ω_0, d)}
P_m = max{P(ω_0, d)}
A3. Given a level value λ, the λ-level energy convergence points of the signal power spectrum s(ω) are the points ω_0 satisfying
P(ω_0, d) ≥ (1 − f(λ))·P_a + f(λ)·P_m;
A4. Establish the frequency-domain feature quantity model from the signal energy at each λ-level energy convergence point.
The Renyi entropy features and the pseudo-Zernike moment features are extracted as follows:
B1. Calculate the modified B distribution of the signal x(n) to obtain a two-dimensional time-frequency image;
B2. Express the brightness levels of the time-frequency image as gray values, convert the time-frequency image into a gray-scale image, and perform gray normalization and median filtering;
B3. Extract the 3rd-, 5th-, 7th-, 9th-, 11th- and 13th-order gray-image Renyi entropy features of the signal;
B4. Binarize the gray-scale image and extract the nine 1st- to 4th-order pseudo-Zernike moment features of the signal time-frequency distribution in the binary image.
The specific steps of the bispectrum feature extraction are as follows:
C1. Take N segments of the same type of data within a given duration, compute the bispectrum estimate of each segment, and average the estimates to obtain the bispectrum of the data to be analyzed;
C2. Vectorize the obtained bispectrum data and reduce the dimension of the bispectrum vector using the K-L transform.
S22, calculating the mean value and standard deviation of the multi-feature components, and normalizing the feature components;
and S23, performing feature fusion on the normalized feature components in a front-back series connection manner to obtain a series fusion feature vector.
Further: the method for generating the neural network of the stack self-encoder in the step S3 includes: and randomly initializing the parameters of the neural network model of the stack self-encoder, and adjusting the network parameters of the neural network model of the self-encoder based on a self-supervision mechanism of the neural network of the self-encoder and according to the series fusion feature vector until a convergence condition is met.
Further: the method for generating the SVM two-classifier in step S4 includes: adding an SVM two-classifier at the topmost coding layer of the self-encoder neural network, and finely adjusting parameters of the SVM two-classifier by using a training sample class label until a convergence condition is met.
The invention has the following beneficial effects: the ADS-B weak signal detection device and method based on multi-feature fusion of the embodiments of the invention move beyond existing signal detection systems, which rely mainly on the energy features of the signal and on conventional threshold detection, and instead exploit the multi-feature information contained in the received signal, thereby mitigating the reduction in detection performance caused by the weakening of target signal state information under strong noise interference.
Compared with the prior art, the invention has the following advantages:
1. The invention adopts a multi-feature fusion method to extract the multi-feature information of the received signal and reduces the similarity between the target signal and noise by increasing the feature dimension, thereby improving the detection capability of the system for weak ADS-B signals;
2. The extracted multi-features undergo a second stage of fusion: the stack self-encoder network reduces the feature dimension to remove redundant information among the features and extracts abstract features, difficult to obtain by manual design, that express the essence of the signal at a high level;
3. By building a hardware signal receiving unit, the invention can make full use of abundant measured ADS-B signal resources, effectively addressing the problems of small signal-detection sample sizes and the low reliability of simulated signals.
Drawings
FIG. 1 is a schematic view of the apparatus of the present invention;
FIG. 2 is a schematic block diagram of the apparatus of the present invention;
FIG. 3 is a flow chart of a method of the present invention;
FIG. 4 is a diagram illustrating a neural network structure of a stacked self-encoder according to the present invention;
FIG. 5 is a schematic diagram of a high-level semantic feature extraction model for signals used in the present invention;
FIG. 6 is a schematic diagram comparing detection performance on simulated data for the method provided by the present invention;
FIG. 7 is a schematic diagram comparing detection results on measured data for the method provided by the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments; various changes that do not depart from the spirit and scope of the invention as defined in the appended claims will be apparent to those skilled in the art, and all matter produced using the inventive concept is protected.
As shown in fig. 1, an ADS-B weak signal detection apparatus based on multi-feature fusion includes a signal acquisition module, a feature extraction module, and a signal detection module, which are connected in sequence;
the signal acquisition module is used for acquiring a periodic broadcast ADS-B pulse signal to be detected and dividing the periodic broadcast ADS-B pulse signal into a signal section and a noise section;
the feature extraction module is used for extracting multi-feature components of the signal section and the noise section, standardizing the multi-feature components, realizing feature fusion by serial concatenation, packaging the serially fused feature vectors together with the sample label information into UDP (User Datagram Protocol) data packets, and uploading the packets to an upper computer;
the signal detection module is used for receiving network and classifier model parameters sent by an upper computer, and performing binary classification judgment on an input target signal based on a stack self-encoder and SVM binary classifier model generated by training, so that ADS-B weak signal detection is realized.
As shown in fig. 2, the signal acquisition module is configured to acquire a sample set of periodically broadcast 1090 MHz ADS-B pulse signals and to divide the ADS-B pulse signal into a signal segment and a noise segment. It comprises a radio-frequency front-end circuit for receiving, amplifying, mixing and filtering the ADS-B signal, and an intermediate-frequency circuit for sampling and digitally down-converting the 140 MHz intermediate-frequency signal.
Specifically, the radio frequency circuit comprises a receiving antenna, a low noise amplifier, a mixer and a band-pass filter which are connected in sequence; the intermediate frequency circuit comprises an intermediate frequency amplifier, an A/D chip and a digital down-conversion circuit which are connected in sequence; the A/D chip is an AD9655 chip, and the sampling rate is set to be 112 MHz.
The characteristic extraction module is used for carrying out multi-characteristic extraction on the digital zero intermediate frequency signal, and the extracted multi-characteristics comprise time domain AR coefficient characteristics, frequency domain energy convergence point characteristics, time-frequency domain Renyi entropy characteristics, pseudo Zernike moment characteristics and bispectrum characteristics; and respectively standardizing the extracted characteristic components, and performing series characteristic fusion to obtain a fused characteristic vector.
The feature extraction module comprises an FPGA processing chip, a Xilinx Zynq7100, which contains a processing system (PS) part and a programmable logic (PL) part: the PS part handles sampling-rate setting, network communication control and similar tasks, while the PL part implements digital down-conversion, extraction of the signal multi-feature components, feature standardization, serial feature fusion and related operations.
The signal detection module is used for receiving network and classifier parameters sent by an upper computer, and performing signal or noise binary classification judgment on a target signal sent by the feature extraction module based on a stack self-encoder and an SVM model generated by training, so that ADS-B weak signal detection is realized.
The signal detection module comprises a Nor Flash off-chip storage chip and an FPGA chip. The Nor Flash off-chip storage chip is a Micron N25Q0128A13ES and stores the parameters of the stack self-encoder network and the SVM classifier issued by the upper computer. The FPGA chip is a Xilinx XC7VX690T; it controls the network interface and receives control instructions issued by the upper computer, and during operation it loads the target-signal serial fusion features sent by the feature extraction module together with the network and classifier parameters stored in the Nor Flash off-chip storage chip, realizing judgment and detection of the input signal through forward network operation.
The stack self-encoder network and the SVM classifier are generated by training a large amount of sample data based on a BP algorithm in advance, and have good high-level semantic feature extraction capability and signal noise judgment classification capability.
The ADS-B weak signal detection device based on multi-feature fusion further comprises an upper computer signal high-level semantic feature extraction model and an SVM classifier generation module, wherein the upper computer signal high-level semantic feature extraction model and SVM classifier generation module comprise:
the training set generating unit is used for receiving an ADS-B pulse signal series fusion characteristic training set sent by a UDP packet, wherein the training set comprises a plurality of ADS-B signal segment and noise segment training samples, and label information is manually marked on the training samples;
the model initialization unit is used for randomly initializing model parameters of the stack self-encoder and the SVM two-classifier;
the model training unit is used for adjusting the network parameters of the stack self-encoder according to the series fusion feature training set, so that the input feature vector is reconstructed by using the low-dimensional high-level semantic features until a preset convergence condition is met; and adding an SVM classifier at the top coding layer of the self-encoder, and finely adjusting classification parameters by using label information until a convergence condition is met.
As shown in fig. 3, an ADS-B weak signal detection method based on multi-feature fusion includes the following steps:
s1, acquiring a periodically broadcasted ADS-B pulse signal sample set through a signal acquisition module, and dividing the ADS-B pulse signal into a signal section and a noise section;
The acquired 1090 MHz ADS-B signal set is filtered, amplified and down-converted to obtain a 140 MHz intermediate-frequency signal; the intermediate-frequency signal is sampled at 112 MHz and digitally down-converted to obtain a zero intermediate-frequency signal; and the zero intermediate-frequency signal is divided into a signal section containing both signal and noise and a noise section containing only noise.
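As an illustration of this acquisition step, the sketch below mixes the sampled intermediate-frequency sequence down to zero intermediate frequency and splits it into segments. It is a minimal NumPy/SciPy sketch based only on the figures stated above: a 140 MHz analog IF sampled at 112 MHz aliases to 28 MHz (exactly fs/4), which is mixed to baseband and low-pass filtered. The filter design, decimation factor, and the envelope threshold used to separate the signal segment from the noise segment are illustrative assumptions and are not specified by the patent.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 112e6           # A/D sampling rate stated in the description
F_IF = 140e6         # analog intermediate frequency
F_ALIAS = abs(F_IF - FS)   # 140 MHz sampled at 112 MHz aliases to 28 MHz = FS/4

def digital_down_convert(x_if, decim=8):
    """Mix the real-valued IF samples down to a complex zero-IF baseband."""
    n = np.arange(len(x_if))
    nco = np.exp(-2j * np.pi * F_ALIAS / FS * n)              # fs/4 local oscillator
    mixed = x_if * nco                                         # shift 28 MHz -> 0 Hz
    lp = firwin(numtaps=127, cutoff=FS / (2 * decim), fs=FS)   # anti-alias low-pass
    baseband = lfilter(lp, 1.0, mixed)
    return baseband[::decim]                                   # decimated zero-IF signal

def split_signal_noise(zero_if, threshold):
    """Illustrative split into a 'signal segment' (envelope above threshold) and a
    'noise segment' (envelope below threshold); the exact segmentation rule is not
    stated in the patent, so this threshold test is hypothetical."""
    env = np.abs(zero_if)
    return zero_if[env >= threshold], zero_if[env < threshold]
```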
S2, extracting multiple characteristic components of the ADS-B pulse signal sample set signal segment and the noise segment through a characteristic extraction module, standardizing the characteristic components, and performing series characteristic fusion to obtain series fusion characteristic vectors;
the specific steps of step S2 are:
s21, respectively extracting multi-feature components for carrying out multi-dimensional representation on the signal section and the noise section;
the multi-feature components comprise time-domain AR coefficient features, frequency-domain energy convergence point features, time-frequency image Renyi entropy features, time-frequency image pseudo-Zernike moment features and bispectrum features;
The time-domain AR coefficient features of the signal segment and the noise segment of the ADS-B signal set are calculated from the AR model of the signal, where r(i), i = 0, 1, ..., p, denotes the signal autocorrelation values and a_i denotes the p-order coefficient features of the time-domain AR model of the ADS-B signal set.
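The AR coefficient formula itself appears as an image in the original filing. A standard way to obtain the p-order AR coefficients a_i from the autocorrelation values r(i), i = 0, 1, ..., p, is to solve the Yule-Walker equations, as in the hedged sketch below; the model order p = 6 and the biased autocorrelation estimate are assumptions rather than values fixed by the patent.

```python
import numpy as np
from scipy.linalg import toeplitz

def autocorr(x, p):
    """Biased autocorrelation estimate r(0..p) of a (possibly complex) segment."""
    x = np.asarray(x)
    n = len(x)
    return np.array([np.sum(x[k:] * np.conj(x[:n - k])) / n for k in range(p + 1)])

def ar_coefficients(x, p=6):
    """Solve the Yule-Walker equations R a = r for the p-order AR coefficients,
    where R[j, k] = r(|j - k|) and the right-hand side is r(1..p)."""
    r = autocorr(x, p)
    R = toeplitz(r[:p])                  # Hermitian Toeplitz autocorrelation matrix
    # Complex-valued for complex baseband input; |a| or real/imag parts can be
    # used as the actual feature components.
    return np.linalg.solve(R, r[1:p + 1])
```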
The frequency-domain energy convergence point features of the signal segment and the noise segment of the ADS-B signal set are extracted as follows:
A1. Calculate the power spectrum s(ω) of the signal x(n);
A2. Given a scale d, slide a window with a fixed step over the power spectrum sequence s(ω), compute the windowed signal power P(ω_0, d) at each point ω_0, and compute its mean value P_a and maximum value P_m:
P_a = mean{P(ω_0, d)}
P_m = max{P(ω_0, d)}
A3. Given a level value λ, the λ-level energy convergence points of the signal power spectrum s(ω) are the points ω_0 satisfying
P(ω_0, d) ≥ (1 − f(λ))·P_a + f(λ)·P_m;
A4. Establish the frequency-domain feature quantity model from the signal energy at each λ-level energy convergence point.
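A minimal sketch of steps A1-A4 is given below. It assumes Welch's method for the power spectrum, a moving sum over d spectral bins for P(ω_0, d), and f(λ) = λ as a placeholder weighting; the patent does not define f(λ) or the exact form of the feature quantity model in the text, so those parts are illustrative.

```python
import numpy as np
from scipy.signal import welch

def energy_convergence_features(x, fs, d=8, lam=0.5):
    # A1: power spectrum s(w) of the segment x(n)
    _, S = welch(x, fs=fs, nperseg=256)
    # A2: windowed power P(w0, d) obtained by sliding a d-bin window over s(w)
    P = np.convolve(S, np.ones(d), mode="valid")
    P_a, P_m = P.mean(), P.max()
    # A3: lambda-level energy convergence points
    f_lam = lam                                   # placeholder for f(lambda)
    mask = P >= (1.0 - f_lam) * P_a + f_lam * P_m
    # A4: simple frequency-domain feature quantities built from the energy at the
    #     convergence points (count, energy ratio, mean energy) -- illustrative only
    conv_energy = P[mask]
    total = P.sum()
    return np.array([mask.sum(),
                     conv_energy.sum() / total if total > 0 else 0.0,
                     conv_energy.mean() if conv_energy.size else 0.0])
```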
Time-frequency transformation is performed on the ADS-B signal set to generate a two-dimensional time-frequency image, the time-frequency image is preprocessed, and the Renyi entropy features and pseudo-Zernike moment features of the preprocessed image are extracted. The specific steps are:
B1. Calculate the modified B distribution of the signal x(n) to obtain a two-dimensional time-frequency image;
B2. Express the brightness levels of the time-frequency image as gray values, convert the time-frequency image into a gray-scale image, and perform gray normalization and median filtering;
B3. Extract the 3rd-, 5th-, 7th-, 9th-, 11th- and 13th-order gray-image Renyi entropy features of the signal;
B4. Binarize the gray-scale image and extract the nine 1st- to 4th-order pseudo-Zernike moment features of the signal time-frequency distribution in the binary image.
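The sketch below illustrates steps B1-B3. Because an implementation of the modified B distribution is not given in the text, an ordinary spectrogram is used here as a stand-in time-frequency image; the Renyi entropy orders follow the ones listed above, while the window length and median-filter kernel are assumptions. Step B4 is only indicated, since the pseudo-Zernike moment computation is not specified by the patent.

```python
import numpy as np
from scipy.signal import spectrogram, medfilt2d

def renyi_entropy_features(x, fs, orders=(3, 5, 7, 9, 11, 13)):
    # B1: two-dimensional time-frequency image (spectrogram used here as a
    #     stand-in for the modified B distribution named in the text)
    _, _, tfr = spectrogram(x, fs=fs, nperseg=64, noverlap=48)
    # B2: map to a gray image, normalize to [0, 1], median-filter
    gray = tfr / (tfr.max() + 1e-12)
    gray = medfilt2d(gray.astype(np.float64), kernel_size=3)
    # Treat the normalized image as a 2-D probability distribution
    p = gray / (gray.sum() + 1e-12)
    # B3: Renyi entropy H_a = log2( sum p^a ) / (1 - a) for each order a
    return np.array([np.log2(np.sum(p ** a) + 1e-300) / (1.0 - a) for a in orders])

# B4 (not expanded here): binarize `gray` and compute the nine 1st- to 4th-order
# pseudo-Zernike moments of the binary time-frequency distribution, e.g. with a
# hypothetical helper pseudo_zernike_moments(binary_img, max_order=4).
```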
Bispectrum features of the signal segment and the noise segment of the ADS-B signal set are extracted, and feature dimension reduction is performed using the K-L transform:
C1. Take N segments of the same type of data within a given duration, compute the bispectrum estimate of each segment, and average the estimates to obtain the bispectrum of the data to be analyzed;
C2. Vectorize the obtained bispectrum data and reduce the dimension of the bispectrum vector using the K-L transform.
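A compact sketch of C1-C2 follows: the direct (FFT-based) bispectrum estimate averaged over N segments, followed by a K-L transform (eigendecomposition of the sample covariance) of the vectorized bispectra. The segment count, FFT size, the use of the bispectrum magnitude, and the number of retained components are illustrative choices, not values taken from the patent.

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct bispectrum estimate of one segment:
    B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2))."""
    X = np.fft.fft(x, nfft)
    idx = np.arange(nfft // 2)
    return X[idx, None] * X[None, idx] * np.conj(X[(idx[:, None] + idx[None, :]) % nfft])

def averaged_bispectrum(x, n_seg=8, nfft=64):
    # C1: split the record into N segments, estimate and average the bispectra
    # (magnitude used here for a real-valued feature map)
    segs = np.array_split(x[: n_seg * (len(x) // n_seg)], n_seg)
    return np.mean([np.abs(bispectrum(s, nfft)) for s in segs], axis=0)

def kl_reduce(bispec_vectors, n_components=16):
    # C2: K-L transform of the vectorized bispectrum samples (rows), keeping the
    # eigenvectors of the covariance matrix with the largest eigenvalues
    V = np.asarray(bispec_vectors)                # shape: (n_samples, n_features)
    Vc = V - V.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Vc, rowvar=False))
    basis = eigvec[:, np.argsort(eigval)[::-1][:n_components]]
    return Vc @ basis                             # reduced feature matrix

# Usage sketch: stack one flattened averaged bispectrum per sample, e.g.
# kl_reduce(np.stack([averaged_bispectrum(s).ravel() for s in samples]))
```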
S22, calculating the mean value and standard deviation of the multi-feature components, and normalizing the feature components;
and S23, performing feature fusion on the normalized feature components in a front-back series connection manner to obtain a series fusion feature vector.
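Steps S22-S23 amount to z-score standardization of each feature component followed by front-to-back concatenation; a minimal sketch is shown below, in which the mean and standard deviation of each component would in practice be computed from the training set.

```python
import numpy as np

def standardize(component, mean, std):
    """S22: z-score normalization of one feature component."""
    return (np.asarray(component) - mean) / (std + 1e-12)

def serial_fuse(components, stats):
    """S23: concatenate the standardized components into one fusion vector.
    `stats` is a list of (mean, std) pairs, one per component."""
    return np.concatenate([standardize(c, m, s) for c, (m, s) in zip(components, stats)])

# Example: fusion_vec = serial_fuse([ar_feats, energy_feats, renyi_feats,
#                                    zernike_feats, bispec_feats], stats)
```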
S3, the series fusion feature vectors are fed into the stack self-encoder neural network generated by offline training to extract the low-dimensional high-level semantic features of the signal;
the method for generating the stack self-encoder neural network by training a plurality of training sample data based on the BP algorithm comprises the following steps:
D1. acquiring an ADS-B pulse signal series fusion characteristic training set, wherein the training set comprises a plurality of ADS-B signal segments and noise segment training samples;
D2. carrying out random initialization on the neural network model parameters of the stack self-encoder;
D3. Adjust the network parameters of the self-encoder based on the self-supervision mechanism of the self-encoder neural network and according to the series fusion feature training set, so that the input feature vectors are reconstructed from the low-dimensional high-level semantic features, until a preset convergence condition is met.
As shown in fig. 4, the neural network structure of the stacked self-encoder can be divided into an encoder and a decoder, and when learning the input samples, the training target is to reconstruct the input samples according to the target expression, so that the target output is set as the input signal itself during training. And optimizing network parameters by minimizing the error between the input sample and the restored sample, wherein the output value of each encoder neuron is abstract characteristics of different levels of signals obtained after processing by the neural network.
The training samples of the stack self-encoder neural network have input signal x and output signal y; the weights of the network are iteratively adjusted and updated with the back-propagation (BP) algorithm so that y is as close to x as possible. In general, y is not an exact reconstruction of the input variable x but approximates x as closely as possible in terms of its probability distribution, so the objective of the self-encoder neural network can be expressed as minimizing the reconstruction error: min J(x, y),
where J is a cost function; classical cost functions such as cross-entropy or least squares can be selected, or the cost function can be customized to the specific situation.
As shown in fig. 5, after the training of the stack self-encoder neural network is completed, only the encoder is retained, yielding a network structure that realizes deep abstract feature extraction; its output vector is the low-dimensional high-level semantic feature of the signal.
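The following PyTorch sketch mirrors the structure described for Figs. 4-5: a stack self-encoder trained to reconstruct the serial fusion feature vector with a least-squares cost, after which only the encoder is retained as the high-level semantic feature extractor. The layer sizes, optimizer, learning rate and end-to-end (rather than layer-wise) training are assumptions; the patent fixes none of these details.

```python
import torch
import torch.nn as nn

class StackedAutoEncoder(nn.Module):
    def __init__(self, in_dim, hidden=(64, 32), code_dim=8):
        super().__init__()
        dims = (in_dim, *hidden, code_dim)
        enc, dec = [], []
        for a, b in zip(dims[:-1], dims[1:]):              # encoder: in_dim -> code_dim
            enc += [nn.Linear(a, b), nn.ReLU()]
        for a, b in zip(dims[::-1][:-1], dims[::-1][1:]):  # mirrored decoder
            dec += [nn.Linear(a, b), nn.ReLU()]
        dec[-1] = nn.Identity()                            # linear reconstruction output
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_sae(model, fusion_vectors, epochs=200, lr=1e-3):
    """D3: adjust the network so the low-dimensional code reconstructs the input."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                                 # J(x, y) chosen as least squares
    x = torch.as_tensor(fusion_vectors, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)                        # target output is the input itself
        loss.backward()
        opt.step()
    return model

# After training, only model.encoder is retained; its output is the
# low-dimensional high-level semantic feature used for detection.
```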
S4, judging the target signal by the signal detection module by using the low-dimensional high-level semantic features of the signal based on the SVM two classifiers generated by off-line training, and realizing the detection of the ADS-B weak signal.
The SVM two-classifier generated by offline training is obtained as follows:
E1. Obtain a serial fusion feature training set of ADS-B pulse signals, comprising a number of ADS-B signal segment and noise segment training samples, and manually label the training samples;
E2. adding an SVM classifier at the topmost coding layer of the stack self-encoder, and finely adjusting parameters of the SVM classifier by using a training sample class label until a convergence condition is met;
E3. and performing two-classification judgment on the target signal based on the trained SVM classifier to realize the ADS-B weak signal detection task.
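A hedged sketch of E1-E3 using scikit-learn's SVC on the encoder output is given below. The kernel and regularization parameters are illustrative, and the "fine adjustment with class labels" of the patent is reduced here to fitting the SVM on the labeled semantic features.

```python
import numpy as np
import torch
from sklearn.svm import SVC

def train_svm_detector(encoder, fusion_vectors, labels):
    """E1-E2: labeled fusion features -> retained encoder -> SVM binary classifier."""
    with torch.no_grad():
        codes = encoder(torch.as_tensor(fusion_vectors, dtype=torch.float32)).numpy()
    clf = SVC(kernel="rbf", C=1.0)            # illustrative hyper-parameters
    clf.fit(codes, labels)                     # labels: 1 = signal segment, 0 = noise segment
    return clf

def detect(encoder, clf, fusion_vector):
    """E3: binary signal/noise decision for one incoming fusion feature vector."""
    x = np.asarray(fusion_vector, dtype=np.float32)[None, :]
    with torch.no_grad():
        code = encoder(torch.as_tensor(x)).numpy()
    return int(clf.predict(code)[0])
```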
Monte Carlo simulation was carried out: simulated 1090 MHz Mode S ADS-B reply pulse signals were brought to digital zero intermediate frequency and then processed with the method of the invention and with the commonly used envelope detection method for comparison. With the false alarm rate fixed at 10^-3, the detection performance at different signal-to-noise ratios is shown in fig. 6: the detection rate of the envelope detection method is high when the signal-to-noise ratio is above 6 dB but drops rapidly once the signal-to-noise ratio falls below 3 dB, and is below 85% at 2 dB, whereas the detection rate of the proposed method remains above 95% even at 0 dB. The method can therefore effectively improve the detection of weak ADS-B signals under low signal-to-noise ratio conditions.
As shown in fig. 7, for 1090 MHz Mode S ADS-B signals actually received at a given time, a comparison test was carried out with the method of the invention and the commonly used envelope detection method. The figure shows that the method of the invention effectively improves the detection of weak ADS-B pulse signals while keeping a low false alarm rate, and that, based on the detection results, the estimation accuracy of the pulse time-domain parameters can also be improved.
Claims (10)
1. An ADS-B weak signal detection device based on multi-feature fusion is characterized by comprising a signal acquisition module, a feature extraction module and a signal detection module which are sequentially connected;
the signal acquisition module is used for acquiring a periodic broadcast ADS-B pulse signal to be detected and dividing the periodic broadcast ADS-B pulse signal into a signal section and a noise section;
the feature extraction module is used for extracting multi-feature components of the signal section and the noise section, standardizing the multi-feature components, realizing feature fusion by serial concatenation, packaging the serially fused feature vectors together with the sample label information into UDP (User Datagram Protocol) data packets, and uploading the packets to an upper computer;
the signal detection module is used for receiving network and classifier model parameters sent by an upper computer, and performing binary classification judgment on an input target signal based on a stack self-encoder and SVM binary classifier model generated by training, so that ADS-B weak signal detection is realized.
2. The ADS-B weak signal detection device based on multi-feature fusion of claim 1, wherein the signal acquisition module comprises a radio frequency front end circuit and an intermediate frequency circuit which are connected in sequence;
the radio frequency front-end circuit is used for receiving, amplifying, mixing and filtering ADS-B signals;
the intermediate frequency circuit is used for sampling and carrying out digital down-conversion processing on the intermediate frequency signal.
3. The ADS-B weak signal detection device based on multi-feature fusion of claim 1, wherein the feature extraction module comprises a first FPGA chip.
4. The ADS-B weak signal detection device based on multi-feature fusion of claim 1, wherein the signal detection module comprises a Nor Flash off-chip memory chip and a second FPGA chip; the Nor Flash off-chip memory chip is used for storing the stack self-encoder network and SVM classifier parameters issued by the upper computer; the second FPGA chip is used for controlling the network interface and receiving control instructions issued by the upper computer, and during operation loads the target-signal series fusion features sent by the feature extraction module and the network and classifier parameters stored in the Nor Flash off-chip memory chip, realizing judgment and detection of the input signal through forward network operation.
5. The ADS-B weak signal detection device based on multi-feature fusion of claim 1, wherein the upper computer comprises a high-level semantic feature extraction model and SVM classifier generation module, the high-level semantic feature extraction model and SVM classifier generation module comprising:
the training set generating unit is used for receiving an ADS-B pulse signal series fusion characteristic training set sent by a UDP packet, wherein the training set comprises a plurality of ADS-B signal segment and noise segment training samples, and label information is marked on the training samples;
the model initialization unit is used for carrying out random initialization on model parameters of the stack self-encoder and the SVM two-classifier;
and the model training unit is used for adjusting network parameters of the stack self-encoder by utilizing a series fusion feature training set based on the GPU of the upper computer, realizing the reconstruction of input feature vectors by utilizing low-dimensional high-level semantic features until a preset convergence condition is met, adding an SVM classifier in a top coding layer of the self-encoder, and finely adjusting classification parameters by utilizing label information until the convergence condition is met.
6. An ADS-B weak signal detection method based on multi-feature fusion is characterized by comprising the following steps:
s1, acquiring a periodically broadcasted ADS-B pulse signal sample set through a signal acquisition module, and dividing the ADS-B pulse signal into a signal section and a noise section;
s2, extracting multiple characteristic components of the ADS-B pulse signal sample set signal segment and the noise segment through a characteristic extraction module, standardizing the characteristic components, and performing series characteristic fusion to obtain series fusion characteristic vectors;
s3, the signal detection module extracts low-dimensional high-level semantic features of the signal from the series fusion feature vectors by using the stack self-encoder neural network generated by offline training;
s4, judging the target signal by the signal detection module by using the low-dimensional high-level semantic features of the signal based on the SVM two classifiers generated by off-line training, and realizing the detection of the ADS-B weak signal.
7. The ADS-B weak signal detection method based on multi-feature fusion of claim 6, wherein the ADS-B pulse signal division in step S1 comprises the following specific steps: the acquired ADS-B pulse signal sample set is filtered, amplified and down-converted to obtain an intermediate-frequency signal; the intermediate-frequency signal is sampled and digitally down-converted to obtain a zero intermediate-frequency signal; and the zero intermediate-frequency signal is divided into a signal section containing both signal and noise and a noise section containing only noise.
8. The ADS-B weak signal detection method based on multi-feature fusion according to claim 6, wherein the step S2 comprises the following steps:
s21, respectively extracting multi-feature components for carrying out multi-dimensional representation on the signal section and the noise section;
the multi-feature components comprise time-domain AR coefficient features, frequency-domain energy convergence point features, time-frequency image Renyi entropy features, time-frequency image pseudo-Zernike moment features and bispectrum features;
s22, calculating the mean value and standard deviation of the multi-feature components, and normalizing the feature components;
and S23, performing feature fusion on the normalized feature components in a front-back series connection manner to obtain a series fusion feature vector.
9. The ADS-B weak signal detection method based on multi-feature fusion of claim 6, wherein the generation method of the stacked self-encoder neural network in the step S3 is: and randomly initializing the parameters of the neural network model of the stack self-encoder, and adjusting the network parameters of the neural network model of the stack self-encoder based on a neural network self-supervision mechanism of the stack self-encoder and according to the series fusion feature vector until a convergence condition is met.
10. The ADS-B weak signal detection method based on multi-feature fusion of claim 6, wherein the generation method of the SVM two-classifier in step S4 is as follows: adding an SVM two-classifier at the topmost coding layer of the stack self-encoder neural network, and finely adjusting the parameters of the SVM two-classifier by using the training sample class labels until a convergence condition is met.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010629775.0A CN111951611A (en) | 2020-07-03 | 2020-07-03 | ADS-B weak signal detection device and method based on multi-feature fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010629775.0A CN111951611A (en) | 2020-07-03 | 2020-07-03 | ADS-B weak signal detection device and method based on multi-feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111951611A true CN111951611A (en) | 2020-11-17 |
Family
ID=73337022
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010629775.0A Pending CN111951611A (en) | 2020-07-03 | 2020-07-03 | ADS-B weak signal detection device and method based on multi-feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111951611A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112801065A (en) * | 2021-04-12 | 2021-05-14 | 中国空气动力研究与发展中心计算空气动力研究所 | Space-time multi-feature information-based passive sonar target detection method and device |
CN113344093A (en) * | 2021-06-21 | 2021-09-03 | 成都民航空管科技发展有限公司 | Multi-source ADS-B data abnormal time scale detection method and system |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102799757A (en) * | 2012-06-12 | 2012-11-28 | 哈尔滨工程大学 | Weak signal extraction method for removing interferences of strong trend term and transient-state pulse |
CN105738929A (en) * | 2016-03-02 | 2016-07-06 | 北京盈想东方科技发展有限公司 | Beido communication navigation integrated airborne terminal |
CN105894033A (en) * | 2016-04-01 | 2016-08-24 | 大连理工大学 | Weak target detection method and weak target detection system under background of sea clutter |
CN106230544A (en) * | 2016-07-27 | 2016-12-14 | 佛山科学技术学院 | The monitoring identification of a kind of automobile remote-control interference signal and localization method |
CN106504588A (en) * | 2016-10-25 | 2017-03-15 | 中国民航大学 | Based on Beidou II and the multi-platform low latitude domain monitoring system and method for mobile network |
US9773504B1 (en) * | 2007-05-22 | 2017-09-26 | Digimarc Corporation | Robust spectral encoding and decoding methods |
CN107578646A (en) * | 2017-08-28 | 2018-01-12 | 梁晓龙 | low slow small target detection monitoring management system and method |
CN108737030A (en) * | 2018-05-17 | 2018-11-02 | 中国电子科技集团公司第五十四研究所 | A kind of ADS-B signal muting sensitivity method of reseptances based on spaceborne scene |
CN108764331A (en) * | 2018-05-25 | 2018-11-06 | 哈尔滨工程大学 | Joint classification device multi signal Modulation Identification method based on Fourier Transform of Fractional Order |
CN108960417A (en) * | 2018-06-28 | 2018-12-07 | 广东技术师范学院 | A kind of high-effect processing method of wearable small-signal |
CN108985454A (en) * | 2018-06-28 | 2018-12-11 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Airline carriers of passengers individual goal recognition methods |
CN109117876A (en) * | 2018-07-26 | 2019-01-01 | 成都快眼科技有限公司 | A kind of dense small target deteection model building method, model and detection method |
CN109544555A (en) * | 2018-11-26 | 2019-03-29 | 陕西师范大学 | Fine cracks dividing method based on production confrontation network |
CN109583499A (en) * | 2018-11-30 | 2019-04-05 | 河海大学常州校区 | A kind of transmission line of electricity target context categorizing system based on unsupervised SDAE network |
CN109800700A (en) * | 2019-01-15 | 2019-05-24 | 哈尔滨工程大学 | A kind of underwater sound signal target classification identification method based on deep learning |
CN110109080A (en) * | 2019-05-29 | 2019-08-09 | 南京信息工程大学 | Method for detecting weak signals based on IA-SVM model |
CN110826630A (en) * | 2019-11-08 | 2020-02-21 | 哈尔滨工业大学 | Radar interference signal feature level fusion identification method based on deep convolutional neural network |
- 2020-07-03: CN application CN202010629775.0A, publication CN111951611A (status: Pending)
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9773504B1 (en) * | 2007-05-22 | 2017-09-26 | Digimarc Corporation | Robust spectral encoding and decoding methods |
CN102799757A (en) * | 2012-06-12 | 2012-11-28 | 哈尔滨工程大学 | Weak signal extraction method for removing interferences of strong trend term and transient-state pulse |
CN105738929A (en) * | 2016-03-02 | 2016-07-06 | 北京盈想东方科技发展有限公司 | Beido communication navigation integrated airborne terminal |
CN105894033A (en) * | 2016-04-01 | 2016-08-24 | 大连理工大学 | Weak target detection method and weak target detection system under background of sea clutter |
CN106230544A (en) * | 2016-07-27 | 2016-12-14 | 佛山科学技术学院 | The monitoring identification of a kind of automobile remote-control interference signal and localization method |
CN106504588A (en) * | 2016-10-25 | 2017-03-15 | 中国民航大学 | Based on Beidou II and the multi-platform low latitude domain monitoring system and method for mobile network |
CN107578646A (en) * | 2017-08-28 | 2018-01-12 | 梁晓龙 | low slow small target detection monitoring management system and method |
CN108737030A (en) * | 2018-05-17 | 2018-11-02 | 中国电子科技集团公司第五十四研究所 | A kind of ADS-B signal muting sensitivity method of reseptances based on spaceborne scene |
CN108764331A (en) * | 2018-05-25 | 2018-11-06 | 哈尔滨工程大学 | Joint classification device multi signal Modulation Identification method based on Fourier Transform of Fractional Order |
CN108960417A (en) * | 2018-06-28 | 2018-12-07 | 广东技术师范学院 | A kind of high-effect processing method of wearable small-signal |
CN108985454A (en) * | 2018-06-28 | 2018-12-11 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Airline carriers of passengers individual goal recognition methods |
CN109117876A (en) * | 2018-07-26 | 2019-01-01 | 成都快眼科技有限公司 | A kind of dense small target deteection model building method, model and detection method |
CN109544555A (en) * | 2018-11-26 | 2019-03-29 | 陕西师范大学 | Fine cracks dividing method based on production confrontation network |
CN109583499A (en) * | 2018-11-30 | 2019-04-05 | 河海大学常州校区 | A kind of transmission line of electricity target context categorizing system based on unsupervised SDAE network |
CN109800700A (en) * | 2019-01-15 | 2019-05-24 | 哈尔滨工程大学 | A kind of underwater sound signal target classification identification method based on deep learning |
CN110109080A (en) * | 2019-05-29 | 2019-08-09 | 南京信息工程大学 | Method for detecting weak signals based on IA-SVM model |
CN110826630A (en) * | 2019-11-08 | 2020-02-21 | 哈尔滨工业大学 | Radar interference signal feature level fusion identification method based on deep convolutional neural network |
Non-Patent Citations (26)
Title |
---|
HONG YANG et al.: "A method of weak signal chaotic detection based on Labview", 2012 International Conference on Computer Science and Information Processing (CSIP) * |
KELING FEI: "Automatic Detection of Conversion Blindness on Functional Brain Network Information", 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) * |
SUMIT KUMAR et al.: "Weak signal detection from noisy signal using stochastic resonance with particle swarm optimization technique", 2017 International Conference on Noise and Fluctuations (ICNF) * |
Wu Xiaodan: "Design of an ADS-B spatial information acquisition system", Digital Technology & Application * |
Sun Jiajia et al.: "Research on fusion recognition of ballistic targets based on one-dimensional range profile sequences", Journal of Microwaves * |
Cui Guangzhao et al.: "Denoising and clustering analysis of gene expression data based on wavelet transform", Journal of Signal Processing * |
Zhang Muqing et al.: "Recognition of low-probability-of-intercept radar signals based on deep learning and support vector machines", Science & Technology Review * |
Zhang Jing: "Research on target classification based on feature extraction", China Master's Theses Full-text Database (Information Science and Technology) * |
Cao Dong et al.: "Research on phase retrieval algorithms for turbulence-degraded images based on the fractional Fourier transform", Acta Aerodynamica Sinica * |
Li Zhaofei: "Research on fractal feature extraction and diagnosis methods for vibration faults", China Doctoral Dissertations Full-text Database (Engineering Science and Technology II) * |
Li Nan: "Research on Duffing oscillator detection methods for weak underwater target signals", China Doctoral Dissertations Full-text Database (Engineering Science and Technology II) * |
Li Xiukun et al.: "Blind separation of underwater target echo and reverberation in the fractional Fourier domain", Journal of Harbin Engineering University * |
Li Lujun et al.: "Ballistic missile target recognition method based on time-frequency distributions", Fire Control & Command Control * |
Du Jingyi: "An SVM method for detecting weak harmonic signals in a chaotic background", Chinese Journal of Scientific Instrument * |
Yang Xingyu: "Research on radar deception jamming signal recognition technology", China Master's Theses Full-text Database (Engineering Science and Technology II) * |
Yang Xingyu et al.: "Novel jamming recognition based on stacked sparse autoencoders", Modern Radar * |
Yang Shaoqi et al.: "Radar deception jamming recognition using bispectrum analysis and fractal dimension", Journal of Xi'an Jiaotong University * |
Jiang Zhihao: "Adaptive threshold detection algorithm for 1090ES ADS-B signals", Radio Communications Technology * |
Mou Yang: "Terrain classification of polarimetric SAR images based on sparse representation classifiers", China Master's Theses Full-text Database (Information Science and Technology) * |
Bai Hang et al.: "Radar emitter recognition based on Choi-Williams time-frequency image features", Journal of Data Acquisition and Processing * |
Luo Xiaoqing et al.: "Infrared dim small target detection using multiple features", Computer Engineering and Applications * |
Hu Weiwen et al.: "A target detection method based on energy convergence points", Journal of Wuhan University of Technology (Transportation Science & Engineering) * |
Hu Weiwen et al.: "Energy convergence point feature analysis and its application to weak target signal detection", Journal of Data Acquisition and Processing * |
Deng Bo: "Research on correlation filter tracking algorithms for infrared dim small targets based on deep learning", China Master's Theses Full-text Database (Information Science and Technology) * |
Jin Senlin: "Research on a weak signal measurement system based on compressed sensing", China Master's Theses Full-text Database (Information Science and Technology) * |
Yan Yan et al.: "Smart noise jamming recognition based on joint multi-feature processing", Radar Science and Technology * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112801065A (en) * | 2021-04-12 | 2021-05-14 | 中国空气动力研究与发展中心计算空气动力研究所 | Space-time multi-feature information-based passive sonar target detection method and device |
CN112801065B (en) * | 2021-04-12 | 2021-06-25 | 中国空气动力研究与发展中心计算空气动力研究所 | Space-time multi-feature information-based passive sonar target detection method and device |
CN113344093A (en) * | 2021-06-21 | 2021-09-03 | 成都民航空管科技发展有限公司 | Multi-source ADS-B data abnormal time scale detection method and system |
CN113344093B (en) * | 2021-06-21 | 2022-07-05 | 成都民航空管科技发展有限公司 | Multi-source ADS-B data abnormal time scale detection method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114201988B (en) | Satellite navigation composite interference signal identification method and system | |
Ozturk et al. | RF-based low-SNR classification of UAVs using convolutional neural networks | |
CN110088635B (en) | Cognitive signal processor, method and medium for denoising and blind source separation | |
US11468273B2 (en) | Systems and methods for detecting and classifying anomalous features in one-dimensional data | |
CN110532932B (en) | Method for identifying multi-component radar signal intra-pulse modulation mode | |
CN110109060A (en) | A kind of radar emitter signal method for separating and system based on deep learning network | |
CN113298846B (en) | Interference intelligent detection method based on time-frequency semantic perception | |
CN109543643B (en) | Carrier signal detection method based on one-dimensional full convolution neural network | |
CN116866129A (en) | Wireless communication signal detection method | |
CN112668498A (en) | Method, system, terminal and application for identifying individual intelligent increment of aerial radiation source | |
CN111951611A (en) | ADS-B weak signal detection device and method based on multi-feature fusion | |
US20190377063A1 (en) | Method and device for adaptively configuring threshold for object detection by means of radar | |
CN112749633B (en) | Separate and reconstructed individual radiation source identification method | |
Konan et al. | Machine learning techniques to detect and characterise whistler radio waves | |
Williams et al. | Maritime radar target detection using convolutional neural networks | |
Yin et al. | Co-channel multi-signal modulation classification based on convolution neural network | |
CN111046697A (en) | Adaptive modulation signal identification method based on fuzzy logic system | |
CN112859025B (en) | Radar signal modulation type classification method based on hybrid network | |
CN112801065B (en) | Space-time multi-feature information-based passive sonar target detection method and device | |
CN114936570A (en) | Interference signal intelligent identification method based on lightweight CNN network | |
Cutajar et al. | Track detection of high-velocity resident space objects in Low Earth Orbit | |
Al Mudhafar et al. | Image Noise Detection and Classification Based on Combination of Deep Wavelet and Machine Learning | |
Wu et al. | Radar small/mini target detection technology in strong clutter environment | |
Li et al. | RF-Based on Feature Fusion and Convolutional Neural Network Classification of UAVs | |
Fan et al. | Improving gravitational wave detection with 2d convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |