CN111914802A - Ionosphere return scattering propagation pattern identification method based on transfer learning

Info

Publication number: CN111914802A
Authority: CN (China)
Prior art keywords: data, training, mode, layer, domain
Legal status: Granted (Active)
Application number: CN202010827596.8A
Other languages: Chinese (zh)
Other versions: CN111914802B (en)
Inventors: 华彩成, 史军强, 李雪, 雍婷, 鲁转侠, 冯静, 娄鹏, 杨东升, 王俊江
Current Assignee: China Institute of Radio Wave Propagation CETC 22 Research Institute
Original Assignee: China Institute of Radio Wave Propagation CETC 22 Research Institute
Priority date: 2020-08-17
Filing date: 2020-08-17
Application filed by China Institute of Radio Wave Propagation CETC 22 Research Institute; priority to CN202010827596.8A
Publication of CN111914802A: 2020-11-10
Application granted; publication of CN111914802B: 2023-02-07

Classifications

    • G06F2218/08: Feature extraction (aspects of pattern recognition specially adapted for signal processing)
    • G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/217: Validation; performance evaluation; active pattern learning techniques
    • G06N3/045: Combinations of networks (neural network architectures)
    • G06N3/08: Learning methods (neural networks)
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses an ionospheric backscatter propagation pattern recognition method based on transfer learning, which comprises the following steps: step 1, preprocessing training data; step 2, constructing a deep convolutional network based on model migration; step 3, constructing a domain confusion network; and step 4, setting network training parameters. The method can accurately recognize the propagation mode in ionospheric backscatter ionization map data, so that the state of the ionosphere can be determined and rapid, effective support can be provided for short-wave equipment and information systems.

Description

Ionosphere return scattering propagation pattern identification method based on transfer learning
Technical Field
The invention relates to the field of interpretation and identification of ionospheric backscatter sounding ionization maps, and in particular to an ionospheric backscatter propagation pattern identification method based on transfer learning.
Background
Ionospheric backscatter sounding (backscatter for short) transmits high-frequency radio waves obliquely toward the ionosphere, which reflects them to the distant ground (or sea surface). Because of the uneven relief and non-uniform electrical characteristics of the ground (or sea surface), the waves are scattered in all directions; part of them is reflected by the ionosphere back along the original (or another possible) path to the transmitting site and received there, thereby enabling detection of the state of the ionosphere and the ground (or sea surface) as well as over-the-horizon target information.
The echo signals detected by backscatter sounding come mainly from backscattering off large areas of ground and sea surface and undergo two ionospheric reflections before reaching the receiver. The ionosphere comprises different layered structures, such as the sporadic E (Es) layer and the F layer; electromagnetic waves therefore travel in different propagation modes, and the energy distribution of the received backscatter echo differs accordingly. Because the ionosphere is random, dispersive, non-uniform and anisotropic, these properties are imprinted on the radio waves propagating through it, making the propagation mode of a backscatter ionization map difficult to interpret and recognize correctly.
Existing backscatter propagation-mode identification methods are affected by noise, interference and other signals during detection, and also exhibit the following shortcomings in accurate identification:
a) Pattern recognition on backscatter ionization maps is not time-efficient. Traditional backscatter ionization-map pattern recognition involves processing operations such as image sharpening, interference identification and elimination, singular-point identification and global denoising; the processing steps are numerous, the algorithms are complex, and real-time requirements cannot be met well;
b) The pattern-recognition accuracy of backscatter detection is insufficient. When identifying the propagation mode of a backscatter ionization map, the limitations of traditional image methods prevent the echo energy-distribution features and scattering-point distribution from being fully extracted, and the ionization map is affected by the echo signal-to-noise ratio, so the accuracy of backscatter ionization-map mode identification is difficult to improve.
Disclosure of Invention
The invention aims to solve the technical problem of providing an ionospheric return scattering propagation pattern recognition method based on transfer learning.
The invention adopts the following technical scheme:
A method for ionospheric backscatter propagation pattern recognition based on transfer learning, comprising the following steps:
step 1, preprocessing training data: the backscatter propagation modes are set to 4 types: an Es mode, an F mode, an Es+F mode, and a no-mode type, wherein the Es mode includes the daytime and night periods; the F mode includes the sunrise, daytime, sunset and night periods; the Es+F mode includes the daytime and night periods; and the no-mode type includes any period;
step 2, constructing a deep convolutional network based on model migration: the adopted deep convolutional neural network architecture has an N-layer structure, where N is an integer not less than 5, comprising M convolutional layers (M an integer, N > M) and N-M fully connected layers, with a fixed number of categories at the output end; all convolutional layers of the deep convolutional neural network model are frozen, and the fully connected back end of the original model is replaced with a new network structure that meets the feature requirements of backscatter mode recognition, comprising a fully connected layer, a softmax layer and a classification output layer, the softmax layer being given by:
E_j = e^{d_j} / Σ_{i=1}^{g} e^{d_i}
wherein g is the number of classes, d denotes the trainable parameters for the g classes, each d_i corresponding to a vector, j indexes the current output class, E_j is the probability output value for that class, and e is the base of the natural exponential; the classification output layer allows the number of output classes to be set freely rather than being fixed; the fully connected layer is configured according to the training data corresponding to the different backscatter modes;
step 3, constructing a domain confusion network: an adaptation layer is added on top of the newly added fully connected structure; it supervises the feature-extraction capability of the front-end convolutional layers with respect to the source domain and the ionospheric backscatter data domain. A poor recognition result during network training indicates that the learning and recognition capability of the network structure is weak and that the learned features cannot separate the source-domain data from the ionospheric backscatter data. The maximum mean discrepancy (MMD) is adopted as the measure between the source domain and the target domain, given by:
MMD(D_S, D_T) = ‖ (1/|D_S|) Σ_{d_S ∈ D_S} φ(d_S) - (1/|D_T|) Σ_{d_T ∈ D_T} φ(d_T) ‖_RKHS
wherein MMD denotes the maximum mean discrepancy between the data of the two domains after their features are mapped into the reproducing kernel Hilbert space (RKHS), φ(·) denotes the feature map associated with the kernel, D_S and D_T denote the sets of data samples of the two domains, S and T denote the data distributions of the source domain and the target domain, respectively, and d_S and d_T denote individual samples of the source domain and the target domain, respectively;
step 4, setting network training parameters: the ionospheric backscatter data samples are trained with an exponential-decay schedule in which the learning rate decreases exponentially as the number of training rounds increases; the initial learning rate is l_0, t is the training round, and l_r is the current learning rate, giving the learning-rate formula:
l_r = 0.95^t · l_0
the number of iterations for the ionospheric backscatter data is set to 500 and the batch size is set to 64.
Further, in the training process of step 4, a cross validation mode is adopted, 70% of data is randomly used for model training, and the remaining 30% of data is used for validation.
The invention has the beneficial effects that:
the ionosphere return scattering propagation pattern recognition method based on transfer learning disclosed by the invention can accurately perform pattern recognition on ionosphere return scattering ionization map data, so that the ionosphere state can be mastered, and important support is rapidly and effectively provided for a short-wave equipment information system.
Drawings
FIG. 1 is a schematic diagram of an ionospheric backscatter ionization map passing through the convolution and pooling operations of a DCNN;
FIG. 2 shows ionospheric backscatter sounding ionization maps under different propagation modes;
FIG. 3 is a schematic diagram of the fine tuning of a deep convolutional neural network;
FIG. 4 is a graph of the accuracy distribution during training.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A deep convolutional neural network (DCNN) is a multilayer deep-learning neural network model in which layers are locally connected (local receptive fields), unlike the full connections between layers of traditional neural network models; the output of each layer in a DCNN serves as the input of the next, and there is no feedback in the network. During feature extraction, feature maps are obtained through operations such as convolution and pooling; the convolution and pooling of an ionospheric backscatter ionization map is shown in FIG. 1. Thanks to its strong feature-extraction capability, the DCNN has quickly been applied to image problems such as object recognition and target classification.
Transfer learning means transplanting knowledge learned in one environment into a new environment to assist the learning task there. DCNN-based transfer learning mainly retrains a pre-trained DCNN model on the data set of the new target task, i.e., network fine-tuning. In a DCNN pre-trained on a large-scale image data set, the network structure before the classifier serves as a general feature extractor: a test image fed into the DCNN model produces a deep feature vector, similar to a SIFT (Scale-Invariant Feature Transform) feature, with extremely strong generalization capability (a feature-extraction sketch is given below).
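As an illustration of using the pre-trained structure before the classifier as a general feature extractor, the following Python sketch removes the classification head of a stand-in backbone (ResNet-18 in PyTorch) and extracts a deep feature vector for one image; the framework, backbone and input size are assumptions, not taken from the patent.

```python
import torch
from torchvision import models

# Pre-trained DCNN with its classifier removed, used as a feature extractor.
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()  # drop the classification head
extractor.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)   # stand-in for one ionization map
    feature_vector = extractor(image)     # deep feature vector, shape (1, 512)
```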
FIG. 2 shows ionospheric backscatter ionization maps under different propagation modes. The maps differ between modes and are subject to varying degrees of interference during signal propagation, which affects the mode-identification result in different ways. In addition, ionization maps formed from backscatter data containing no echo energy still contain some interference, so when the backscatter propagation-mode types are defined, such data are treated as a mode type of their own.
To date, no published literature on transfer-learning-based ionospheric backscatter detection pattern recognition has been found, either in China or abroad.
The echo signals received by sky-wave backscatter are reflected by the ionosphere twice and carry important ionospheric characteristic parameters and propagation-mode information; accurately extracting the propagation mode is therefore an important guarantee for short-wave equipment systems that rely on the ionosphere for communication, detection, reconnaissance and other applications. Because the ionosphere is time-varying and dispersive and backscatter ionization maps are diverse and variable, traditional methods adapt poorly to ionospheric changes, and the accuracy of backscatter-based pattern recognition remains low. Combining the characteristics of deep convolutional neural networks, the invention provides a backscatter ionization-map pattern recognition method based on transfer learning that recognizes the propagation-mode information of a backscatter ionization map more accurately.
Embodiment 1 discloses an ionospheric backscatter propagation pattern recognition method based on transfer learning. Several types of ionospheric backscatter propagation mode are defined, 10291 sets of data are collected from accumulated backscatter sounding samples by manual labeling, and the labeled sample data are used for network training and for testing the constructed network model. The method comprises the following steps:
step 1, preprocessing training data:
the ionosphere is a time-varying medium, so for backscatter detection, the coverage characteristics of echoes received at different times vary and have different characteristics, and therefore, preprocessing operation needs to be performed on data participating in training. In the returned scattering detection data, mode information of an ionosphere of a propagation medium is contained, read-out mode information can be seen from returned scattering image information, in order to ensure the effectiveness of a training result, a manual labeling mode is adopted to classify returned scattering ionization maps of different mode information, and in combination with the propagation characteristic of returned scattering, the types of returned scattering propagation modes are set to be 4 types, which are respectively: the electron emission device comprises an Es mode, an F mode, an Es + F mode and a non-mode, wherein the Es mode mainly appears at the daytime and night in summer generally, the intensity is different, the formed backscatter ionization map also has different forms, and the formed backscatter ionization map belongs to the Es mode; in the F mode, the common day time exists all the time, but the distribution of the F mode existing at the sunrise time and the distribution of the day time are different; the Es + F mode is mainly that when the Es mode appears, the intensity is not large, and the F mode is half-shielded, so, in the process of manual labeling, data classification under each mode needs to be discriminated, and specifically, the Es mode includes: day time, night time; the F mode includes: sunrise time period, daytime time period, sunset time period, night time period; the Es + F patterns include: day time, night time; the no mode includes: any period of time;
step 2, constructing a deep convolution network based on model migration:
Starting from a deep convolutional network model (with more than 5 network layers) that currently performs excellently in feature extraction, the adopted deep convolutional neural network architecture has an N-layer structure comprising M convolutional layers (N > M) and N-M fully connected layers, and its output end contains a fixed number of categories, for example 2000; its basic composition is shown in FIG. 3. Feature extraction by the convolutional layers is a common requirement of data analysis, so for ionospheric backscatter pattern recognition all convolutional layers of the deep convolutional neural network model are frozen, and the fully connected back end of the original model is replaced with a new structure that meets the feature requirements of backscatter mode recognition, comprising a fully connected layer, a softmax layer and a classification output layer. The softmax layer is given by:
E_j = e^{d_j} / Σ_{i=1}^{g} e^{d_i}
wherein g is the number of classes, d denotes the trainable parameters for the g classes, and each d_i corresponds to a vector; the classification output layer allows the number of output classes to be set freely rather than being fixed; the fully connected layer is configured according to the training data corresponding to the different backscatter modes (a model-migration sketch follows);
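The patent does not name a specific pre-trained backbone (the parameter names later in the text suggest MATLAB's Deep Learning Toolbox), so the following PyTorch sketch only illustrates the freeze-and-replace idea under that assumption, using ResNet-18 as a stand-in; the layer names and the choice of framework are not taken from the original text.

```python
import torch.nn as nn
from torchvision import models

NUM_MODES = 4  # Es, F, Es+F, no mode

# Load a network pre-trained on a large image data set and freeze every
# convolutional layer so that only the new classification head is trained.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the original fully connected back end with a new head sized for
# the four backscatter propagation modes: fully connected layer + softmax.
backbone.fc = nn.Sequential(
    nn.Linear(backbone.fc.in_features, NUM_MODES),
    nn.Softmax(dim=1),  # E_j = e^{d_j} / sum_{i=1..g} e^{d_i}
)
```

In practice the softmax is often folded into the loss (e.g. nn.CrossEntropyLoss expects raw logits); the explicit Softmax layer here simply mirrors the layer list named in the description.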
step 3, constructing a domain confusion network:
an Adaptation Layer (Adaptation Layer) is added on a newly added full-connection network structure, the Adaptation Layer can carry out source domain and ionosphere return scattering data domain supervision on the feature extraction capability of a front end convolution Layer in the network structure, if the recognition result of network training is poor, the Adaptation Layer can provide weak learning recognition capability of the network structure, and the learned features can not separate the data of a source domain from the data of an ionosphere return scattering data domain, so that the Adaptation Layer can better represent the data sensitivity in the learning field and provide help for data selection and network parameter setting.
The domain confusion network can independently examine how well the network distinguishes the training domain of the existing model from the ionospheric backscatter data; if this capability is poor, the features learned by the network are not sufficient to tell the two domains apart, which to some extent helps the network learn domain-insensitive feature representations.
The maximum mean discrepancy (MMD) is used to measure the discrepancy between the source domain and the target domain (an estimator sketch is given after the formula):
MMD(D_S, D_T) = ‖ (1/|D_S|) Σ_{d_S ∈ D_S} φ(d_S) - (1/|D_T|) Σ_{d_T ∈ D_T} φ(d_T) ‖_RKHS
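The description only states that MMD is the discrepancy measure; a minimal empirical estimator, assuming a Gaussian (RBF) kernel and a fixed bandwidth (both assumptions, not stated in the patent), might look like this:

```python
import torch

def mmd_rbf(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Empirical squared MMD between source- and target-domain feature
    batches of shape (n, d) and (m, d), using a Gaussian (RBF) kernel."""
    def kernel(a, b):
        dist_sq = torch.cdist(a, b) ** 2  # pairwise squared distances
        return torch.exp(-dist_sq / (2 * sigma ** 2))

    k_ss = kernel(source, source).mean()
    k_tt = kernel(target, target).mean()
    k_st = kernel(source, target).mean()
    return k_ss + k_tt - 2.0 * k_st
```

During training, this term, computed on the adaptation-layer features of source and ionospheric backscatter batches, can be added to the classification loss so that features which fail to confuse the two domains are penalized.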
step 4, setting network training parameters:
the setting of the network hyper-parameters has a great relationship with the training of the neural network, and the training data and the parameter setting are different.
The learning-rate parameter InitialLearnRate is a hyper-parameter that governs how the network weights are adjusted from the gradient of the loss function; the learning rate controls the speed of parameter updates. The hyper-parameters of the model must be reconfigured when training on the new data after transfer learning; to improve training efficiency, the initial learning rate InitialLearnRate can be reduced appropriately and then adjusted dynamically during training until an optimal value is reached.
The ionospheric backscatter data samples are trained with an exponential-decay schedule in which the learning rate decreases exponentially as the number of training rounds increases; with initial learning rate l_0 and training round t, the learning-rate formula is:
l_r = 0.95^t · l_0
the setting of the iteration times can be dynamically adjusted according to the convergence speed of the network, the time cost of network model training can be saved by reducing the iteration times, the model can be well converged by proper iteration times, and the iteration times for returning scattering data of the ionized layer is set to be 500 times through experiments.
The batch size (MiniBatchSize) is the number of samples fed into the model at each training step, and a larger batch allows the network model to converge faster. The batch size can be adjusted to the hardware configuration of the computer: the larger the MiniBatchSize, the faster the training and the higher the memory usage. For the ionospheric backscatter data and the training requirements, the batch size is set to 64 (a training-parameter sketch follows).
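Taking the values stated above (500 iterations, batch size 64, decay factor 0.95) and treating the initial learning rate l_0, the optimizer and the stand-in model as unspecified placeholders, the schedule l_r = 0.95^t · l_0 might be set up as follows:

```python
import torch
import torch.nn as nn

EPOCHS = 500        # iteration count stated in the description
BATCH_SIZE = 64     # MiniBatchSize stated in the description
INITIAL_LR = 1e-3   # l_0 is not given in the text; placeholder assumption

model = nn.Linear(512, 4)  # stand-in for the fine-tuned classification head
optimizer = torch.optim.SGD(model.parameters(), lr=INITIAL_LR)

# Exponential decay: l_r = 0.95**t * l_0, where t is the training round.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

for t in range(EPOCHS):
    # ... one training round over mini-batches of size BATCH_SIZE ...
    scheduler.step()  # learning rate becomes 0.95**(t + 1) * INITIAL_LR
```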
In the cross-validation scheme, 70% of the data are randomly used for model training and the remaining 30% for validation (a split sketch is given below). Over repeated training and validation runs, the recognition accuracy on the data samples averages 93.4%; the accuracy curve during training is shown in FIG. 4.
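A random 70/30 split of the 10291 labelled samples, as used above, can be sketched as follows; the seed and the index-based interface are illustrative assumptions.

```python
import numpy as np

def split_70_30(num_samples: int, seed: int = 0):
    """Randomly assign 70% of sample indices to training, 30% to validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_samples)
    cut = int(0.7 * num_samples)
    return idx[:cut], idx[cut:]

train_idx, val_idx = split_70_30(10291)  # 10291 labelled backscatter samples
```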

Claims (2)

1. An ionospheric return scattering propagation pattern recognition method based on transfer learning is characterized by comprising the following steps:
step 1, preprocessing training data: the backscatter propagation modes are set to 4 types: an Es mode, an F mode, an Es+F mode, and a no-mode type, wherein the Es mode includes the daytime and night periods; the F mode includes the sunrise, daytime, sunset and night periods; the Es+F mode includes the daytime and night periods; and the no-mode type includes any period;
step 2, constructing a deep convolutional network based on model migration: the adopted deep convolutional neural network architecture has an N-layer structure, where N is an integer not less than 5, comprising M convolutional layers (M an integer, N > M) and N-M fully connected layers, with a fixed number of categories at the output end; all convolutional layers of the deep convolutional neural network model are frozen, and the fully connected back end of the original model is replaced with a new network structure that meets the feature requirements of backscatter mode recognition, comprising a fully connected layer, a softmax layer and a classification output layer, the softmax layer being given by:
E_j = e^{d_j} / Σ_{i=1}^{g} e^{d_i}
wherein g is the number of classes, d denotes the trainable parameters for the g classes, each d_i corresponding to a vector, j indexes the current output class, E_j is the probability output value for that class, and e is the base of the natural exponential; the classification output layer allows the number of output classes to be set freely rather than being fixed; the fully connected layer is configured according to the training data corresponding to the different backscatter modes;
step 3, constructing a domain confusion network: an adaptation layer is added on top of the newly added fully connected structure; it supervises the feature-extraction capability of the front-end convolutional layers with respect to the source domain and the ionospheric backscatter data domain. A poor recognition result during network training indicates that the learning and recognition capability of the network structure is weak and that the learned features cannot separate the source-domain data from the ionospheric backscatter data. The maximum mean discrepancy (MMD) is adopted as the measure between the source domain and the target domain, given by:
MMD(D_S, D_T) = ‖ (1/|D_S|) Σ_{d_S ∈ D_S} φ(d_S) - (1/|D_T|) Σ_{d_T ∈ D_T} φ(d_T) ‖_RKHS
wherein MMD denotes the maximum mean discrepancy between the data of the two domains after their features are mapped into the reproducing kernel Hilbert space (RKHS), φ(·) denotes the feature map associated with the kernel, D_S and D_T denote the sets of data samples of the two domains, S and T denote the data distributions of the source domain and the target domain, respectively, and d_S and d_T denote individual samples of the source domain and the target domain, respectively;
step 4, setting network training parameters: the ionospheric backscatter data samples are trained with an exponential-decay schedule in which the learning rate decreases exponentially as the number of training rounds increases; the initial learning rate is l_0, t is the training round, and l_r is the current learning rate, giving the learning-rate formula:
l_r = 0.95^t · l_0
the number of iterations for the ionospheric backscatter data is set to 500 and the batch size is set to 64.
2. The ionospheric return-scatter propagation pattern recognition method based on transfer learning of claim 1, characterized in that: in the training process of step 4, a cross validation mode is adopted, 70% of data is randomly used for model training, and the rest 30% of data is used for validation.

Publications (2)

Publication Number   Publication Date
CN111914802A         2020-11-10
CN111914802B         2023-02-07

Family

Family ID: 73278209

Patent Citations (2)

* Cited by examiner, † Cited by third party
CN108508411A * (Tianjin University; priority 2018-03-22, published 2018-09-07): Passive radar external radiation source signal recognition method based on transfer learning
CN110263863A * (Nanjing Agricultural University; priority 2019-06-24, published 2019-09-20): Fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yan Meiyang et al., "Two-stream deep transfer learning with multi-source domain confusion", Journal of Image and Graphics (《中国图象图形学报》) *

Legal Events

Code   Description
PB01   Publication
SE01   Entry into force of request for substantive examination
GR01   Patent grant