CN109409442A - Convolutional neural networks model selection method in transfer learning - Google Patents

Convolutional neural networks model selection method in transfer learning

Info

Publication number
CN109409442A
Authority
CN
China
Prior art keywords
model
training
accuracy rate
pretest
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811396539.8A
Other languages
Chinese (zh)
Inventor
王秋然
柴聪聪
郭磊
张克乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201811396539.8A priority Critical patent/CN109409442A/en
Publication of CN109409442A publication Critical patent/CN109409442A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

Transfer learning (Transfer Learning, TL) uses models already trained in other fields to handle tasks in the field at hand, but the existing pool of network models is so large that choosing among them easily causes confusion and hinders completion of the task. The invention proposes a method for selecting a convolutional neural network (Convolutional Neural Networks, CNN) model in transfer learning. The main steps are as follows: Step 1, set the task objective and the primary reference indicators; Step 2, select pretest models according to the per-million-parameter accuracy of each CNN model on its original training set; Step 3, test the pretest models on the task target test set, obtain their per-million-parameter accuracy, and select a pre-training model; Step 4, fine-tune the pre-training model and then train it on the task target training set; Step 5, test the pre-trained model to see whether it meets the target. The invention can be widely applied in the field of image classification processing, such as low probability of intercept (Low Probability Intercept, LPI) radar image classification processing and medical condition classification.

Description

Convolutional neural networks model selection method in transfer learning
Technical field
The present invention relates to transfer learning in machine learning, and specifically to a method for selecting a convolutional neural network model.
Background art
Transfer learning (Transfer Learning, TL) uses models already trained in other fields to handle tasks in the field at hand. At present, many industries have begun to use transfer learning to solve problems. In biomedicine, Haijun Lei et al. identify HEp-2 cells through transfer learning; in transportation, Javad Abbasi Aghamaleki et al. identify noisy images of ground vehicles through transfer learning; in police work, Christian Galea et al. match suspects with portraits of related persons through transfer learning. However, the deep learning network models currently available are so numerous that confusion easily arises when choosing a training model, which hinders completing the task objective quickly and efficiently. Choosing an appropriate network model for each task objective therefore has important practical significance and application value for completing the task objective quickly and efficiently.
Convolutional neural networks (Convolutional Neural Network, CNN) have been widely used in image classification and achieve higher accuracy than conventional methods. A CNN is a multi-layer structure composed of stacked processing units, mainly convolutional layers, pooling layers and nonlinear transformations. A convolutional layer contains many convolution kernels, each of which extracts features from the input, so that a variety of different features can be extracted. A pooling layer screens the features, keeping the more representative ones, while reducing the dimensionality of the input and hence the complexity. The nonlinear transformation applies a nonlinear function to the input and changes the feature representation space. CNN architectures have developed from the early two-convolutional-layer LeNet structure to the series of state-of-the-art networks of recent years, each with its own outstanding characteristics.
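As an illustration only and not part of the patent, a minimal PyTorch sketch of these building blocks; the layer widths, the 224x224 input size and the 10-class head are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# Minimal illustration of the building blocks described above: convolution
# kernels for feature extraction, pooling to keep representative features and
# reduce dimensionality, and a nonlinearity to change the representation space.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),                                     # nonlinear transformation
            nn.MaxPool2d(2),                               # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 inputs

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

# A batch of four 224x224 RGB images produces one score per class.
logits = TinyCNN()(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 10])
```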
Existing CNN models such as AlexNet, VGG, Inception and ResNet were trained on the ImageNet training set, so their training data differ greatly from the target samples in many practical projects and the models cannot be used directly. The present invention proposes a convolutional neural network model selection method based on per-million-parameter accuracy: pre-training models that match the task characteristics are selected by comparing per-million-parameter accuracy on the original training dataset, the selected models are cut and then trained on the task target dataset, and the convolutional neural network model best suited to the task objective is selected by comparing per-million-parameter accuracy. The invention can quickly and efficiently select a model that completes the task objective and can be widely applied in the field of image classification processing, such as low probability of intercept (Low Probability Intercept, LPI) radar image classification processing and medical condition classification.
Summary of the invention
The problem to be solved by the present invention is the following: existing convolutional neural network models are numerous and were all trained on specific training datasets; when one needs to be applied to a particular task objective, choosing a network model is difficult, and comparing the models one by one costs a great deal of time, computing power and manpower, which hinders completing the task objective quickly and efficiently.
To solve the above problems, the invention provides the following technical scheme:
A first aspect of the present application provides a classification method based on selecting a convolutional neural network model, with the following specific steps:
Set the task objective and the primary reference indicators: accuracy, number of parameters, etc.;
Preliminarily choose pretest models according to the per-million-parameter accuracy of CNN models on their original training sets;
Test the pretest models on the task target test set, obtain their per-million-parameter accuracy, and select a pre-training model;
Fine-tune the pre-training model, then train it on the task target training set;
Test the pre-trained model to see whether it meets the target.
In the first aspect of the present application, the accuracy among the primary reference indicators refers to the probability that pictures are classified correctly on the particular task target dataset.
In the first aspect of the present application, the per-million-parameter accuracy refers to the ratio of the accuracy to the number of parameters, where the number of parameters is expressed in millions. The formula is as follows:

    per-million-parameter accuracy = accuracy / (number of parameters / 10^6)
In the first aspect of the present application, in one embodiment of the preliminary choice of pretest models,
a variety of CNN models trained on other datasets are used as the candidate set of pretest models; the accuracy is taken to be the picture-classification accuracy on the original training dataset; finally, the pretest models are selected by comparing the per-million-parameter accuracy of these models.
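For illustration only, a small Python sketch of this preliminary selection; the candidate names and figures are hypothetical placeholders, not values from the patent:

```python
# Hypothetical candidate set: (model name, accuracy on its original training
# dataset, number of parameters). All names and figures are placeholders.
candidates = [
    ("model_a", 0.57, 61_000_000),
    ("model_b", 0.73, 138_000_000),
    ("model_c", 0.72, 3_500_000),
]

def accuracy_per_million(accuracy: float, num_params: int) -> float:
    """Per-million-parameter accuracy: accuracy divided by parameters in millions."""
    return accuracy / (num_params / 1e6)

# Rank the candidates and keep the best ones as pretest models.
ranked = sorted(candidates, key=lambda m: accuracy_per_million(m[1], m[2]), reverse=True)
pretest_models = [name for name, _, _ in ranked[:2]]
print(pretest_models)  # ['model_c', 'model_a']
```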
In the first aspect of the present application, in one embodiment in which the pretest models are tested on the task target test set,
each pretest network structure is cut appropriately for the test carried out on the task target test set. The cutting mainly consists of: removing the last layer of the network while freezing all parameters in the network; adding a new fully connected layer, setting its number of neurons to the number of classes to be recognized, and adjusting only the weights of the newly added fully connected layer. The model is then tested on the task target test set, its per-million-parameter accuracy is obtained, and the pre-training model is selected by comparison.
In the first aspect of the present application, in one embodiment in which the pre-training model is selected according to the per-million-parameter accuracy of each model on the task target test set, the accuracy used in the per-million-parameter accuracy is the corrected accuracy.
In the first aspect of the present application, in one embodiment of fine-tuning the pre-training network model, only the parameters of the first layer of the network are frozen and the model is trained on the task target training set;
In the first aspect of the present application, in another embodiment of fine-tuning the pre-training network model, none of the network's parameters are frozen and the model is trained on the task target training set.
A second aspect of the present application provides an accuracy calculation method used when evaluating model performance, in which the accuracy is corrected: for classification problems with a known number of classes, the contribution that random guessing (using no method at all) makes to the accuracy must be removed when the accuracy is calculated.
In the second aspect of the present application, in one embodiment of the accuracy calculation method, the following formula can be used:

    corrected accuracy = network classification accuracy - random-guess accuracy

where the network classification accuracy is the accuracy obtained with the CNN network model on the classification task, and the random-guess accuracy is the accuracy obtained by guessing at random (for example, 10% for a 10-class task with equally likely classes). From the above technical solutions and experimental results, the embodiments of the present invention have the following advantage:
A network model better suited to the task objective can be found among the currently available CNN models, with higher accuracy, so that the task objective can be completed quickly and efficiently.
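For illustration, a short Python sketch of the corrected accuracy defined above and of how it can feed the per-million-parameter metric used when selecting the pre-training model; the class count and figures are hypothetical:

```python
def corrected_accuracy(network_accuracy: float, num_classes: int) -> float:
    """Remove the random-guess baseline, assuming equally likely classes."""
    return network_accuracy - 1.0 / num_classes

def selection_score(network_accuracy: float, num_classes: int, num_params: int) -> float:
    """Per-million-parameter corrected accuracy on the task target test set."""
    return corrected_accuracy(network_accuracy, num_classes) / (num_params / 1e6)

# Hypothetical 10-class task: a raw accuracy of 0.85 corrects to about 0.75,
# which is then divided by the parameter count in millions.
print(corrected_accuracy(0.85, 10))          # ~0.75
print(selection_score(0.85, 10, 3_500_000))  # ~0.214
```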
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the network model selection flow chart of the present invention;
Fig. 2 is the flow chart for selecting a CNN model for LPI radar waveform recognition in the embodiment of the present invention;
Fig. 3 is the flow chart of pretest model testing and pre-training model selection in the present example;
Fig. 4 shows the time-frequency image of each radar signal after PWVD processing in the noise-free case of the present example;
Fig. 5 shows the classification accuracy and fluctuation of the 10 LPI signals of the present example under different models;
Fig. 6 compares the performance of MobileNetV2 and kongnet in the present example.
Specific embodiment
The embodiments of the present invention are described in detail below. The present embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation methods and specific operation processes are given, but the protection scope of the present invention is not limited to the following embodiment.
As shown in Fig. 2, the main steps of the present embodiment are: first, set the task objective: classify 10 kinds of low probability of intercept (Low Probability Intercept, LPI) radar signals with a classification accuracy of no less than 95%; second, select 5 pretest models from the models provided by MXNet by comparing their per-million-parameter accuracy on the ImageNet training set; third, test the 5 pretest models on the LPI radar test set, obtain their per-million-parameter accuracy, and select the pre-training model; fourth, fine-tune the pre-training model and then train it on the LPI radar training set; fifth, test the pre-trained model on the LPI radar test set to see whether it meets the expected target. The specific implementation steps are as follows:
Step 1: set the task objective: classify 10 kinds of low probability of intercept (Low Probability Intercept, LPI) radar signals with a classification accuracy of no less than 95%:
Step 1.1: generate raw data for 10 kinds of LPI radar signals (BPSK, FMCW, P1, P2, P3, P4, T1, T2, T3, T4) at 5 signal-to-noise ratios: -10 dB, -8 dB, -6 dB, -4 dB and -2 dB. For each signal-to-noise ratio there are 1700 samples of each signal; for each signal, 70% of the samples are randomly selected for training and 30% for testing.
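A minimal Python sketch of the per-signal 70/30 split described in step 1.1 (representing each sample by an (snr, signal, index) tuple is an assumption made only for illustration):

```python
import random

# Minimal sketch of the 70/30 split of step 1.1. In practice each tuple would
# point at the corresponding raw LPI waveform sample.
SNRS_DB = [-10, -8, -6, -4, -2]
SIGNALS = ["BPSK", "FMCW", "P1", "P2", "P3", "P4", "T1", "T2", "T3", "T4"]
SAMPLES_PER_SIGNAL = 1700

rng = random.Random(0)
train_set, test_set = [], []
for snr in SNRS_DB:
    for sig in SIGNALS:
        samples = [(snr, sig, i) for i in range(SAMPLES_PER_SIGNAL)]
        rng.shuffle(samples)                 # random selection per signal
        split = len(samples) * 7 // 10       # 70% for training
        train_set.extend(samples[:split])
        test_set.extend(samples[split:])     # remaining 30% for testing

print(len(train_set), len(test_set))         # 59500 25500
```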
Step 2: select 5 pretest models from the models provided by MXNet by comparing their per-million-parameter accuracy on the ImageNet training set:
Step 2.1: by comparing the per-million-parameter accuracy on the ImageNet dataset of the CNN models in MXNet that are applicable to signal-waveform classification, the 5 network structures AlexNet, VGGNet-16, Inception v3, ResNet-50V2 and MobileNetV2-1.0 are selected in this first round as pretest models; at the same time, the fine-tuned kongnet model is used as a reference model.
Step 3: test the 5 pretest models on the LPI radar test set, obtain their per-million-parameter accuracy, and select the pre-training model:
Step 3.1: process the raw data of the 10 radar signals at the 5 signal-to-noise ratios from step 1.1 with the PWVD method to obtain time-frequency images, as shown in Fig. 4, which shows the time-frequency image of each radar signal after PWVD processing in the noise-free case. The PWVD method is derived as follows:
The Wigner-Ville distribution (Wigner-Ville Distribution, WVD) is a three-dimensional function that describes the signal amplitude with time and frequency as its variables. The continuous one-dimensional WVD of a signal is:

    W_x(t, ω) = ∫ x(t + τ/2) x*(t - τ/2) e^{-jωτ} dτ    (1)

where x(t) is the original signal, t is the time variable, ω is the angular frequency, and * denotes conjugation. Formula (1) shows that the computation of the WVD is non-causal, so it cannot be used for practical WVD calculation. This limitation can be overcome by adding a window during the WVD analysis, which gives the pseudo Wigner-Ville distribution (Pseudo Wigner-Ville Distribution, PWVD). The PWVD analysis of a discrete signal is:

    W_x(l, ω) = 2 Σ_{n=-N+1}^{N-1} w(n) f_l(n) e^{-j2nω}    (2)

where w(n) is a real window function of length 2N-1 with w(0) = 1 and f_l(n) = x(l + n) x*(l - n) is the kernel function; writing the window as the symmetric product w(n) w(-n), the PWVD becomes:

    W_x(l, ω) = 2 Σ_{n=-N+1}^{N-1} w(n) w(-n) f_l(n) e^{-j2nω}    (3)

The choice of N (usually 2^k with k a positive integer) has a large influence on the amount of computation of the PWVD and on the time-frequency resolution. From formula (3), a large N gives high time-frequency resolution and therefore a smoother result; N = 1024 is used in this example.
Step 3.2: after the PWVD transform, the data are adapted and mapped: the time-frequency image is first copied into each of the RGB channels to form a three-channel image. Then, according to the input requirements of the pre-training models, each of the three RGB channels is normalized to the interval [0, 1] and the image is resized as each channel requires. Except for Inception-v3, which requires an input image dimension of 3*299*299, the other models require an input image dimension of 3*224*224;
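For illustration, a simplified numpy sketch of the PWVD of step 3.1 and the start of the image preparation of step 3.2; the Hamming window, the overall frequency scaling and the chirp test signal are assumptions rather than the patent's exact implementation:

```python
import numpy as np

def pwvd(x: np.ndarray, N: int = 128) -> np.ndarray:
    """Simplified pseudo Wigner-Ville distribution of a 1-D signal.

    For each time index l the lag kernel f_l(n) = x(l+n) * conj(x(l-n)) is
    formed for |n| < N, weighted by a real window with w(0) = 1, and an FFT is
    taken over the lag variable. The magnitude is used as the time-frequency image.
    """
    x = np.asarray(x, dtype=complex)
    L = len(x)
    win = np.hamming(2 * N - 1)              # real window of length 2N-1, equal to 1 at its centre
    tf = np.zeros((L, 2 * N), dtype=complex)
    for l in range(L):
        kernel = np.zeros(2 * N, dtype=complex)
        for n in range(-N + 1, N):
            if 0 <= l + n < L and 0 <= l - n < L:
                kernel[n % (2 * N)] = win[n + N - 1] * x[l + n] * np.conj(x[l - n])
        tf[l] = np.fft.fft(kernel)
    return np.abs(tf)

# A linear-FM chirp loosely stands in for one of the LPI waveforms of step 1.1.
t = np.arange(1024) / 1024.0
chirp = np.exp(1j * 2 * np.pi * (50.0 * t + 200.0 * t ** 2))
image = pwvd(chirp, N=128)

# Start of step 3.2: scale to [0, 1]; the single-channel image would then be
# copied into the three RGB channels and resized to 224x224 (or 299x299).
image = (image - image.min()) / (image.max() - image.min())
```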
Step 3.3: cut the 5 pretest models from step 2: remove the last layer of each network, add a new fully connected layer, set its number of neurons to the number of classes to be recognized, and at the same time freeze the parameter values in the original network, adjusting only the weights of the newly added fully connected layer;
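The embodiment takes its pretest models from MXNet's model zoo; the sketch below instead uses torchvision's MobileNetV2 (an assumption made purely for illustration, requiring torchvision >= 0.13) to show the cutting of step 3.3, i.e. freeze all original parameters and replace the last layer with a new fully connected layer sized to the number of classes:

```python
import torch
import torch.nn as nn
from torchvision import models

# MobileNetV2 stands in here for any of the five pretest models.
net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

# Freeze every parameter of the original network.
for p in net.parameters():
    p.requires_grad = False

# Replace the last layer with a new fully connected layer whose number of
# neurons equals the number of classes to recognize (10 LPI waveforms); only
# the weights of this new layer will be adjusted.
num_classes = 10
net.classifier[1] = nn.Linear(net.last_channel, num_classes)

# Pass only the new head's parameters to the optimizer.
optimizer = torch.optim.Adam(net.classifier[1].parameters(), lr=1e-3)
```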
Step 3.4: test the cut pretest models and the reference model on the LPI radar signal test sets at the different signal-to-noise ratios, repeating the test 20 times to obtain the accuracy and its fluctuation; the results are shown in Fig. 5, where the ordinate is the accuracy and the abscissa is the signal-to-noise ratio;
Step 3.5: observing and analysing Fig. 5 shows that, among all the pretest models, MobileNetV2 is second only to Inception-v3 in recognizing signal waveforms at extremely low signal-to-noise ratios and second only to AlexNet at the higher signal-to-noise ratios, while its fluctuation is small and its per-million-parameter accuracy is the highest of all the pretest models. Taking these factors together, MobileNetV2 is selected as the pre-training model.
The overall process is shown in Fig. 3.
Step 4: fine-tune the pre-training model, then train it on the LPI radar training set:
Step 4.1: fine-tune MobileNetV2: remove the last pooling layer of MobileNetV2 and freeze the convolution kernels of the first convolutional layer;
Step 4.2: train the fine-tuned MobileNetV2 on the LPI radar signal training set.
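A companion sketch for steps 4.1 and 4.2, again with torchvision's MobileNetV2 standing in for the MXNet model; removing the last pooling layer is not shown because the stand-in applies pooling inside its forward pass, and `train_loader` is an assumed DataLoader over the PWVD time-frequency images:

```python
import torch
import torch.nn as nn
from torchvision import models

# Step 4.1: freeze only the first convolutional block; everything else trains.
net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
net.classifier[1] = nn.Linear(net.last_channel, 10)
for p in net.features[0].parameters():
    p.requires_grad = False

# Step 4.2: train on the LPI radar training set.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD((p for p in net.parameters() if p.requires_grad),
                            lr=0.01, momentum=0.9)

def train_one_epoch(train_loader):
    net.train()
    for images, labels in train_loader:   # batches of (time-frequency image, label)
        optimizer.zero_grad()
        loss = criterion(net(images), labels)
        loss.backward()
        optimizer.step()
```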
Step 5: test the pre-trained model on the LPI radar test set to see whether it meets the expected target:
Step 5.1: test the trained MobileNetV2 and the reference model kongnet on the LPI radar signal test set; the test results are shown in Fig. 6;
Step 5.2: observing and analysing Fig. 6 shows that, for the LPI signals at -10 dB, the classification accuracy of MobileNetV2 is about 30% higher than that of kongnet, and for the BPSK signal in particular the classification result improves by nearly 40%; meanwhile, for signals with a signal-to-noise ratio higher than -8 dB, the recognition rate of MobileNetV2 is close to 100%. The overall result meets the expected target.

Claims (5)

1. A method for selecting a convolutional neural network (Convolutional Neural Networks, CNN) model in transfer learning, characterized by comprising:
setting the task objective and the primary reference indicators: accuracy, number of parameters, etc.;
preliminarily choosing pretest models according to the per-million-parameter accuracy of CNN models on their original training sets;
testing the pretest models on the task target test set, obtaining their per-million-parameter accuracy, and selecting a pre-training model;
fine-tuning the pre-training model, then training it on the task target training set;
testing the pre-trained model to see whether it meets the target.
2. The method according to claim 1, characterized in that:
the accuracy among the primary reference indicators refers to the probability that pictures are classified correctly on the particular task target dataset;
the per-million-parameter accuracy on the original training set refers to the ratio of the accuracy to the number of parameters, where per million refers to the unit of the number of parameters and the accuracy refers to the probability that pictures are classified correctly on the original training dataset;
choosing the pretest models means taking a variety of CNN models trained on other datasets as the candidate set of pretest models, taking the accuracy to be the picture-classification accuracy on the original training dataset, and selecting the pretest models by comparing the per-million-parameter accuracy of these models;
testing a pretest model on the task target test set requires cutting the model structure, which mainly comprises: removing the last layer of the network, adding a new fully connected layer whose number of neurons is set to the number of classes to be recognized, freezing the parameter values in the original network, and adjusting only the weights of the newly added fully connected layer; the model is then tested on the task target test set, its per-million-parameter accuracy is obtained, and the pre-training model is selected by comparison;
there are two kinds of fine-tuning of the pre-training model: freezing only the parameters of the first layer of the network and training on the task target training set, or freezing none of the network's parameters and training on the task target training set.
3. The method according to claim 2, characterized in that:
the accuracy used in the per-million-parameter accuracy by which the pre-training model is selected is the corrected accuracy.
4. The method according to claim 3, characterized in that:
when the accuracy is calculated, the contribution of random guessing made without using any method must be removed; the corrected accuracy is the difference between the network classification accuracy and the random-guess accuracy.
5. The method according to claim 4, characterized in that:
the network classification accuracy refers to the probability of correct classification obtained with the CNN network model when the classification task is carried out.
CN201811396539.8A 2018-11-21 2018-11-21 Convolutional neural networks model selection method in transfer learning Pending CN109409442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811396539.8A CN109409442A (en) 2018-11-21 2018-11-21 Convolutional neural networks model selection method in transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811396539.8A CN109409442A (en) 2018-11-21 2018-11-21 Convolutional neural networks model selection method in transfer learning

Publications (1)

Publication Number Publication Date
CN109409442A true CN109409442A (en) 2019-03-01

Family

ID=65474339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811396539.8A Pending CN109409442A (en) 2018-11-21 2018-11-21 Convolutional neural networks model selection method in transfer learning

Country Status (1)

Country Link
CN (1) CN109409442A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180114114A1 (en) * 2016-10-21 2018-04-26 Nvidia Corporation Systems and methods for pruning neural networks for resource efficient inference
CN107451661A (en) * 2017-06-29 2017-12-08 西安电子科技大学 A kind of neutral net transfer learning method based on virtual image data collection
CN107742061A (en) * 2017-09-19 2018-02-27 中山大学 A kind of prediction of protein-protein interaction mthods, systems and devices
CN108038471A (en) * 2017-12-27 2018-05-15 哈尔滨工程大学 A kind of underwater sound communication signal type Identification method based on depth learning technology
CN108647702A (en) * 2018-04-13 2018-10-12 湖南大学 A kind of extensive food materials image classification method based on transfer learning
CN108596138A (en) * 2018-05-03 2018-09-28 南京大学 A kind of face identification method based on migration hierarchical network
CN108818537A (en) * 2018-07-13 2018-11-16 南京工程学院 A kind of robot industry method for sorting based on cloud deep learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BONELEE: "Understanding transfer learning in one article: how to tackle deep learning with pre-trained models? Reusing the structure of neural networks", published online: HTTPS://WWW.CNBLOGS.COM/BONELEE/P/8921311.HTML *
DISHASHREE GUPTA: "Transfer learning and the art of using Pre-trained Models in Deep Learning", published online: HTTPS://WWW.ANALYTICSVIDHYA.COM/BLOG/2017/06/TRANSFER-LEARNING-THE-ART-OF-FINE-TUNING-A-PRE-TRAINED-MODEL/# *
HAIJUN LEI et al.: "A deeply supervised residual network for HEp-2 cell classification via cross-modal transfer learning", Pattern Recognition *
S. H. HASANPOUR et al.: "Let's keep it simple, Using simple architectures to outperform deeper and more complex architectures", published on arXiv: HTTPS://ARXIV.ORG/ABS/1608.06037 *
王秋然: "Modulation recognition method for low-SNR low-probability-of-intercept radar signals", China Master's Theses Full-text Database, Information Science and Technology *
龙满生 et al.: "Image recognition of Camellia oleifera diseases based on convolutional neural network and transfer learning", Transactions of the Chinese Society of Agricultural Engineering *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110220709A (en) * 2019-06-06 2019-09-10 北京科技大学 Fault Diagnosis of Roller Bearings based on CNN model and transfer learning
CN110220709B (en) * 2019-06-06 2020-04-21 北京科技大学 Rolling bearing fault diagnosis method based on CNN model and transfer learning
CN110457274A (en) * 2019-08-14 2019-11-15 北京思图场景数据科技服务有限公司 A kind of data file processing method based on transfer learning, device, equipment and computer storage medium
CN111582236A (en) * 2020-05-27 2020-08-25 哈尔滨工程大学 LPI radar signal classification method based on dense convolutional neural network
CN111582236B (en) * 2020-05-27 2022-08-02 哈尔滨工程大学 LPI radar signal classification method based on dense convolutional neural network
CN112070535A (en) * 2020-09-03 2020-12-11 常州微亿智造科技有限公司 Electric vehicle price prediction method and device
CN112434462A (en) * 2020-10-21 2021-03-02 华为技术有限公司 Model obtaining method and device
WO2022083624A1 (en) * 2020-10-21 2022-04-28 华为技术有限公司 Model acquisition method, and device
CN112434462B (en) * 2020-10-21 2024-07-09 华为技术有限公司 Method and equipment for obtaining model
CN114298278A (en) * 2021-12-28 2022-04-08 河北工业大学 Electric equipment performance prediction method based on pre-training model

Similar Documents

Publication Publication Date Title
CN109409442A (en) Convolutional neural networks model selection method in transfer learning
CN112052755B (en) Semantic convolution hyperspectral image classification method based on multipath attention mechanism
CN110533631B (en) SAR image change detection method based on pyramid pooling twin network
Dreissigacker et al. Deep-learning continuous gravitational waves
CN114937151B (en) Lightweight target detection method based on multiple receptive fields and attention feature pyramid
CN110321963A (en) Based on the hyperspectral image classification method for merging multiple dimensioned multidimensional sky spectrum signature
CN110361778B (en) Seismic data reconstruction method based on generation countermeasure network
CN111723701B (en) Underwater target identification method
CN108446312B (en) Optical remote sensing image retrieval method based on deep convolution semantic net
CN110728656A (en) Meta-learning-based no-reference image quality data processing method and intelligent terminal
CN104732240A (en) Hyperspectral image waveband selecting method applying neural network to carry out sensitivity analysis
CN103984746B (en) Based on the SAR image recognition methodss that semisupervised classification and region distance are estimated
CN112818777B (en) Remote sensing image target detection method based on dense connection and feature enhancement
CN116047427B (en) Small sample radar active interference identification method
CN111562597A (en) Beidou satellite navigation interference source identification method based on BP neural network
CN112288700A (en) Rail defect detection method
CN106097290A (en) SAR image change detection based on NMF image co-registration
CN118332443B (en) Pulsar data radio frequency interference detection method
CN115565019A (en) Single-channel high-resolution SAR image ground object classification method based on deep self-supervision generation countermeasure
CN116299684B (en) Novel microseismic classification method based on bimodal neurons in artificial neural network
CN117523394A (en) SAR vessel detection method based on aggregation characteristic enhancement network
CN109460788B (en) Hyperspectral image classification method based on low-rank-sparse information combination network
CN116973677A (en) Distribution network single-phase earth fault line selection method based on cavity convolution and attention mechanism
CN116884435A (en) Voice event detection method and device based on audio prompt learning
CN116797928A (en) SAR target increment classification method based on stability and plasticity of balance model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190301