CN111898591B - Modulation signal identification method based on pruning residual error network - Google Patents

Modulation signal identification method based on pruning residual error network Download PDF

Info

Publication number
CN111898591B
CN111898591B
Authority
CN
China
Prior art keywords
residual error
pruning
error network
gamma
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010885528.7A
Other languages
Chinese (zh)
Other versions
CN111898591A (en)
Inventor
纪衡
廖红舒
甘露
徐汪洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NORTH AUTOMATIC CONTROL TECHNOLOGY INSTITUTE
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202010885528.7A
Publication of CN111898591A
Application granted
Publication of CN111898591B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Digital Transmission Methods That Use Modulated Carrier Waves (AREA)

Abstract

The invention belongs to the technical field of modulation signal identification, and specifically relates to a modulation signal identification method based on a pruned residual network. The method acquires an input modulation signal, feeds it to a residual network for training, obtains the trained parameters of the deep residual convolutional layers, and extracts the Gamma parameters of all normalization layers in the modulation-recognition residual model. All Gamma parameters are sorted in ascending order, and a global pruning proportion for convolution-kernel channels is set. According to this pruning proportion, a global pruning threshold is determined within the ascending-sorted Gamma parameters; in all normalization layers, the Gamma parameters smaller than the global threshold, together with the corresponding convolution-kernel channels of the preceding layer, are deleted. Finally, the pruned model is fine-tuned with a small number of modulation-recognition samples, or retrained from scratch with all modulation-signal training samples. Compared with existing modulation-recognition models, the invention further reduces network parameters, compresses the model size, and greatly reduces the computation load and inference time of the model.

Description

Modulation signal identification method based on pruning residual error network
Technical Field
The invention belongs to the technical field of modulation signal identification, and specifically relates to a modulation signal identification method based on a pruned residual network.
Background
Modulation recognition of signals is an important component of spectrum monitoring, and plays an important role in military and civilian applications such as cognitive-radio spectrum sensing and battlefield signal interception. Compared with traditional manual analysis, judging signal attributes with a deep neural network parameterizes deep features more readily, is more efficient, and can identify more signal types.
Deep neural networks (such as deep residual networks) markedly improve modulation-recognition accuracy over traditional algorithms, but as the number of network layers grows the model becomes increasingly complex. Although deep learning performs excellently, a large and deep network model demands heavy computation; even with GPU acceleration, large-scale model parameters occupy a great deal of memory, so in practical applications the model remains constrained in time and space and can hardly meet real-time requirements.
Disclosure of Invention
The invention aims to compress the parameter count of a residual-network modulation-recognition model and reduce its computational complexity without affecting accuracy. In the prior art, when a modulation-recognition model is pruned, the convolution kernels of some convolutional layers are kept intact in order to guarantee channel matching. The core of the present method is to prune all convolution kernels in a channel-adaptive manner while still guaranteeing channel matching, so as to obtain a better post-pruning network structure.
To achieve this, the invention adopts the following technical scheme:
A modulation signal identification method based on a pruned residual network, characterized by comprising the following steps:
S1, preprocessing the modulation-signal samples to obtain equalized signal samples, and inputting the equalized signal samples into a deep residual network for training to obtain a trained deep residual network;
s2, based on the trained deep residual error network, in the normalization layer, the activation value z of the model inputinIs normalized to obtain
Figure BDA0002655456140000011
Then linearly changed, and the output activation value is zout
Figure BDA0002655456140000021
Wherein, muBIn order to input the mean value of the activation values,
Figure BDA0002655456140000022
and the variance is shown, epsilon is a minimum constant term, Gamma is a linear transformation training weight Gamma, and beta is a bias term. Acquiring training parameters Gamma (Gamma) of all normalization layers in a depth residual error network, and arranging the training parameters in an ascending order;
s3, setting a global pruning proportion c, and determining a threshold value according to the global pruning proportion and the result of the step S2
Figure BDA0002655456140000023
In all normalized layers, the deletion is less than the threshold
Figure BDA0002655456140000024
The training parameter gamma and the convolution kernel channel corresponding to the previous layer are obtained, and the residual error network after the convolution layer is removed is obtained;
s4, respectively calculating the mean value of the last convolutional layer channel in the residual block of each stage of the depth residual error network based on the residual error network after convolutional layers are removed, and respectively pruning the number of the last convolutional layer channel in the residual block of each stage of the depth residual error network to the mean value of the stage to obtain a pruned residual error network;
and S5, recognizing the modulation signal by adopting a pruning residual error network, comparing the average classification accuracy of the pruning residual error network and the trainable parameter total amount of the model with a set standard, if the average classification accuracy of the pruning residual error network and the trainable parameter total amount of the model do not reach the set standard, updating the global pruning proportion c, and returning to the step S2 until the global pruning proportion c reaches the set standard.
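As a concrete numeric illustration of the step-S2 transform, a minimal sketch with made-up numbers follows; in the actual network, γ and β are learned per channel.

```python
# Toy illustration of the step-S2 transform: normalize, then scale and shift.
# All numbers are made up; in the network, gamma and beta are learned per channel.
import numpy as np

z_in = np.array([0.5, 1.5, 2.0, 4.0])        # activations entering the BN layer
mu_B = z_in.mean()                            # batch mean
var_B = z_in.var()                            # batch variance (sigma_B squared)
eps = 1e-5                                    # very small constant term
z_hat = (z_in - mu_B) / np.sqrt(var_B + eps)  # normalized activation
gamma, beta = 0.8, 0.1                        # BN scale (Gamma) and bias term
z_out = gamma * z_hat + beta                  # output of the normalization layer
```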
The beneficial effects of the invention are:
1) The invention identifies modulation signals based on deep learning and improves the deep residual network;
2) A more accurate channel-importance evaluation is adopted: the training parameter Gamma serves as the importance criterion for compressing the modulation-recognition model, which is more precise than compression by manual experience;
3) Through channel adaptation, importance evaluation and pruning are applied to all convolutional-layer channels of the modulation-recognition model while channel matching is guaranteed, yielding a more compact model;
4) The pruning proportion can be adjusted dynamically, so the modulation-recognition model can be compressed to the desired range;
5) The pruning results are stable; repeated experiments show good reproducibility and stability.
Drawings
FIG. 1 is a flow chart of typical modulation-recognition network-model pruning in the prior art;
FIG. 2 is a schematic diagram of pruning a normalization layer and its corresponding convolutional layer in the modulation-recognition model;
FIG. 3 is a simplified diagram of the residual-block structure within one stage;
FIG. 4 is a graph of the accuracy over multiple prunings on the modulated-signal data set.
Detailed Description
The following description of the embodiments of the present invention refers to the accompanying drawings:
the main method of the invention is as follows: and acquiring an input modulation signal, and preprocessing the modulation signal to obtain a modulation identification sample. And inputting the sample into the residual error network training, and acquiring the training-completed deep residual error convolutional layer network parameters and normalization layer training parameters. And extracting Gamma parameters of all normalization layers in the modulation identification residual model, and performing ascending arrangement on all the Gamma parameters. And setting the global pruning proportion of the convolution kernel channel. According to the pruning proportion, determining a global threshold value of pruning in the Gamma parameters which are arranged in an ascending order; and deleting the Gamma parameter smaller than the global threshold and the convolution kernel channel corresponding to the previous layer in all normalization layers. The residual part of the modulation recognition model comprises 3 stages, and only the first residual block direct-connected channel of each stage comprises a convolutional layer. For each residual block in the same stage, the number of the last convolutional layer channel and the number of the direct-connected channel convolutional layer channels are subtracted to the average value of the number of channels after pruning in the stage. And finally, training the pruned model by using a small number of modulation recognition samples or training the sample from the beginning by using all modulation signals.
The specific steps of the channel-adaptive residual-network pruning method for modulation recognition are as follows:
a) Input the preprocessed modulation-signal samples into a residual network and train it to obtain a trained residual network.
Common digital modulation modes include 2ASK, 2FSK, 4FSK, MSK, BPSK, QPSK, OQPSK, 16QAM and 64QAM; here 8 signal types are used. The signal-to-noise ratio is set to 0 dB, the data volume of each signal type is 10000, and the total sample count is 80000, of which 60% serve as the training set, 20% as the validation set, and 20% as the test set.
The network model is built in the Python language with the TensorFlow deep-learning framework, and GPU computation is accelerated with an NVIDIA TITAN X (Pascal). The parameter settings are summarized in Table 1. The data sets of the 8 signal types are fed into the model for learning; the convolutional neural network is trained against a loss function, its parameters are corrected on the validation set, and the trained model is saved.
Table 1 Experimental parameter settings

Parameter type    Parameter value    Parameter type         Parameter value
Learning rate     0.001              Training set           48000
Epochs            200                Test set               16000
Optimizer         Adam               Validation set         16000
Batch size        64                 Activation function    ReLU
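For concreteness, a minimal tf.keras sketch of such a residual model follows. The layer widths, the block count, and the (128, 2) IQ input shape are illustrative assumptions; the patent does not fix these hyperparameters.

```python
# Minimal sketch of a 1-D residual modulation-recognition model in tf.keras.
# Layer widths, block counts and the (128, 2) IQ input shape are assumptions
# for illustration only.
import tensorflow as tf

def residual_block(x, filters):
    """BN -> ReLU -> Conv, twice, with a shortcut (pre-activation style)."""
    shortcut = x
    y = tf.keras.layers.BatchNormalization()(x)   # Gamma is the BN scale here
    y = tf.keras.layers.ReLU()(y)
    y = tf.keras.layers.Conv1D(filters, 3, padding="same")(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    y = tf.keras.layers.Conv1D(filters, 3, padding="same")(y)
    if shortcut.shape[-1] != filters:             # first block of a stage only:
        shortcut = tf.keras.layers.Conv1D(filters, 1, padding="same")(shortcut)
    return tf.keras.layers.Add()([y, shortcut])

inputs = tf.keras.Input(shape=(128, 2))           # IQ samples
x = tf.keras.layers.Conv1D(32, 3, padding="same")(inputs)
for filters in (32, 64, 128):                     # 3 stages, 2 blocks each
    for _ in range(2):
        x = residual_block(x, filters)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(8, activation="softmax")(x)  # 8 signal types

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),      # Table 1 settings
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```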
b) The convolutional layers of the trained residual-network model and the Gamma parameters of the adjacent normalization layers jointly determine the importance of that layer's channels.
Generally, a residual block consists, in sequence, of a normalization layer, an activation function, a convolutional layer and a shortcut connection; in the normalization layer each channel is transformed independently, and the information between channels is fused by the following convolutional layer.
Let the tensor Ẑ^l ∈ R^(C_l × H_l × W_l) be the activation that has been normalized, but not yet scaled, in the l-th normalization layer, where C_l, H_l and W_l denote the number of channels, the height and the width of the activation. The scaled activation is

Z^l_c = γ^l_c · Ẑ^l_c, c = 1, …, C_l,

where Ẑ^l_c is the unscaled activation of the c-th channel, γ^l_c is the Gamma scaling parameter of the c-th channel, and Z^l_c is the activation of the c-th channel after scaling by the normalization layer (for simplicity, the bias-term parameters β of the normalization layer and the convolutional layer are omitted).
The activation Z^l then passes through the activation-layer function ReLU:

Z_out = ReLU(Z^l),

where Z_out is the activation after the activation layer. Feature fusion through the next convolutional layer gives the next activation:

Z^(l+1) = Weight^(l+1) ⊛ Z_out ∈ R^(C_(l+1) × H_(l+1) × W_(l+1)),

where C_(l+1), H_(l+1) and W_(l+1) denote the number of channels, the height and the width of the next convolutional layer's activation, Weight^(l+1) is that layer's convolution kernel, and ⊛ denotes the convolution operation.
The c-th channel parameter Weight^(l+1)_c of the convolution kernel and the l-th layer Gamma parameter γ^l_c thus correspond channel by channel and jointly determine channel importance. In this method, the Gamma parameter vector of the first BN layer of a residual block is used to evaluate the importance of the last convolutional layer's channels in the previous residual block.
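As a sketch of this step, the Gamma (BN scale) vectors of every normalization layer can be collected and sorted ascending; `model` is assumed to be the trained tf.keras network from step a).

```python
# Sketch: collect the Gamma (BN scale) vectors of every normalization layer
# and sort all values ascending, giving the global channel-importance ranking.
# Assumes `model` is the trained tf.keras residual network from step a).
import numpy as np
import tensorflow as tf

gamma_per_layer = {}
for layer in model.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        gamma_per_layer[layer.name] = layer.gamma.numpy()  # one value per channel

all_gammas = np.sort(np.concatenate(list(gamma_per_layer.values())))
```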
c) Setting the global channel-pruning proportion to determine the channels to be pruned.
A global channel-pruning proportion c (for example 15%) is set; the Gamma parameters of all normalization layers in the network are extracted, assembled into a one-dimensional vector, and sorted ascending. In this vector, the Gamma parameters with larger values toward the rear, and their corresponding convolution-kernel channels, have higher importance; the Gamma parameters with smaller values toward the front, and their convolution-kernel channels, are parameters that are insensitive or redundant with respect to network performance.
In the ascending-sorted parameter vector, the Gamma value at proportion c of the vector length is found, i.e. the value below which the fraction of Gamma parameters relative to their total number is c, and this value is set as the global pruning threshold γ̂. The model is then traversed: for every normalization-layer channel whose Gamma satisfies γ^l_c < γ̂, the parameter γ^l_c and its corresponding convolution-kernel channel Weight^(l+1)_c are pruned, where γ^l_c is the c-th channel Gamma parameter of the normalization layer after the l-th convolutional layer. For each residual block, a Gamma parameter of its first normalization layer smaller than the threshold γ̂ prunes the corresponding convolution-kernel channel of the last convolutional layer of the previous residual block.
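A sketch of the threshold selection and the per-layer pruning masks follows; it reuses `gamma_per_layer` and `all_gammas` from the sketch in step b), and the name `keep_masks` is illustrative.

```python
# Sketch: global threshold at proportion c of the ascending Gammas, then a
# per-layer keep-mask; False marks a channel to be pruned.
c = 0.15                                            # global pruning proportion
threshold = all_gammas[int(c * len(all_gammas))]    # gamma-hat, the c-quantile

keep_masks = {name: g >= threshold
              for name, g in gamma_per_layer.items()}
```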
d) Calculating the per-stage channel mean of the residual network with the channel-adaptive method.
The residual network comprises 3 stages; only the first residual block of each stage has a shortcut containing a convolutional layer, while the shortcuts of the remaining residual blocks of the stage contain no convolutional layer. After pruning there may exist

Weight_(m,i) ≠ Weight_(m,j), i ≠ j, i, j = 0, 1, …, n,

i.e. mismatched channel counts within residual stage m (m = 1, 2, 3), where n is the number of residual blocks contained in one stage of the residual network, Weight_(m,0) denotes the convolution-kernel parameters of the shortcut of the first residual block of stage m, and Weight_(m,i) denotes the last convolution-kernel parameters of the i-th residual block. In this case the activation Z_in of a residual block's shortcut and the activation Z_out of its convolution path cannot perform the Add operation. As shown in fig. 3, the channel-adaptive method extracts the last-convolutional-layer channel count c_(m,i) = channel(Weight_(m,i)) of every residual block in each stage and computes the rounded-up mean of the channel counts,

c̄_m = ⌈(1/(n+1)) · Σ_(i=0..n) c_(m,i)⌉.
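A minimal sketch of the rounded-up stage mean, assuming the post-pruning channel counts c_(m,0), …, c_(m,n) of one stage have been collected in a list:

```python
# Sketch: rounded-up mean channel count for one stage, as in step d).
# `kept_counts` is assumed to hold the post-pruning channel counts of the
# shortcut kernel and of each block's last convolutional layer in the stage.
import math

def stage_channel_mean(kept_counts):
    return math.ceil(sum(kept_counts) / len(kept_counts))

stage_channel_mean([37, 41, 44])   # -> 41 with these example counts
```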
e) Clipping channel-mismatched convolution kernels.
With the per-stage convolution-kernel channel means c̄_m (m = 1, 2, 3) obtained in step d), the shortcut convolution kernel Weight_(m,0) of the first residual block of each stage and the last convolutional-layer kernels Weight_(m,i), i = 1, …, n, of all residual blocks in the stage (the convolutional layers indicated by dotted lines in fig. 3) have their channel counts clipped to the mean c̄_m of the same stage.
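One way to realise this clipping is sketched below; selecting the surviving channels by Gamma rank is an assumption consistent with the importance criterion of step b), not a detail stated in the text.

```python
# Sketch: clip a Conv1D kernel's output channels to the stage mean by keeping
# the channels with the largest Gamma values (an assumed selection rule).
import numpy as np

def clip_to_mean(kernel, gamma, mean_channels):
    """kernel: (width, in_ch, out_ch) Conv1D weights; gamma: (out_ch,) BN scales."""
    order = np.argsort(gamma)[::-1]           # channels, most important first
    keep = np.sort(order[:mean_channels])     # top channels, original order kept
    return kernel[:, :, keep], gamma[keep]
```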
f) After channel pruning is completed, the network is trained with a small number of modulation-signal samples or with the complete modulation-signal training set.
If the size of the modulation-recognition network model obtained after step f) does not meet the requirement, the global pruning proportion c is adjusted dynamically according to the required network size and the network is pruned again; this iterative pruning can be repeated until the model size meets the requirement.
g) Analyzing the accuracy of the modulation-recognition model on the test set. FIG. 4 shows how the recognition accuracy of the 8 modulation signals varies as the model parameters decrease: the accuracy of the original modulation-recognition model is 99.1%, and even when the model parameters are compressed to 20% of the original model, the accuracy still exceeds 98%.
The overall flow of the core algorithm (presented in the original as an algorithm table): train the residual network, sort all normalization-layer Gamma parameters ascending, prune the channels below the global threshold, equalize the channel counts within each stage by the channel-adaptive rule, retrain, and repeat with an updated proportion c until the target model size is reached.
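A self-contained toy run of the pruning arithmetic (steps b) to d)) on randomly generated Gamma vectors; the layer sizes, the proportion c, and the single-stage grouping are illustrative simplifications.

```python
# Toy run of the pruning arithmetic on fake Gamma vectors; layer sizes,
# the proportion c and the single-stage grouping are all simplifications.
import math
import numpy as np

rng = np.random.default_rng(0)
fake_gammas = {"bn1": rng.random(32), "bn2": rng.random(32),
               "bn3": rng.random(64), "bn4": rng.random(64)}

all_g = np.sort(np.concatenate(list(fake_gammas.values())))
c = 0.15
thr = all_g[int(c * len(all_g))]                       # global threshold

kept = {k: int((g >= thr).sum()) for k, g in fake_gammas.items()}
mean_ch = math.ceil(sum(kept.values()) / len(kept))    # channel-adaptive mean
print(f"threshold={thr:.3f}, kept per layer={kept}, stage mean={mean_ch}")
```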

Claims (1)

1. A modulation signal identification method based on a pruned residual network, characterized by comprising the following steps:
S1, preprocessing the modulation-signal samples to obtain equalized signal samples, and inputting the equalized signal samples into a deep residual network for training to obtain a trained deep residual network;
s2, based on the trained deep residual error network, in the normalization layer, the activation value z of the model inputinIs normalized to obtain
Figure FDA0003613400510000011
Then linear change is carried out, and the output activation value is zout
Figure FDA0003613400510000012
Wherein, muBIn order to input the mean value of the activation values,
Figure FDA0003613400510000013
obtaining training parameters Gamma of all normalization layers in a depth residual error network, and arranging the training parameters in ascending order, wherein epsilon is a constant term, Gamma is a linear transformation training weight Gamma, and beta is a bias term;
s3, setting a global pruning proportion c, and determining a threshold value according to the global pruning proportion and the result of the step S2
Figure FDA0003613400510000014
In all normalization layers, the erasures are less than the threshold
Figure FDA0003613400510000015
The training parameter gamma and the convolution kernel channel corresponding to the previous layer are used for obtaining the residual error network after the convolution layer is removed, and the method specifically comprises the following steps:
finding G with the vector quantity proportion of c in the training parameter vectors in ascending orderThe ama parameter, i.e. the ratio of the number of parameters smaller than the value to the total number of Gamma parameters, is c, and the Gamma parameter is set as the global pruning threshold
Figure FDA0003613400510000016
Traversing the model, and satisfying all Gamma channels of the normalization layer
Figure FDA0003613400510000017
Then the parameter is subtracted
Figure FDA0003613400510000018
And its corresponding convolution kernel channel
Figure FDA0003613400510000019
Wherein
Figure FDA00036134005100000110
For the c channel Gamma parameter of the normalization layer after the l convolution layer, the Gamma parameter smaller than the threshold value in the first normalization layer of each residual block
Figure FDA00036134005100000111
Subtracting the corresponding convolution kernel channel in the last convolution layer of the previous residual block
Figure FDA00036134005100000112
S4, based on the residual network with channels removed, calculating for each stage of the deep residual network the mean channel count of the residual blocks' last convolutional layers, and pruning the last convolutional layer of each residual block in a stage to that stage's mean, obtaining the pruned residual network;
S5, recognizing the modulation signals with the pruned residual network, and comparing the pruned network's average classification accuracy and total trainable parameter count with the set standards; if the standards are not met, updating the global pruning proportion c and returning to step S2 until the standards are met.
CN202010885528.7A 2020-08-28 2020-08-28 Modulation signal identification method based on pruning residual error network Active CN111898591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010885528.7A CN111898591B (en) 2020-08-28 2020-08-28 Modulation signal identification method based on pruning residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010885528.7A CN111898591B (en) 2020-08-28 2020-08-28 Modulation signal identification method based on pruning residual error network

Publications (2)

Publication Number Publication Date
CN111898591A CN111898591A (en) 2020-11-06
CN111898591B (en) 2022-06-24

Family

ID=73225844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010885528.7A Active CN111898591B (en) 2020-08-28 2020-08-28 Modulation signal identification method based on pruning residual error network

Country Status (1)

Country Link
CN (1) CN111898591B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101487B (en) * 2020-11-17 2021-07-16 深圳感臻科技有限公司 Compression method and device for fine-grained recognition model
CN112380872B (en) * 2020-11-27 2023-11-24 深圳市慧择时代科技有限公司 Method and device for determining emotion tendencies of target entity
CN113537452A (en) * 2021-02-25 2021-10-22 中国人民解放军战略支援部队航天工程大学 Automatic model compression method for communication signal modulation recognition
CN113408709B (en) * 2021-07-12 2023-04-07 浙江大学 Condition calculation method based on unit importance
CN116825088B (en) * 2023-08-25 2023-11-07 深圳市国硕宏电子有限公司 Conference voice detection method and system based on deep learning


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019033380A1 (en) * 2017-08-18 2019-02-21 Intel Corporation Slimming of neural networks in machine learning environments
US11875260B2 (en) * 2018-02-13 2024-01-16 Adobe Inc. Reducing architectural complexity of convolutional neural networks via channel pruning
US11488019B2 (en) * 2018-06-03 2022-11-01 Kneron (Taiwan) Co., Ltd. Lossless model compression by batch normalization layer pruning in deep neural networks

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229533A (en) * 2017-11-22 2018-06-29 深圳市商汤科技有限公司 Image processing method, model pruning method, device and equipment
CN108764471A (en) * 2018-05-17 2018-11-06 西安电子科技大学 The neural network cross-layer pruning method of feature based redundancy analysis
CN109754080A (en) * 2018-12-21 2019-05-14 西北工业大学 The pruning method of Embedded network model
CN111368968A (en) * 2018-12-26 2020-07-03 浙江宇视科技有限公司 Network model cutting method and device and computer readable storage medium
CN109344921A (en) * 2019-01-03 2019-02-15 湖南极点智能科技有限公司 A kind of image-recognizing method based on deep neural network model, device and equipment
CN110263841A (en) * 2019-06-14 2019-09-20 南京信息工程大学 A kind of dynamic, structured network pruning method based on filter attention mechanism and BN layers of zoom factor
CN110895714A (en) * 2019-12-11 2020-03-20 天津科技大学 Network compression method of YOLOv3
CN111222640A (en) * 2020-01-11 2020-06-02 电子科技大学 Signal recognition convolutional neural network convolutional kernel partition pruning method
CN111325342A (en) * 2020-02-19 2020-06-23 深圳中兴网信科技有限公司 Model compression method and device, target detection equipment and storage medium
CN111382839A (en) * 2020-02-23 2020-07-07 华为技术有限公司 Method and device for pruning neural network
CN111488982A (en) * 2020-03-05 2020-08-04 天津大学 Compression method for automatic optimization-selection mixed pruning of deep neural network

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Learning Efficient Convolutional Networks through Network Slimming; Zhuang Liu et al.; arXiv; 2017-08-22; pp. 1-10 *
Neural Network Pruning with Residual-Connections and Limited-Data; Jian-Hao Luo; 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020-08-05; pp. 1458-1467 *
Over the Air Deep Learning Based Radio Signal Classification; Tim O'Shea et al.; arXiv; 2017-12-13; pp. 1-13 *
Rethinking the Value of Network Pruning; Zhuang Liu et al.; ICLR 2019; 2018-09-28; pp. 1-21 *
Variational Convolutional Neural Network Pruning; Chenglong Zhao et al.; 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020-01-09; pp. 2780-2789 *
A hybrid pruning method for convolutional neural network compression; Jin Lilei et al.; Journal of Chinese Computer Systems; 2018-12-11; vol. 39, no. 12, pp. 200-203, 207 *
Digital modulation signal recognition with improved neural networks; Liu Xiang; Journal of Inner Mongolia Normal University (Natural Science Edition); 2017-03-15; vol. 46, no. 2, pp. 200-207 *
Stochastic training strategies for residual networks in deep learning; Sun Qi et al.; Mathematica Numerica Sinica; 2020-08-14; vol. 42, no. 3, pp. 349-369 *

Also Published As

Publication number Publication date
CN111898591A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN111898591B (en) Modulation signal identification method based on pruning residual error network
CN110084221B (en) Serialized human face key point detection method with relay supervision based on deep learning
CN111126386A (en) Sequence field adaptation method based on counterstudy in scene text recognition
CN117290364B (en) Intelligent market investigation data storage method
CN112434662B (en) Tea leaf scab automatic identification algorithm based on multi-scale convolutional neural network
CN111476422A (en) L ightGBM building cold load prediction method based on machine learning framework
CN112101487B (en) Compression method and device for fine-grained recognition model
CN117316301B (en) Intelligent compression processing method for gene detection data
CN110990784A (en) Cigarette ventilation rate prediction method based on gradient lifting regression tree
CN111708810B (en) Model optimization recommendation method and device and computer storage medium
CN111177217A (en) Data preprocessing method and device, computer equipment and storage medium
CN112612948A (en) Deep reinforcement learning-based recommendation system construction method
CN114880318A (en) Method and system for realizing automatic data management based on data standard
CN112883066B (en) Method for estimating multi-dimensional range query cardinality on database
CN117253122B (en) Corn seed approximate variety screening method, device, equipment and storage medium
CN115879750B (en) Aquatic seedling environment monitoring management system and method
CN113609809B (en) Method, system, equipment, medium and terminal for diagnosing faults of radio frequency low-noise discharge circuit
CN116468102A (en) Pruning method and device for cutter image classification model and computer equipment
CN113489005B (en) Distribution transformer load estimation method and system for power flow calculation of distribution network
CN110929849B (en) Video detection method and device based on neural network model compression
CN110609832B (en) Non-repeated sampling method for streaming data
CN114037005A (en) Power load prediction method based on optimized selection of typical daily load curve
CN114037857B (en) Image classification precision improving method
CN112465054A (en) Multivariate time series data classification method based on FCN
CN109993413B (en) Data-driven flue-cured tobacco quality benefit comprehensive evaluation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240318

Address after: No. 351 Tiyu Road, Xiaodian District, Taiyuan City, Shanxi Province 030000

Patentee after: NORTH AUTOMATIC CONTROL TECHNOLOGY INSTITUTE

Country or region after: China

Address before: 611731, No. 2006, West Avenue, hi tech West District, Sichuan, Chengdu

Patentee before: University of Electronic Science and Technology of China

Country or region before: China