CN109886406A - A complex-valued convolutional neural network compression method based on deep compression - Google Patents

A complex-valued convolutional neural network compression method based on deep compression

Info

Publication number
CN109886406A
CN109886406A (Application No. CN201910136000.7A)
Authority
CN
China
Prior art keywords
network
compression
complex
weight
parameter
Prior art date
Legal status
Pending
Application number
CN201910136000.7A
Other languages
Chinese (zh)
Inventor
伍家松
任虹珊
孔佑勇
杨淳沨
章品正
姜龙玉
陈阳
舒华忠
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN201910136000.7A
Publication of CN109886406A
Legal status: Pending

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a method for compressing complex-valued convolutional neural networks based on the deep compression algorithm. First, the connectivity of the network is learned through normal network training. Next, the trained network parameters are pruned: every connection whose complex weight has a modulus below a threshold is removed. The pruned sparse network is then quantized to compress the network further. Finally, the real and imaginary parts of the complex weights are Huffman-coded to obtain the final compressed network. The method exploits the large number of redundant parameters in convolutional neural networks: unimportant connections are deleted, and the network is further compressed by quantization and Huffman coding. This greatly reduces the number of network parameters with only a small loss of accuracy, achieves the goal of compressing complex-valued convolutional neural networks, and solves the problem that complex-valued convolutional networks cannot be deployed on embedded devices because of their huge parameter counts.

Description

A complex-valued convolutional neural network compression method based on deep compression
Technical field
The invention belongs to the field of deep learning, and in particular relates to a method for compressing complex-valued convolutional neural networks based on deep compression.
Background technique
Compared with conventional methods (principal component analysis, support vector machines, etc.), convolutional neural networks (CNNs) have brought significant gains in performance and accuracy to image processing, speech processing, natural language processing and other fields. In image processing, Krizhevsky et al. of the University of Toronto, Canada won the 2012 ILSVRC (ImageNet Large Scale Visual Recognition Challenge) by training a very large, very deep CNN (AlexNet), achieving the best results of that year in image classification and localization. Simonyan et al. of the University of Oxford likewise obtained excellent results in the combined image classification and localization task of ILSVRC 2014 with the convolutional network VGG-16. Furthermore, GoogLeNet in ILSVRC 2014 and ResNet in ILSVRC 2015 achieved great successes in image processing. In speech research, Dong et al. combined convolutional neural networks with hidden Markov models to realize voice conversion, and MOS (mean opinion score) evaluation confirmed that the method produces good conversion results. Dat et al. compared speech processing methods in urban environments and found that combining convolutional neural networks with recurrent neural networks effectively reduces the speech recognition error rate. Meanwhile, Baidu and iFLYTEK have both achieved rapid performance gains in speech recognition based on convolutional neural networks.
Convolutional neural networks currently have extremely rich industrial applications: Baidu's speech recognition, the speech recognition used in Microsoft's Cortana virtual assistant, and the two companies DeepMind and AlchemyAPI all provide artificial-intelligence services to customers based on convolutional neural network technology, and all have been very successful. Because of the huge parameter counts and high computational cost of convolutional neural networks, these intelligent services generally run on large-scale professional equipment. If applications on small devices (such as embedded and mobile devices) want to use these services, they must access the service interface through a network API after authentication: acquired image data, speech data, text data, unstructured data and so on are compressed (or left uncompressed) or feature-extracted, sent over the network to the API provider, computed there, and the results are returned to the device. However, in actual production environments where no network can be reached, or in signal blind spots, a third-party application cannot exchange data with the service provider, so the intelligent application cannot work normally without a network connection. This situation can be resolved by porting the convolutional neural network application onto the embedded or mobile device itself.
The excessive parameters of convolutional neural network models make them highly computation-intensive and memory-intensive, so porting a convolutional neural network application to an embedded or mobile device runs into three major difficulties: 1) the models are large — the AlexNet model exceeds 200 MB and the VGG-16 model exceeds 500 MB; 2) the computation is heavy — a well-performing convolutional neural network model has a huge number of parameters, and running one intelligent service requires a vast number of computations to obtain a result; 3) the power consumption is high — the large amount of memory access and CPU computation leads to huge power draw. With the spread of mobile devices, the demand for convolutional neural networks in embedded systems keeps growing, but for such hardware-constrained devices a complete convolutional neural network model can hardly be ported directly for offline use, and the intelligent application cannot work normally.
For the compression of real-valued convolutional neural networks, researchers at home and abroad have made many attempts. These compression methods can be roughly divided into four families: parameter sharing, network pruning, knowledge distillation, and matrix decomposition. The main idea of parameter sharing is that multiple parameters share the same value, with various concrete realizations. Vanhoucke et al. reduced parameter precision with a fixed-point method so that parameters with similar values share the same value. Chen et al. proposed a hash-based method that maps parameters into hash buckets, with all parameters in the same bucket sharing one value. Gong et al. clustered the parameters with the K-means clustering algorithm, with the parameters in each cluster sharing its center value. Network pruning can be used to reduce network complexity and effectively prevent overfitting. Han et al. compressed a trained network model by deleting the network connections below a certain threshold, then further compressed the network with parameter sharing and Huffman coding. Among knowledge-distillation-based compression methods, Sau et al. compressed networks with a "teacher-student" model and showed on the MNIST dataset that the method reduces both the storage and the computational complexity of the model. Among compression methods based on matrix decomposition theory, Denil et al. and Nakkiran et al. both used low-rank decomposition to compress the parameters of different layers of a neural network. Denton et al. applied matrix decomposition to convolutional neural networks, accelerating the computation of the convolutional layers while also effectively reducing the parameters of the fully connected layers.
Although these compression methods have compressed real-valued networks to a large extent, compression algorithms for complex-valued neural networks remain scarce: existing compression work at home and abroad concentrates on real-valued networks. Yet research has shown that the complex domain is a beneficial generalization of the real domain, with advantages in two respects:
(1) From the perspective of signal and image processing, the most important feature of complex numbers compared with real numbers is the introduction of phase information. In speech signal processing, phase information affects the intelligibility of the speech signal. In image processing, phase information describes the shape, edges and orientation details of an image, and can be used to recover its amplitude information.
(2) From the perspective of deep network construction, representations based on the complex domain are receiving more and more attention. Researchers have found in the construction of recurrent neural networks (RNNs) that, compared with real-valued RNNs, complex-valued RNNs are easier to optimize, generalize better, learn faster, have stronger expressive power, and have a more noise-robust memory retrieval mechanism.
A new technical solution is therefore needed to solve the compression problem of complex-valued neural networks.
Summary of the invention
Object of the invention: aiming at the problems and deficiencies of the prior art, the present invention provides a compression method for complex-valued convolutional neural networks based on deep compression, which greatly reduces the parameters of a complex-valued convolutional network with little loss of accuracy, and solves the problem that complex-valued convolutional neural networks cannot be deployed on embedded devices because of their huge parameter counts.
Technical solution: the present invention provides a complex-valued convolutional neural network compression method based on deep compression, comprising the following steps:
1) Network pruning:
1.1) Train the original complex-valued convolutional neural network to learn its connectivity and obtain its original weights;
1.2) Prune small-weight connections: first set a threshold; since the network is complex-valued, compare the modulus of each complex weight with the set threshold, and remove from the network every connection whose weight modulus is below the threshold, i.e. set both the real part and the imaginary part of the complex weight to 0, yielding the pruned weights;
1.3) Retrain the network to learn the final weights of the remaining sparse network;
2) Network quantization: perform quantization clustering on the pruned weights obtained in step 1, as follows:
2.1) Set a cluster count K and initialize the cluster centroids;
2.2) Cluster the weights of each layer with a two-dimensional K-means clustering algorithm;
2.3) Retrain the network to learn the final weights of the quantized network;
3) Huffman-code the real and imaginary parts of the quantized complex weights separately to obtain the final compressed network.
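As an illustration of step 1.2 (not part of the claimed method), the following is a minimal NumPy sketch of modulus-based pruning; the function name and example values are ours, not from the patent:

```python
import numpy as np

def prune_complex_weights(weights, threshold):
    """Remove connections whose complex weight has modulus below the
    threshold by setting both real and imaginary parts to zero."""
    mask = np.abs(weights) >= threshold        # |a + jb| = sqrt(a^2 + b^2)
    return np.where(mask, weights, 0j), mask

# Example with one of the CIFAR-10 thresholds explored below (0.03):
w = np.array([0.02 + 0.01j, 0.50 - 0.20j, -0.01 + 0.02j, 0.10 + 0.10j])
pruned, mask = prune_complex_weights(w, 0.03)
print(pruned)   # entries with modulus < 0.03 become 0j
```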
The present invention first trains the complex-valued network normally, then prunes the unimportant connections, performs one round of cluster quantization on each layer's connections, then compresses the network further with Huffman coding, and finally achieves the compression of the complex-valued convolutional neural network.
Although in step 1.2 of the invention each weight of the complex-valued network is physically stored as two real numbers, it is logically a complex number whose real and imaginary parts are correlated. The modulus of the complex weight is therefore used as the comparison object, rather than comparing the real and imaginary parts with the threshold directly; experiments also show that using the modulus of the complex weight as the comparison object works better.
Further, after pruning many weights have the value 0, so for storage the sparse-matrix formats compressed sparse row (CSR) or compressed sparse column (CSC) can be used to hold the parameters, effectively saving storage space. Where originally n² weights had to be stored, now only 2a + n + 1 numbers are needed, where a is the number of nonzero elements and n is the number of rows or columns of the matrix.
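A short sketch of the CSR bookkeeping with SciPy (assuming scipy.sparse is available; the assert checks the 2a + n + 1 count stated above):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A pruned 4x4 complex layer in which most entries are exactly zero.
dense = np.array([[0,          0.5 - 0.2j, 0, 0         ],
                  [0,          0,          0, 0.1 + 0.1j],
                  [0,          0,          0, 0         ],
                  [0.3 + 0.0j, 0,          0, 0         ]])

sparse = csr_matrix(dense)
a, n = sparse.nnz, dense.shape[0]
# CSR stores a nonzero values, a column indices and n + 1 row pointers.
stored = len(sparse.data) + len(sparse.indices) + len(sparse.indptr)
assert stored == 2 * a + n + 1
print(stored, "numbers stored instead of", n * n)   # 11 instead of 16
```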
In the present invention a sparse network is obtained after the original network has been pruned; step 2 quantizes this sparse network by clustering the parameters of each layer with two-dimensional K-means (k-means) clustering. Measured by Euclidean distance, the n original weights W = {w1, w2, ..., wn} are partitioned into K clusters C = {c1, c2, ..., cK}, n ≫ K, so as to minimize the within-cluster sum of squares (WCSS):

$$\arg\min_{C}\sum_{i=1}^{K}\sum_{w\in c_i}\lvert w-c_i\rvert^{2}$$

where |·| is the complex modulus, i.e. the Euclidean distance between the points (Re w, Im w) and (Re ci, Im ci).
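The clustering itself can be sketched as follows — a simplified NumPy implementation with Forgy-style random initialization (the initialization alternatives are discussed next); the function name is ours:

```python
import numpy as np

def kmeans_complex(weights, k, n_iter=50, seed=0):
    """Two-dimensional K-means over complex weights: each weight
    w = a + jb is treated as the 2-D point (a, b), so the Euclidean
    distance equals the complex modulus |w - c|. Returns the complex
    centroids and each weight's cluster index."""
    rng = np.random.default_rng(seed)
    pts = np.column_stack([weights.real, weights.imag])        # n x 2
    centroids = pts[rng.choice(len(pts), k, replace=False)]    # Forgy init
    for _ in range(n_iter):
        # Squared Euclidean distance of every point to every centroid.
        d2 = ((pts[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        for j in range(k):
            members = pts[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids[:, 0] + 1j * centroids[:, 1], assign
```

After clustering, every surviving weight in cluster j is replaced by the shared centroid value, so only the small codebook of centroids and the per-weight cluster indices need to be stored.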
In the present invention the K-means algorithm needs its cluster centroids initialized; the shared weight of each cluster is that cluster's centroid. Centroid initialization affects the quality of the clustering and hence the prediction accuracy of the network. Three initialization methods are considered here: Forgy (random) initialization, density-based initialization, and linear initialization.
Forgy (random) initialization randomly selects k observations from the data set and uses them as the initial centroids. Since the weight distribution is generally bimodal, the Forgy method tends to concentrate the centroids around the two peaks.
Density-based initialization spaces points linearly along the y-axis of the weights' cumulative distribution function (CDF), finds each point's horizontal intersection with the CDF, and then the vertical intersection with the x-axis; each such intersection point is a centroid. This method also concentrates the centroids around the two peaks, though more dispersed than the Forgy method.
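In other words, density-based initialization amounts to placing centroids at evenly spaced quantiles of the weight distribution. A sketch for a one-dimensional weight vector, as in the original real-valued deep-compression setting (applying it per component of a complex weight is our assumption):

```python
import numpy as np

def density_init_1d(weights, k):
    """Density-based centroid initialization: pick k evenly spaced
    levels on the y-axis of the empirical CDF and take the matching
    x-values (quantiles) as centroids, so centroids concentrate
    where the weight distribution is dense."""
    levels = (np.arange(k) + 0.5) / k
    return np.quantile(weights, levels)

# Assumed usage for complex weights, one axis at a time:
# density_init_1d(w.real, 16), density_init_1d(w.imag, 16)
```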
Linear initialization places the centroids linearly in the interval [min, max] between the minimum (min) and maximum (max) of the original weights. This initialization is independent of the weight distribution and is the most dispersed of the three methods. Since the weights of a complex-valued network are two-dimensional, the linear centroid initialization of a complex network can be divided into the following four variants (a sketch follows the list):
1. y = c, with c a constant: linear initialization in the horizontal direction;
2. x = c, with c a constant: linear initialization in the vertical direction;
3. y = kx + b, with k and b constants: linear initialization with positive slope;
4. y = -kx + b, with k and b constants: linear initialization with negative slope.
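A sketch of these four variants (the patent leaves the constants c, k and b unspecified, so the bounding-box and mean-based choices below are our assumptions):

```python
import numpy as np

def linear_init(weights, num, mode="diag_pos"):
    """Place num centroids uniformly on a line through the rectangle
    [min, max] x [min, max] spanned by the real and imaginary parts:
    'horizontal' (y = c), 'vertical' (x = c),
    'diag_pos' (y = kx + b), 'diag_neg' (y = -kx + b)."""
    re, im = weights.real, weights.imag
    t = np.linspace(0.0, 1.0, num)
    x_lo, x_hi, y_lo, y_hi = re.min(), re.max(), im.min(), im.max()
    if mode == "horizontal":
        xs, ys = x_lo + t * (x_hi - x_lo), np.full(num, im.mean())
    elif mode == "vertical":
        xs, ys = np.full(num, re.mean()), y_lo + t * (y_hi - y_lo)
    elif mode == "diag_pos":      # positive slope
        xs, ys = x_lo + t * (x_hi - x_lo), y_lo + t * (y_hi - y_lo)
    else:                         # "diag_neg": negative slope
        xs, ys = x_lo + t * (x_hi - x_lo), y_hi - t * (y_hi - y_lo)
    return xs + 1j * ys
```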
Larger weights play a more important role than smaller ones, but large weights are few in number. Under Forgy initialization and density-based initialization, therefore, few centroids with large absolute values are selected, so these few large weights end up poorly represented. Linear initialization does not have this problem: because its centroids are evenly spread, it captures more large weights than the other two methods.
In the present invention, step 3 applies Huffman coding to the quantized network. Huffman coding is an optimal prefix code commonly used for lossless data compression; it encodes source symbols with variable-length code words, where the frequency of each symbol determines its code length, so more frequent symbols are represented with fewer bits. The weights of a complex CNN are logically complex numbers but are actually stored as pairs of real numbers, so the real and imaginary parts can be coded separately to obtain the final compressed network.
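A compact sketch of the coding step (a standard Huffman construction over the quantized centroid values; the helper name is ours):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code book: frequent symbols get shorter bit
    strings. After quantization each real (or imaginary) part is one
    of only K shared centroid values, so the alphabet is small."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Separate code books for the two components of the complex weights:
# book_re = huffman_code(list(quantized.real))
# book_im = huffman_code(list(quantized.imag))
```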
The present invention compresses complex-valued networks on the basis of the deep compression technique. The compression has three steps: first, pruning removes the connections whose weights fall below a threshold, using the modulus of the complex weight as the comparison object; then the complex weights in the network are quantized, here by two-dimensional K-means cluster quantization; finally, Huffman coding is applied to the real and imaginary parts, compressing the network further.
Beneficial effects: compared with the prior art, the present invention compresses complex-valued convolutional neural networks with a three-stage method based on deep compression. Deep compression reaches very high compression ratios (35× to 49×) on real-valued convolutional networks with accuracy loss within 1%. Considering the lack of compression methods for complex-valued convolutional networks, the present invention improves the three-stage deep compression pipeline: in the pruning stage the modulus of the complex weight is compared with the set threshold; the weights are quantized with two-dimensional K-means clustering; finally the real and imaginary parts of the complex weights are Huffman-coded separately. The compression ratio reaches about 7× on the CIFAR-10 dataset and about 14× on the ImageNet dataset, with accuracy loss within 3%, so the parameters of a complex-valued convolutional network are greatly reduced with little loss of accuracy, solving the problem that complex-valued convolutional networks cannot be deployed on embedded devices because of their huge parameter counts.
Description of the drawings
Fig. 1 is the flow chart of the invention, from pruning to Huffman coding;
Fig. 2 is the structure diagram of the complex-valued convolutional neural network used in this embodiment.
Specific embodiment
To verify the effect of the method of the present invention, the following experiments were carried out:
1. Experimental conditions:
The verification experiments were run on a computer with a 64-bit operating system (Ubuntu 16.04), configured with an Intel(R) Core(TM) i5-7500 processor (3400 MHz), 8000 MB of random-access memory (RAM) and an Nvidia GeForce GTX 970 graphics card with 4000 MB of memory; the programming language is Python (version 2.7).
2. Experimental method:
The initial network used in this experiment is a complex-valued convolutional neural network based on a residual network (as shown in Fig. 2); the datasets are CIFAR-10 and ImageNet.
As shown in Fig. 1, the experiment uses the three-stage compression framework based on deep compression to reduce the parameters of the initial convolutional neural network. The specific steps are as follows:
1) Network pruning:
1.1) Train the original complex-valued convolutional neural network to learn its connectivity and obtain its original weights;
1.2) Prune small-weight connections: first set a threshold; since the network is complex-valued, compare the modulus of each complex weight with the set threshold, and remove from the network every connection whose weight modulus is below the threshold, i.e. set both the real and imaginary parts of the complex weight to 0, yielding the pruned weights;
1.3) Retrain the network to learn the final weights of the remaining sparse network;
2) Network quantization: perform quantization clustering on the pruned weights obtained in step 1, as follows:
2.1) Set a cluster count K and initialize the cluster centroids with the linear initialization method;
2.2) Cluster the weights of each layer with the two-dimensional K-means clustering algorithm, measuring the distance between centroids and weights by Euclidean distance: partition the n original complex weights W = {w1, w2, ..., wn}, where each complex weight has the form wi = ai + jbi, into K clusters C = {c1, c2, ..., cK}, n ≫ K, minimizing the within-cluster sum of squares:

$$\arg\min_{C}\sum_{i=1}^{K}\sum_{w\in c_i}\lvert w-c_i\rvert^{2}$$

2.3) Retrain the network to learn the final weights of the quantized network;
3) Huffman-code the real and imaginary parts of the quantized complex weights separately to obtain the final compressed network.
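Chaining the pieces sketched earlier (prune_complex_weights, kmeans_complex, huffman_code) gives the following end-to-end outline of the three stages for one layer; compress_layer is a hypothetical helper of ours, and the retraining after stages 1 and 2 is omitted:

```python
import numpy as np

def compress_layer(weights, threshold, k):
    """Three-stage sketch for one layer: prune by modulus, quantize
    the survivors with 2-D K-means, then Huffman-code the real and
    imaginary parts of the quantized weights separately."""
    pruned, mask = prune_complex_weights(weights, threshold)   # stage 1
    centroids, assign = kmeans_complex(pruned[mask], k)        # stage 2
    quantized = centroids[assign]                              # shared values
    book_re = huffman_code(list(np.round(quantized.real, 6)))  # stage 3
    book_im = huffman_code(list(np.round(quantized.imag, 6)))
    return mask, centroids, assign, book_re, book_im
```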
3. Evaluation metrics of the experimental results:
The experimental results are evaluated by the compression ratio before and after network compression and by the network accuracy (Accuracy).
The compression ratio is the ratio of the parameter storage size before network compression to the parameter storage size after network compression.
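For instance, with the CIFAR-10 figures reported below (taking 1 MB = 1000 KB, which matches the 7.42× stated there):

$$\text{compression ratio}=\frac{\text{size before compression}}{\text{size after compression}}=\frac{4.1\ \text{MB}}{552.2\ \text{KB}}\approx 7.42\times$$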
The accuracy is the prediction accuracy of the compressed network on the test dataset.
4. Comparative experiments with the prior art:
1) CIFAR-10 dataset
Table 1 gives the compression results of the complex-valued convolutional neural network after pruning on the CIFAR-10 dataset; the test results give the accuracy and pruning rate under different pruning thresholds. Table 2 gives the compression results after quantizing the network pruned with threshold 0.03; the test results give the accuracy and compression ratio under different cluster counts.
Table 1. Accuracy and pruning rate of the complex CNN on CIFAR-10 after weight pruning

Pruning threshold | Accuracy (%) | Pruning rate (%)
Original complex CNN | 93.19 | -
0.01 | 93.14 | 10.9
0.02 | 93.07 | 25.49
0.03 | 92.98 | 38.7
0.04 | 92.35 | 50.86
0.05 | 89.65 | 61.38
Table 2. Accuracy and compression ratio of the complex CNN on CIFAR-10 after weight pruning + weight quantization
From Table 1 it can be seen that the larger the pruning threshold, the more weights are pruned and the higher the pruning rate, but the lower the accuracy; at thresholds 0.03 and 0.04, the pruning rate and accuracy reach a relatively good compromise.
From Table 2 it can be seen that, with the pruning threshold fixed (0.03 in the experiment), cluster counts of 90, 100 and 100 for stages 1, 2 and 3 respectively reach a relatively good compromise between pruning rate and accuracy; the final storage size is 552.2 KB, which compared with the original 4.1 MB is a compression ratio of 7.42×.
2) ImageNet dataset
Table 3 gives the compression results of the complex-valued convolutional neural network after pruning on the ImageNet dataset; the test results give the accuracy and pruning rate under different pruning thresholds. Table 4 gives the compression results after quantizing the network pruned with threshold 0.009; the test results give the accuracy and compression ratio under different cluster counts.
Table 3. Accuracy and pruning rate of the complex CNN on ImageNet after weight pruning

Pruning threshold | Top-1 accuracy (%) | Top-5 accuracy (%) | Pruning rate (%)
Original complex CNN | 68.31 | 88.07 | -
0.006 | 67.67 | 88.01 | 19.13
0.007 | 67.61 | 87.94 | 24.56
0.008 | 67.41 | 87.77 | 29.48
0.009 | 68.10 | 88.04 | 34.13
0.01 | 67.38 | 88.05 | 38.39
Table 4. Accuracy and compression ratio of the complex CNN on ImageNet after weight pruning + weight quantization
From Table 3 it can be seen that the larger the pruning threshold, the more weights are pruned and the higher the pruning rate, but the lower the accuracy; at threshold 0.009, the pruning rate and accuracy reach a relatively good compromise.
From Table 4 it can be seen that, with the pruning threshold fixed (0.009 in the experiment), cluster counts of 127 for stages 1 and 2, 256 for stage 3, and 256 for stages 4 and 5 reach a relatively good compromise between pruning rate and accuracy; the final storage size is 3.6 MB, which compared with the original 51.6 MB is a compression ratio of 14.33×.

Claims (4)

1. A complex-valued convolutional neural network compression method based on deep compression, characterized by comprising the following steps:
1) Network pruning:
1.1) Train the original complex-valued convolutional neural network to learn its connectivity and obtain its original weights;
1.2) Prune small-weight connections: first set a threshold; since the network is complex-valued, compare the modulus of each complex weight with the set threshold, and remove from the network every connection whose weight modulus is below the threshold, i.e. set both the real part and the imaginary part of the complex weight to 0, yielding the pruned weights;
1.3) Retrain the network to learn the final weights of the remaining sparse network;
2) Network quantization: perform quantization clustering on the pruned weights obtained in step 1, as follows:
2.1) Set a cluster count K and initialize the cluster centroids;
2.2) Cluster the weights of each layer with a two-dimensional K-means clustering algorithm;
2.3) Retrain the network to learn the final weights of the quantized network;
3) Huffman-code the real and imaginary parts of the quantized complex weights separately to obtain the final compressed network.
2. The complex-valued convolutional neural network compression method based on deep compression according to claim 1, characterized in that: in step 2.1 the cluster centroids are initialized with one of the random initialization method, the density-based initialization method and the linear initialization method.
3. The complex-valued convolutional neural network compression method based on deep compression according to claim 1, characterized in that: the two-dimensional K-means clustering of step 2.2 specifically comprises: measured by Euclidean distance, partitioning the n original complex weights W = {w1, w2, ..., wn} into K clusters C = {c1, c2, ..., cK}, n ≫ K, so as to minimize the within-cluster sum of squares, where each complex weight has the form wi = ai + jbi.
4. The complex-valued convolutional neural network compression method based on deep compression according to claim 1, characterized in that: after step 1 is completed, the parameters are stored using the sparse-matrix storage method of compressed sparse row or compressed sparse column format.
CN201910136000.7A 2019-02-25 2019-02-25 A complex-valued convolutional neural network compression method based on deep compression (Pending) CN109886406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910136000.7A CN109886406A (en) A complex-valued convolutional neural network compression method based on deep compression

Publications (1)

Publication Number Publication Date
CN109886406A true CN109886406A (en) 2019-06-14

Family

ID=66929147


Country Status (1)

Country Link
CN (1) CN109886406A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106557812A (en) * 2016-11-21 2017-04-05 北京大学 The compression of depth convolutional neural networks and speeding scheme based on dct transform
CN107704917A (en) * 2017-08-24 2018-02-16 北京理工大学 A kind of method of effectively training depth convolutional neural networks

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110266620A (en) * 2019-07-08 2019-09-20 电子科技大学 3D MIMO-OFDM system channel estimation method based on convolutional neural networks
CN110399975A (en) * 2019-07-30 2019-11-01 重庆邮电大学 A lithium-battery deep diagnostic model compression algorithm oriented to hardware porting
CN110909775A (en) * 2019-11-08 2020-03-24 支付宝(杭州)信息技术有限公司 Data processing method and device and electronic equipment
CN111008693A (en) * 2019-11-29 2020-04-14 深动科技(北京)有限公司 Network model construction method, system and medium based on data compression
CN111008693B (en) * 2019-11-29 2024-01-26 小米汽车科技有限公司 Network model construction method, system and medium based on data compression
CN110942143A (en) * 2019-12-04 2020-03-31 卓迎 Toy detection acceleration method and device based on convolutional neural network
CN111327559B (en) * 2020-02-28 2021-01-08 北京邮电大学 Encoding and decoding method and device
CN111327559A (en) * 2020-02-28 2020-06-23 北京邮电大学 Encoding and decoding method and device
CN113221981A (en) * 2021-04-28 2021-08-06 之江实验室 Edge deep learning-oriented data cooperative processing optimization method
CN113030902A (en) * 2021-05-08 2021-06-25 电子科技大学 Twin complex network-based few-sample radar vehicle target identification method
CN113030902B (en) * 2021-05-08 2022-05-17 电子科技大学 Twin complex network-based few-sample radar vehicle target identification method
CN114626418A (en) * 2022-03-18 2022-06-14 中国人民解放军32802部队 Radiation source identification method and device based on a multi-center complex-valued residual network
WO2023236977A1 (en) * 2022-06-08 2023-12-14 华为技术有限公司 Data processing method and related device
CN115935154A (en) * 2023-03-13 2023-04-07 南京邮电大学 Radio frequency signal characteristic selection and identification method based on sparse representation and near-end algorithm
CN115935154B (en) * 2023-03-13 2023-11-24 南京邮电大学 Radio frequency signal characteristic selection and identification method based on sparse representation and near-end algorithm


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190614