CN109376859A - Neural network pruning method based on diamond convolution - Google Patents
Neural network pruning method based on diamond convolution
- Publication number
- CN109376859A (application CN201811128219.4A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- convolution kernel
- diamond
- neural network
- square
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a neural network pruning method based on diamond convolution, comprising the following steps: (1) blank rows and columns of pixels are added to the input layer and the convolutional layers of a convolutional neural network; (2) convolution operations use odd-sized diamond convolution kernels in place of square kernels, pooling operations use square kernels, and the network output is computed by forward propagation; (3) the convolutional neural network containing diamond kernels is trained with the back-propagation algorithm: the gradient of each neuron is computed and new weights are obtained. The invention proposes a neural network based on diamond convolution in which the diamond kernel replaces the traditional square kernel. Constraining the convolution window with the 1-norm retains the more effective center of the local receptive field and guarantees the sparsity of the network, further alleviating the excessive parameter complexity of neural networks and speeding up propagation. At the same time, the diamond kernel has a regularization effect that prevents over-fitting, so overall training is faster and accuracy is higher.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a neural network pruning method based on diamond convolution.
Background technique
Convolutional neural networks developed from artificial neural networks; their distinctive weight-sharing structure reduces the scale of the network and makes it easier to train. Because of their invariance to image translation, scaling, and rotation, convolutional neural networks are widely used in image recognition: Microsoft uses them in handwriting recognition systems for Arabic and Chinese, and Google uses them to identify faces and license plates in Street View images, among other applications.
The defining characteristics of convolutional neural networks are local perception and weight sharing. Associations between image pixels are local: much as the human eye perceives small image patches through separate optic nerves and the brain combines them into a judgment about the whole image, each neuron need not perceive the entire image, and each convolution kernel produces one output. At the same time, weights can be shared across the image. Low-level image features are universal and position-independent; for example, edge features in a local region at the top of an image and at the bottom can be extracted with similar feature extractors. One feature of a local region in an image therefore needs only one convolution kernel to extract it. For the first few layers of the network, which mainly extract low-level features, weight sharing further reduces the number of parameters.
Compared with fully connected networks, convolutional neural networks reduce the number of parameters and the size of the network while adequately preserving image features, letting machines process images autonomously for tasks such as classification, compression, and generation. However, the speed of ordinary neural networks on images still falls short of requirements: the number of parameters is enormous, and a single forward pass consumes huge computing resources and takes a long time, so such networks are unsuitable for mobile devices, especially for applications that must run in real time. Many methods for accelerating and compressing models have therefore been proposed, such as network pruning, reduced numerical precision, convolution-kernel decomposition, global average pooling, optimized activation and cost functions, and hardware acceleration. Among these, network pruning has been shown to be an effective way to reduce network complexity and over-fitting. A conventional network learns its connection weights through training; connections with small weights can then be pruned, leaving a sparsely connected network. But selecting the neurons to prune takes considerable time, and convolutional neural networks need a more efficient pruning algorithm.
Summary of the invention
The technical problem to be solved by the present invention is to provide a neural network pruning method based on diamond convolution that changes the shape of the convolution kernel and reduces the number of network parameters.
To solve the above technical problem, the present invention provides a neural network pruning method based on diamond convolution, comprising the following steps:
(1) Blank rows and columns of pixels are added to the input layer and the convolutional layers of the convolutional neural network, keeping the image center symmetric and preventing the next convolution from losing effective information.
(2) Convolution operations use odd-sized diamond convolution kernels in place of square kernels; pooling operations use square kernels; the network output is computed by forward propagation.
(3) The convolutional neural network containing diamond kernels is trained with the back-propagation algorithm: the gradient of each neuron is computed and new weights are obtained.
Preferably, in step (2), the row and column lengths of the convolution kernel are odd. Compared with even sizes, odd sizes have a physical center pixel, making it easier to locate edges; they are more sensitive to lines, allow symmetric outer margins, extract edge information more effectively, and give better convolution results than even-sized kernels.
Preferably, in step (2), the local region is changed from a square to a diamond: the distance to the center point is constrained with the 1-norm, and the neighborhood retains the more effective center of the local receptive field. The number of weights in the diamond kernel decreases by two per row from the central row toward the top and bottom rows, so the resulting input window and convolution window are approximately diamond-shaped.
Preferably, in step (3), the convolutional neural network containing diamond kernels is a pruning of the neural network: a kernel of width n needs 0.5*n*(n+1) parameters, 0.5*n*(n-1) fewer than a square kernel. This guarantees the sparsity of the network, reduces model complexity, and speeds up propagation, while also having a regularization effect that prevents over-fitting.
The invention has the following benefits: it proposes a neural network based on diamond convolution in which the diamond kernel replaces the traditional square kernel. Constraining the convolution window with the 1-norm retains the more effective center of the local receptive field and further alleviates the excessive parameter complexity of neural networks. The method is tested by recognizing MNIST handwritten-digit images with an improved LeNet5 structure.
Detailed description of the invention
Fig. 1 is a schematic diagram of the convolution operation of the invention.
Fig. 2 is a schematic diagram of the diamond convolution kernel of the invention.
Fig. 3 is a schematic diagram of the improved LeNet5 network of the invention.
Fig. 4 is a schematic diagram of the propagation process of the improved LeNet5 of the invention.
Specific embodiment
A neural network pruning method based on diamond convolution comprises the following steps:
(1) Blank rows and columns of pixels are added to the input layer and the convolutional layers of the convolutional neural network, keeping the image center symmetric and preventing the next convolution from losing effective information.
(2) Convolution operations use odd-sized diamond convolution kernels in place of square kernels; pooling operations use square kernels; the network output is computed by forward propagation.
(3) The convolutional neural network containing diamond kernels is trained with the back-propagation algorithm: the gradient of each neuron is computed and new weights are obtained.
The invention discloses a neural network pruning algorithm based on diamond convolution. The convolution operation is shown in Fig. 1: M input feature maps I1, I2, ..., Im pass through convolution kernels K11 to K1n, K21 to K2n, ..., Km1 to Kmn, and the results are summed to give N output feature maps O1, O2, ..., On. Existing convolutional neural networks slide a square kernel over the input feature map and perform multiply-accumulate operations within a square input window to obtain the next layer's output. Operating in a square window is equivalent to constraining the local receptive field of the input feature map with the infinity norm: in the 2-D plane, the nodes whose distance to the local center is less than the specified distance in both the width and height directions form the local receptive field, yielding one output node. A kernel of width n needs n*n parameters.
A diamond kernel is a specially shaped kernel: the number of weights decreases by two per row from the central row toward the top and bottom rows, so the resulting input window and convolution window are approximately diamond-shaped. This is equivalent to constraining the local receptive field of the input feature map with the 1-norm: in the 2-D plane, the nodes whose summed distance to the local center in the width and height directions is less than the specified distance form the local receptive field, yielding one output node. For a diamond kernel whose central row and column have length n: when n is odd, the number of weights in each row (and each column) is {1, 3, ..., n-2, n, n-2, ..., 3, 1}; when n is even, the number per row is {1, 3, ..., n-2, n, n-2, ..., 3, 1} and the number per column is {2, 4, ..., n-2, n, n-2, ..., 4, 2}, the shape approximating a diamond. Compared with a square kernel, a diamond kernel is a pruning of the neural network: the weights on the boundary of the local receptive field are set to 0, which guarantees the sparsity of the network. A kernel of width n needs 0.5*n*(n+1) parameters, 0.5*n*(n-1) fewer than a square kernel.
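The diamond window described above can be sketched as a binary mask over an n*n grid using the 1-norm ball |i - c| + |j - c| <= c around the center c = (n-1)/2 (a minimal illustrative sketch; `diamond_mask` is a name introduced here, not from the patent). For n = 5 this construction keeps 13 weights, in the row pattern {1, 3, 5, 3, 1} given above.

```python
import numpy as np

def diamond_mask(n):
    """Binary mask of an n x n diamond kernel (n odd): position (i, j) is
    kept when |i - c| + |j - c| <= c, i.e. it lies in the 1-norm ball
    around the center c = (n - 1) // 2."""
    c = (n - 1) // 2
    i, j = np.indices((n, n))
    return (np.abs(i - c) + np.abs(j - c)) <= c

mask = diamond_mask(5)
print(mask.sum(axis=1).tolist())  # per-row weight counts: [1, 3, 5, 3, 1]
print(int(mask.sum()))            # total weights kept: 13
```

The positions outside the mask are exactly the boundary weights pruned to zero; a square kernel would keep all n*n = 25 positions.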
As shown in Fig. 2, the input feature map is extended from 14*14 to 16*16 and a diamond kernel with n = 5 is used. It has only 13 weights, about half as many as a square kernel; each diamond window of inputs is convolved with the set of weights to produce one output neuron, and the final output is a 12*12 feature map.
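The sliding-window computation just described can be sketched as follows (an illustrative sketch, not the patent's implementation): the 5*5 diamond mask keeps 13 of the 25 weights, and the 16*16 padded input yields a 12*12 output.

```python
import numpy as np

def diamond_conv2d(x, weights, mask):
    """'Valid' 2-D convolution with a masked kernel: weights outside the
    diamond are forced to zero, so each window uses only the 13 in-mask
    products instead of all 25."""
    n = mask.shape[0]
    k = weights * mask
    h, w = x.shape
    out = np.zeros((h - n + 1, w - n + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + n, c:c + n] * k)
    return out

c = 2
i, j = np.indices((5, 5))
mask = (np.abs(i - c) + np.abs(j - c)) <= c   # 5*5 diamond, 13 weights
x = np.pad(np.random.rand(14, 14), 1)         # 14*14 map extended to 16*16
y = diamond_conv2d(x, np.random.rand(5, 5), mask)
print(y.shape)  # (12, 12)
```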
A kernel larger than 1x1 means that feature extraction needs neighborhood information. To extract horizontal texture, the horizontal neighborhood information density is higher than the vertical, so the kernel should be flat; to extract vertical texture, the vertical density is higher, so the kernel should be tall and narrow. When the textures to be extracted are varied, the expected horizontal information density is approximately equal to the vertical density. In most cases kernels are square rather than oblong precisely because of this assumption about neighborhood information density.
Determining the shape and size of the neighborhood is crucial to the effect of convolution. If the neighborhood were determined by the planar distance to the center point, i.e., constrained with the 2-norm, it would be circular; but a circular neighborhood is hard to sample in practice and complicated to compute at specific points. Usually, therefore, for ease of computation, kernels and windows are square, i.e., distance is measured with the infinity norm. All points of the circular kernel are then included, giving the convolution theoretical completeness, but the drawback is that points near the center and points far from it are treated equally: part of the parameters are redundant and the computational complexity increases. If the convolutional neural network is pruned by removing some points far from the local center, the training and prediction performance of the network is affected little while model complexity drops substantially.
If the local region is constrained with the 1-norm distance to the center point, the neighborhood becomes a diamond and points are distributed more densely around the center than at the edges. Abrupt changes of pixel gray value form image edges, which carry the low-level features of the image; but information located at the border of a window is not necessarily edge information. When the kernel slides over the local receptive field, cutting the border parameters loses some weights, but the center parameters, which contribute more, are retained in the network, greatly accelerating it. In a deep neural network, the features extracted by a large kernel can be composed from several small kernels; for example, a 5*5 extraction can be obtained by stacking several 3*3 extractions, so centrally located weights occur with higher frequency. The present invention applies diamond convolution to the LeNet5 structure: the original network structure is slightly modified into the convolutional neural network shown in Fig. 3. The network recognizes the MNIST character set with fewer parameters, faster training, and slightly higher accuracy.
The input layer of the convolutional neural network is handwritten-image grayscale data. Images in the MNIST handwritten-digit set are 28*28; two blank columns are added on the left and right and two blank rows on the top and bottom of the input feature map, extending the input layer to 32*32 for the next convolution. The first convolutional layer uses a diamond kernel with n = 5 and outputs six 28*28 feature maps. The second (pooling) layer performs a second feature extraction with a 2*2 square kernel, giving six 14*14 feature maps. The third layer extends its input to 16*16 and applies a diamond kernel with n = 5, outputting sixteen 12*12 feature maps. The fourth (pooling) layer uses a 2*2 square kernel to obtain sixteen 6*6 feature maps. The fourth layer is fully connected by convolution to the fifth layer, giving 120 outputs; the fifth and sixth layers are fully connected, giving 84 outputs; and the output layer consists of Euclidean Radial Basis Function units.
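The layer sizes above can be checked with a short sketch (illustrative only, assuming 'valid' convolution, which gives size − n + 1, and non-overlapping 2*2 pooling, which halves the size):

```python
def conv(size, n):  # 'valid' convolution with an n-wide (diamond) kernel
    return size - n + 1

def pool(size, k):  # non-overlapping k*k pooling
    return size // k

sizes = []
s = 28 + 4;         sizes.append(s)  # input 28*28 padded to 32*32
s = conv(s, 5);     sizes.append(s)  # C1, diamond n=5 -> 28*28 (6 maps)
s = pool(s, 2);     sizes.append(s)  # P2, 2*2 square  -> 14*14
s = conv(s + 2, 5); sizes.append(s)  # pad to 16*16, C3 diamond n=5 -> 12*12 (16 maps)
s = pool(s, 2);     sizes.append(s)  # P4, 2*2 square  -> 6*6
print(sizes)  # [32, 28, 14, 12, 6]
```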
The input layer generally uses full convolution, adding blank rows and columns at the border to make full use of the border's effective information. When an image is convolved directly, its first and last rows and first and last columns lie on the boundary of the convolution window; during convolution these rows and columns belong to fewer local receptive fields than the others and contribute less to the output nodes, so such a convolution cannot fully extract the effective information at the image boundary. Hence the input needs full convolution: blank rows and columns are added around the input feature map so that the original boundary rows and columns move toward the middle, an output feature map of the same size as the input is obtained, and image features are extracted comprehensively.
Convolutional layers also add blank rows and columns. When a convolutional layer applies a diamond kernel directly, the diamond window cuts off the weights at the corners of the previous layer, losing some pixels and the feature information passed down from that layer; rows and columns therefore have to be added. For a kernel of odd width, the same number of rows or columns is added on each side, so the image center does not shift and symmetry is maintained.
In convolutional neural networks, odd-sized kernels (width and height both odd) give better convolution results than even-sized ones. Matrix convolution slides on the basis of part of the kernel module, and an odd kernel can slide with the module center as the reference. Compared with even sizes, odd sizes have a physical center pixel, making it easier to locate edges; they are more sensitive to lines, allow symmetric outer margins, and extract edge information more effectively. Famous convolutional neural networks such as LeNet5, AlexNet, and VGGNet all use kernels of odd width and height. For an n*n kernel (n odd) with the same number of rows and columns added on both sides, adding 0.5*(n-3) rows or columns on each of the four sides is enough to avoid edge loss.
Training of the diamond-convolution network still uses the back-propagation algorithm. The forward pass computes the output value of each neuron; the backward pass obtains each neuron's error term and then the gradient of each connection weight, i.e., the partial derivative of the network's loss function with respect to each layer's weighted input, after which the weights are updated with the gradient-descent formula. The sparsity of the weights reduces model complexity and accelerates training, and the simplified weights of the diamond kernel have a clear regularization effect that prevents over-fitting.
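A minimal sketch of one such gradient-descent step on a diamond kernel (an assumed form for illustration, not the patent's exact update formulas): the gradient is masked so the pruned corner weights never receive an update and stay at zero.

```python
import numpy as np

def masked_update(weights, grad, mask, lr=0.1):
    """One gradient-descent step W_new = W_old - eta * grad, restricted to
    the diamond: positions outside the mask are kept at zero."""
    return (weights - lr * grad) * mask

c = 2
i, j = np.indices((5, 5))
mask = ((np.abs(i - c) + np.abs(j - c)) <= c).astype(float)

w = np.random.rand(5, 5) * mask        # diamond kernel: corners start at zero
w = masked_update(w, np.random.rand(5, 5), mask)
print(bool(np.all(w[mask == 0] == 0)))  # True: the 12 pruned weights stay zero
```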
As shown in Fig. 4, the improved LeNet5 network of the present invention uses f(x) as the activation function, outputs a 10-dimensional vector, and defines the squared-error cost function E, where k is the dimension and t_k, y_k denote the label value and output value of the network's k-th dimension.
Let δ_{l,n} denote the gradient of the n-th neuron of layer l, and W_{l,n,m} the weight of the connection between the n-th neuron of layer l and the m-th neuron of layer (l-1). For example, the subscript O7F6 denotes the output layer (layer 7) and the fully connected layer (layer 6), and 10*84 means the output layer has 10 neurons and fully connected layer 6 has 84 neurons, 10*84 connections in total. C5, C3, C1 denote the convolutional layers (layers 5, 3, and 1), and P4, P2 the pooling layers (layers 4 and 2). net_l denotes the neurons of layer l (e.g., net_6 denotes the layer-6 neurons); f'(net_{l,n}) is the derivative of the activation function at the n-th neuron of layer l (e.g., f'(net_6) denotes the derivative at the 84 neurons of fully connected layer 6); input denotes the input layer, and X_{n,m} the input value at row n, column m of that layer. The error term of the output layer is as follows:
The error terms of fully connected layer 6 are as follows:
The error terms of convolutional layer 5 are as follows:
The error terms of pooling layer 4 are as follows:
The error terms of convolutional layer 3 are as follows:
The error terms of pooling layer 2 are as follows:
The error terms of convolutional layer 1 are as follows:
W_old denotes the old weight and W_new the new weight; m, n are the current row and column of the feature map and M, N its total numbers of rows and columns; i, j index the kernel's rows and columns, and W_{i,j} is the kernel weight at row i, column j; η is the learning rate; net_{m,n} denotes the neuron at row m, column n; δ_{m+i,n+j} denotes the error term of the convolutional layer at row m+i, column n+j, and X_{m+i,n+j} its input at row m+i, column n+j. The convolutional-layer weight update (W_{i,j} ∈ C1, C3) is as follows; because the kernel's rows and columns form a diamond, i = 1, 2, ..., 5 and j ranges only over the positions inside the diamond of row i:
The pooling-layer weight-update formula (W_{i,j} ∈ P2, P4) is as follows (i = 1, 2; j = 1, 2):
The fully-connected-layer weight-update formula (W_{i,j} ∈ C5, F6, O7) is as follows:
b_old denotes the old bias and b_new the new bias; η is the learning rate and net denotes the neuron. When the input X = 1, the bias is exactly a weight, i.e., W_new = b_new if X = 1. The bias-update formula is as follows:
The experiments build the diamond-convolution neural network with TensorFlow. With 5*5 kernels, each kernel has (5*5+1)/2 = 13 weights and one bias. The first convolutional layer has (13+1)*6 = 84 trainable parameters and 84*(28*28) = 65856 connections; the third layer has (13+1)*16 = 224 trainable parameters and 224*(14*14) = 43904 connections. Pooling uses average pooling, so each feature map has two parameters, a weight and a bias: the second (pooling) layer has (1+1)*6 = 12 trainable parameters and 6*14*14*(2*2+1) = 5880 connections, and the fourth (pooling) layer has (1+1)*16 = 32 trainable parameters and 16*6*6*(2*2+1) = 2880 connections. The fifth (convolutional) layer is fully connected, with (6*6+1)*16*120 = 71040 parameters and connections, and the sixth, fully connected, layer has 120*84 = 10080 parameters and connections. The whole network has 81472 parameters and 199640 connections. Each batch trains 50 images; 20000 rounds of weight updates take 2600 seconds. Averaged over 10 runs, the test set reaches 99.12% accuracy.
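The per-layer trainable-parameter counts listed above can be summed as a quick check (figures taken from the text; the layer names are shorthand introduced here):

```python
# Trainable parameters per layer of the improved LeNet5, as given in the text.
params = {
    "C1": (13 + 1) * 6,            # 84:  13 diamond weights + 1 bias, 6 maps
    "C3": (13 + 1) * 16,           # 224
    "P2": (1 + 1) * 6,             # 12:  one weight + one bias per pooled map
    "P4": (1 + 1) * 16,            # 32
    "C5": (6 * 6 + 1) * 16 * 120,  # 71040: fully connected convolution
    "F6": 120 * 84,                # 10080
}
print(sum(params.values()))  # 81472, the total stated in the text
```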
In addition, the accuracy of network structures with square kernels (CNN) and diamond kernels (CNN_DF) was tested with the fully connected layer half-pruned (denoted drop_out=0.5) and unpruned (denoted drop_out=1), for both 5X5 and 3X3 kernel sizes; the results are shown in Table 1. With a 5X5 kernel and drop_out=1, the diamond kernel outperforms the square one by 0.03%. With a 5X5 kernel and drop_out=0.5, the diamond kernel is 0.02% worse than the square one. With a 3X3 kernel and drop_out=1, the diamond kernel is 0.06% worse than the square one, a larger gap than for the 5X5 kernel, possibly because a 3X3 kernel is already small and cutting it into a diamond makes it smaller still, so feature extraction is less thorough. With a 3X3 kernel and drop_out=0.5, the diamond kernel is 0.07% worse, little different from the drop_out=1 case. In short, changing the square kernel into a diamond reduces accuracy very little; without drop_out it can even improve accuracy, while accelerating training.
Table 1. Model accuracy test results
The computing resources of the different convolution kernels are analyzed in Table 2, taking the 5*5 kernel as an example.
CNN5X5 parameter count:
C2 = 5*5*32+32 = 832, C4 = 5*5*64+64 = 1664, C6 = 7*7*64*1024+1024 = 459776, C7 = 1024*10+10 = 10250; total parameters: C2+C4+C6+C7 = 472522.
CNN5X5_DF parameter count:
C2 = 13*32+32 = 448, C4 = 13*64+64 = 896, C6 = 7*7*64*1024+1024 = 459776, C7 = 1024*10+10 = 10250; total parameters: C2+C4+C6+C7 = 471370.
The parameter count of CNN5X5_DF is 99.76% of that of CNN5X5. The reduction is small because only the convolution-kernel parameters shrink, while the fully connected layers hold most of the parameters and are not reduced. However, since a convolutional neural network shares its convolution weights, a smaller kernel greatly reduces the number of connections of the whole network, and with it the number of multiplications.
CNN5X5 multiplication count:
C2 = 28*28*5*5*32 = 627200, C4 = 28*28*5*5*64 = 1254400, C6 = 7*7*64*1024 = 458752, C7 = 1024*10 = 10240; total multiplications: C2+C4+C6+C7 = 2350592.
CNN5X5_DF multiplication count:
C2 = 28*28*13*32 = 326144, C4 = 28*28*13*64 = 652288, C6 = 7*7*64*1024 = 458752, C7 = 1024*10 = 10240; total multiplications: C2+C4+C6+C7 = 1447424.
The diamond kernel has nearly half as many parameters as the square kernel, so the number of connections in the convolutional part is also reduced by nearly half. In a hardware implementation this saves many multiplications: by calculation, the multiplication count of CNN5X5_DF is only 61.6% of that of CNN5X5. Because the kernel is smaller, less data is convolved, which also effectively reduces the data bandwidth, so applying diamond convolution kernels in hardware implementations is of real significance.
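The multiplication counts above can be summed to verify the 61.6% figure (the two convolutional layers are recomputed from the stated expressions; the C6 and C7 counts are taken as listed in the text):

```python
# Multiplications per layer, following the figures given in the text.
c6, c7 = 458752, 10240  # final layers, shared by both models (as stated)
cnn = 28 * 28 * 5 * 5 * 32 + 28 * 28 * 5 * 5 * 64 + c6 + c7  # square 5x5 kernels
dcnn = 28 * 28 * 13 * 32 + 28 * 28 * 13 * 64 + c6 + c7       # diamond: 13 weights
print(cnn, dcnn)                   # 2350592 1447424
print(round(100 * dcnn / cnn, 1))  # 61.6 (percent)
```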
Table 2. Computing resources of the different convolution kernels

Model | Parameters | Multiplications |
---|---|---|
CNN5X5 | 472522 | 2350592 |
CNN5X5_DF | 471370 | 1447424 |
Claims (4)
1. A neural network pruning method based on diamond convolution, characterized by comprising the following steps:
(1) adding blank rows and columns of pixels to the input layer and the convolutional layers of a convolutional neural network;
(2) using odd-sized diamond convolution kernels in place of square kernels for convolution operations, using square kernels for pooling operations, and computing the neural network output by forward propagation;
(3) training the convolutional neural network containing diamond convolution kernels with the back-propagation algorithm, finding the gradient of each neuron, and obtaining new weights.
2. The neural network pruning method based on diamond convolution of claim 1, characterized in that, in step (2), the row and column lengths of the convolution kernel are odd.
3. The neural network pruning method based on diamond convolution of claim 1, characterized in that, in step (2), the local region is changed from a square to a diamond: the distance to the center point is constrained with the 1-norm, and the neighborhood retains the more effective center of the local receptive field; the number of weights in the diamond convolution kernel decreases by two per row from the central row toward the top and bottom rows, and the resulting input window and convolution window are approximately diamond-shaped.
4. The neural network pruning method based on diamond convolution of claim 1, characterized in that, in step (3), the convolutional neural network containing diamond convolution kernels is a pruning of the neural network: a kernel of width n needs 0.5*n*(n+1) parameters, 0.5*n*(n-1) fewer than a square kernel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811128219.4A CN109376859A (en) | 2018-09-27 | 2018-09-27 | Neural network pruning method based on diamond convolution |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811128219.4A CN109376859A (en) | 2018-09-27 | 2018-09-27 | Neural network pruning method based on diamond convolution |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109376859A true CN109376859A (en) | 2019-02-22 |
Family
ID=65402739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811128219.4A Pending CN109376859A (en) | 2018-09-27 | 2018-09-27 | A kind of neural networks pruning method based on diamond shape convolution |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109376859A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110718211A (en) * | 2019-09-26 | 2020-01-21 | 东南大学 | Keyword recognition system based on hybrid compressed convolutional neural network |
CN111222629A (en) * | 2019-12-31 | 2020-06-02 | 暗物智能科技(广州)有限公司 | Neural network model pruning method and system based on adaptive batch normalization |
CN111539463A (en) * | 2020-04-15 | 2020-08-14 | 苏州万高电脑科技有限公司 | Method and system for realizing image classification by simulating neuron dendritic branches |
CN112734025A (en) * | 2019-10-28 | 2021-04-30 | 复旦大学 | Neural network parameter sparsification method based on fixed base regularization |
CN112785663A (en) * | 2021-03-17 | 2021-05-11 | 西北工业大学 | Image classification network compression method based on arbitrary shape convolution kernel |
CN113344182A (en) * | 2021-06-01 | 2021-09-03 | 电子科技大学 | Network model compression method based on deep learning |
US11157769B2 (en) * | 2018-09-25 | 2021-10-26 | Realtek Semiconductor Corp. | Image processing circuit and associated image processing method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104867169A (en) * | 2015-05-18 | 2015-08-26 | 北京航空航天大学 | Infrared sensor physical effect real-time imaging simulation method based on GPU |
CN104915322A (en) * | 2015-06-09 | 2015-09-16 | 中国人民解放军国防科学技术大学 | Method for accelerating convolution neutral network hardware and AXI bus IP core thereof |
CN105160400A (en) * | 2015-09-08 | 2015-12-16 | 西安交通大学 | L21 norm based method for improving convolutional neural network generalization capability |
- 2018-09-27: application CN201811128219.4A filed; published as CN109376859A (status: Pending)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104867169A (en) * | 2015-05-18 | 2015-08-26 | Beihang University | Infrared sensor physical effect real-time imaging simulation method based on GPU |
CN104915322A (en) * | 2015-06-09 | 2015-09-16 | National University of Defense Technology | Method for accelerating convolutional neural network hardware and AXI bus IP core thereof |
CN105160400A (en) * | 2015-09-08 | 2015-12-16 | Xi'an Jiaotong University | L21-norm-based method for improving convolutional neural network generalization capability |
Non-Patent Citations (2)
Title |
---|
Jifeng Dai et al.: "Deformable Convolutional Networks", http://arxiv.org/pdf/1703.06211v3 * |
Kaizhou Li et al.: "Complex Convolution Kernel for Deep Networks", 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP) * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11157769B2 (en) * | 2018-09-25 | 2021-10-26 | Realtek Semiconductor Corp. | Image processing circuit and associated image processing method |
CN110718211A (en) * | 2019-09-26 | 2020-01-21 | Southeast University | Keyword recognition system based on hybrid compressed convolutional neural network |
CN112734025A (en) * | 2019-10-28 | 2021-04-30 | Fudan University | Neural network parameter sparsification method based on fixed base regularization |
CN112734025B (en) * | 2019-10-28 | 2023-07-21 | Fudan University | Neural network parameter sparsification method based on fixed base regularization |
CN111222629A (en) * | 2019-12-31 | 2020-06-02 | DMAI (Guangzhou) Co., Ltd. | Neural network model pruning method and system based on adaptive batch normalization |
CN111539463A (en) * | 2020-04-15 | 2020-08-14 | Suzhou Wangao Computer Technology Co., Ltd. | Method and system for realizing image classification by simulating neuron dendritic branches |
CN111539463B (en) * | 2020-04-15 | 2023-09-15 | Suzhou Wangao Computer Technology Co., Ltd. | Method and system for realizing image classification by simulating neuron dendritic branches |
CN112785663A (en) * | 2021-03-17 | 2021-05-11 | Northwestern Polytechnical University | Image classification network compression method based on arbitrary-shape convolution kernel |
CN112785663B (en) * | 2021-03-17 | 2024-05-10 | Northwestern Polytechnical University | Image classification network compression method based on arbitrary-shape convolution kernel |
CN113344182A (en) * | 2021-06-01 | 2021-09-03 | University of Electronic Science and Technology of China | Network model compression method based on deep learning |
Similar Documents
Publication | Title |
---|---|
CN109376859A (en) | A kind of neural networks pruning method based on diamond shape convolution | |
CN107526785B (en) | Text classification method and device | |
CN110059698B (en) | Semantic segmentation method and system based on edge dense reconstruction for street view understanding | |
CN101520894B (en) | Method for extracting significant object based on region significance | |
CN107506761A (en) | Brain image dividing method and system based on notable inquiry learning convolutional neural networks | |
CN110909801B (en) | Data classification method, system, medium and device based on convolutional neural network | |
CN110287777B (en) | Golden monkey body segmentation algorithm in natural scene | |
CN109919830A (en) | It is a kind of based on aesthetic evaluation band refer to human eye image repair method | |
CN107292267A (en) | Photo fraud convolutional neural networks training method and human face in-vivo detection method | |
CN110674704A (en) | Crowd density estimation method and device based on multi-scale expansion convolutional network | |
Yang et al. | Down image recognition based on deep convolutional neural network | |
CN107301396A (en) | Video fraud convolutional neural networks training method and human face in-vivo detection method | |
Mamatkulovich | Lightweight residual layers based convolutional neural networks for traffic sign recognition | |
CN110059769A (en) | The semantic segmentation method and system rebuild are reset based on pixel for what streetscape understood | |
CN110210278A (en) | A kind of video object detection method, device and storage medium | |
CN112149521A (en) | Palm print ROI extraction and enhancement method based on multitask convolutional neural network | |
CN113920516A (en) | Calligraphy character skeleton matching method and system based on twin neural network | |
CN113554084A (en) | Vehicle re-identification model compression method and system based on pruning and light-weight convolution | |
CN106503743A (en) | A kind of quantity is more and the point self-adapted clustering method of the high image local feature of dimension | |
CN109522953A (en) | The method classified based on internet startup disk algorithm and CNN to graph structure data | |
CN113688715A (en) | Facial expression recognition method and system | |
Chen et al. | Scale-aware rolling fusion network for crowd counting | |
Osaku et al. | Convolutional neural network simplification with progressive retraining | |
CN111428809B (en) | Crowd counting method based on spatial information fusion and convolutional neural network | |
CN110738213A (en) | image recognition method and device comprising surrounding environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-02-22 |