CN107679617B - Multi-iteration deep neural network compression method - Google Patents

Multi-iteration deep neural network compression method

Info

Publication number
CN107679617B
CN107679617B (application CN201611105480.3A)
Authority
CN
China
Prior art keywords
matrix
neural network
compression ratio
wer
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611105480.3A
Other languages
Chinese (zh)
Other versions
CN107679617A (en)
Inventor
李鑫
韩松
孙世杰
单羿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xilinx Technology Beijing Ltd
Original Assignee
Xilinx Technology Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/242,622 external-priority patent/US10621486B2/en
Priority claimed from US15/242,624 external-priority patent/US20180046903A1/en
Application filed by Xilinx Technology Beijing Ltd filed Critical Xilinx Technology Beijing Ltd
Priority to US15/390,559 priority Critical patent/US10762426B2/en
Publication of CN107679617A publication Critical patent/CN107679617A/en
Application granted granted Critical
Publication of CN107679617B publication Critical patent/CN107679617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/16 Speech classification or search using artificial neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Memory System (AREA)
  • Complex Calculations (AREA)

Abstract

A method of compressing a neural network, wherein weights between individual neurons of the neural network are represented by a plurality of matrices, the method comprising: a sensitivity analysis step for analyzing the sensitivity of each matrix of the plurality of matrices and determining the initial compression ratio of each matrix; a compression step, which is used for compressing each matrix based on the initial compression ratio to obtain a compressed neural network; and a retraining step for retraining the compressed neural network. The invention also discloses a device for compressing the neural network.

Description

Multi-iteration deep neural network compression method
This application claims priority from U.S. patent application No. 15/242,622, filed on August 22, 2016, and U.S. patent application No. 15/242,624, filed on August 22, 2016.
Technical Field
The invention relates to a multi-iteration deep neural network compression method and device.
Background
Compression of artificial neural networks
Artificial Neural Networks (ANNs), also called Neural Networks (NNs) for short, are mathematical computation models that mimic the behavioral characteristics of animal neural networks and perform distributed, parallel information processing. In recent years, neural networks have developed rapidly and are widely used in many fields such as image recognition, speech recognition, natural language processing, weather forecasting, gene expression analysis, content recommendation, and so on.
In a neural network there are a large number of nodes (also called "neurons") connected to each other. Neural networks have two characteristics: 1) each neuron computes and processes the weighted input values from adjacent neurons through a specific output function (also called the activation function); 2) the strength of information transmission between neurons is defined by so-called weights, and the learning algorithm continuously adjusts these weights.
Early neural networks had only two layers, an input layer and an output layer, and could not process complex logic, which limited their utility.
As shown in fig. 1, Deep Neural Networks (DNNs) revolutionized this by adding hidden layers between the input layer and the output layer.
Recurrent Neural Networks (RNNs) are a popular class of deep neural network models. Unlike traditional feed-forward neural networks, a recurrent neural network introduces directed cycles, which allows it to handle contextual relationships between inputs. In speech recognition the signal is strongly correlated in time; for example, a word in a sentence is closely related to the sequence of words that precedes it. Recurrent neural networks therefore have very wide application in the field of speech recognition.
However, with the rapid development of recent years the size of neural networks keeps growing; published state-of-the-art networks can reach hundreds of layers and hundreds of millions of connections, making them compute- and memory-intensive. As neural networks become progressively larger, model compression becomes extremely important.
In a deep neural network, the connections between neurons can be represented mathematically as a series of matrices. Although a well-trained network makes accurate predictions, its matrices are dense, i.e., filled with non-zero elements, so a large amount of storage and computing resources are consumed. This not only reduces speed but also increases cost, makes deployment on mobile devices very difficult, and thus greatly restricts the adoption of neural networks.
FIG. 2 shows a schematic diagram of a compressed neural network with pruning and retraining.
In recent years, extensive research has shown that in a trained neural network model matrix only the elements with larger weights represent important connections, while elements with smaller weights can be removed (set to zero);
the corresponding neurons are then pruned, as shown in fig. 3. The accuracy of the pruned network drops, but the weights that remain in the model matrices can be adjusted by retraining (fine-tuning), which reduces the accuracy loss.
Model compression sparsifies the dense matrices of a neural network, which effectively reduces storage, lowers the amount of computation, and achieves acceleration while maintaining accuracy. Model compression is extremely important for dedicated sparse neural network accelerators.
Speech Recognition
Speech recognition is the mapping of an analog speech signal onto a sequence of words from a specific vocabulary. In recent years, artificial neural network methods have far surpassed all traditional methods in the field of speech recognition and are becoming the mainstream of the whole industry; among them, deep neural networks are very widely used.
FIG. 4 illustrates an example of a speech recognition engine using a neural network. In the model of FIG. 4, a deep learning model is used to compute the speech output probabilities, i.e., to predict the similarity between the input speech sequence and the various matching candidates. With the solution of the present invention, the DNN part of FIG. 4 can be accelerated using, for example, an FPGA.
FIG. 5 further illustrates a deep learning model applied to the speech recognition engine of FIG. 4.
Fig. 5a shows a deep learning model including CNN (convolutional neural network), LSTM (Long Short-Term Memory), DNN (deep neural network), Softmax, and similar layers.
FIG. 5b shows the learning model targeted by the present invention, which uses multiple LSTM layers.
In the network model of fig. 5b, the input is a segment of speech; for example, about 1 second of speech is cut into 100 consecutive frames, and the features of each frame are represented by a floating-point vector.
LSTM (Long Short-Term Memory)
To solve the problem of memorizing long-term information, Hochreiter & Schmidhuber proposed the Long Short-Term Memory (LSTM) model in 1997.
FIG. 6 illustrates the use of an LSTM network model in the field of speech recognition. The LSTM network is a kind of RNN that replaces the simple repeated module of an ordinary RNN with a more complex set of interacting connections. LSTM networks also achieve very good results in speech recognition.
For more information on LSTM, see the following article: Sak H., Senior A. W., Beaufays F., "Long short-term memory recurrent neural network architectures for large scale acoustic modeling", INTERSPEECH 2014: 338-342.
As mentioned above, LSTM is a type of RNN. RNNs differ from DNNs in that RNNs are time-dependent: the computation at time T depends on the output at time T-1, i.e., computing the current frame requires the result of the previous frame.
In the LSTM structure shown in fig. 6, the parameters have the following meanings:
i, f and o denote the input, forget and output gates respectively, and g is the feature input of the cell;
the bold lines represent the output of the previous frame;
each gate has its own weight matrix, and passing the input at time T and the output at time T-1 through the gates accounts for most of the computation;
the dotted lines represent the peephole connections; the operations associated with the peepholes and the three element-wise multiplication symbols are all element-wise, so their computational cost is small.
As shown in fig. 7, an additional projection layer can be introduced for dimensionality reduction in order to reduce the amount of computation in the LSTM layer.
The computation corresponding to fig. 7 is:
i_t = σ(W_ix · x_t + W_ir · y_(t-1) + W_ic · c_(t-1) + b_i)
f_t = σ(W_fx · x_t + W_fr · y_(t-1) + W_cf · c_(t-1) + b_f)
c_t = f_t ⊙ c_(t-1) + i_t ⊙ g(W_cx · x_t + W_cr · y_(t-1) + b_c)
o_t = σ(W_ox · x_t + W_or · y_(t-1) + W_oc · c_t + b_o)
m_t = o_t ⊙ h(c_t)
y_t = W_yr · m_t
W_ic, W_cf and W_oc are the peephole connections, corresponding to the three dashed lines in the figure. The computations that take the cell state c as an operand are element-wise operations between vectors; they can also be understood as multiplying a vector by a diagonal matrix, in which case the weight matrix is diagonal.
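A numpy sketch of one time step of this projected LSTM cell may make the formulas above concrete; the layer sizes and random weights are hypothetical, and g(·) and h(·) are assumed to be tanh:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstmp_step(x_t, y_prev, c_prev, W, b):
    """One time step of an LSTM with a projection layer, using the matrix names
    from the formulas above. The peephole weights (W_ic, W_cf, W_oc) act as
    diagonal matrices, so they are stored as vectors and applied element-wise."""
    i_t = sigmoid(W['ix'] @ x_t + W['ir'] @ y_prev + W['ic'] * c_prev + b['i'])
    f_t = sigmoid(W['fx'] @ x_t + W['fr'] @ y_prev + W['cf'] * c_prev + b['f'])
    c_t = f_t * c_prev + i_t * np.tanh(W['cx'] @ x_t + W['cr'] @ y_prev + b['c'])
    o_t = sigmoid(W['ox'] @ x_t + W['or'] @ y_prev + W['oc'] * c_t + b['o'])
    m_t = o_t * np.tanh(c_t)   # cell output
    y_t = W['yr'] @ m_t        # projection layer for dimensionality reduction
    return y_t, c_t

# Hypothetical sizes: input 4, cell 6, projected output 3.
rng = np.random.default_rng(0)
n_in, n_cell, n_proj = 4, 6, 3
W = {k: rng.standard_normal((n_cell, n_in)) for k in ('ix', 'fx', 'cx', 'ox')}
W.update({k: rng.standard_normal((n_cell, n_proj)) for k in ('ir', 'fr', 'cr', 'or')})
W.update({k: rng.standard_normal(n_cell) for k in ('ic', 'cf', 'oc')})  # peepholes
W['yr'] = rng.standard_normal((n_proj, n_cell))
b = {k: np.zeros(n_cell) for k in ('i', 'f', 'c', 'o')}

y, c = lstmp_step(rng.standard_normal(n_in), np.zeros(n_proj), np.zeros(n_cell), W, b)
print(y.shape, c.shape)  # (3,) (6,)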
For an LSTM neural network the structure is complex, so it is difficult to reach an ideal compression target in a single pass; multiple iterations are required.
It is therefore desirable to provide a multi-iteration compression method for a neural network (e.g., an LSTM) that reduces storage requirements, increases computational speed, lowers power consumption and maintains accuracy, thereby achieving overall performance optimization.
Disclosure of Invention
To this end, in one aspect, the invention proposes a method of compressing a neural network, weights between individual neurons of the neural network being represented by a plurality of matrices, the method comprising: a sensitivity analyzing step for analyzing the sensitivity of each matrix in the plurality of matrices and determining the initial compression ratio of each matrix; a compression step, which is used for compressing each matrix based on the initial compression ratio to obtain a compressed neural network; and a retraining step for retraining the compressed neural network.
In another aspect, the present invention provides an apparatus for compressing a neural network, weights between respective neurons of the neural network being represented by a plurality of matrices, the apparatus comprising: a sensitivity analysis unit for analyzing the sensitivity of each matrix of the plurality of matrices and determining the initial compression ratio of each matrix; a compression unit, configured to compress the matrices based on the initial compression ratio to obtain a compressed neural network; and a retraining unit for retraining the compressed neural network.
Drawings
Fig. 1 shows a model of Deep Neural Networks (DNNs).
FIG. 2 shows a schematic diagram of a compressed neural network with pruning and retraining.
Fig. 3 shows a pruned neural network, in which a portion of the neurons are pruned.
FIG. 4 illustrates an example of a speech recognition engine using a neural network.
FIG. 5 illustrates a deep learning model applied to a speech recognition engine.
FIG. 6 illustrates an LSTM network model applied in the field of speech recognition.
FIG. 7 illustrates an improved LSTM network model.
FIG. 8 illustrates a compression method of an LSTM neural network, where compression is achieved through multiple iterative operations, according to one embodiment of the present invention.
Fig. 9 shows the specific steps of the sensitivity test in the compression method according to an embodiment of the present invention.
Fig. 10 shows the corresponding curves resulting from applying a sensitivity test on an LSTM network using a compression method according to one embodiment of the invention.
Fig. 11 shows the specific steps of determining the final consistency sequence and pruning in the compression method according to an embodiment of the present invention.
Fig. 12 shows specific sub-steps of iteratively adjusting the initial consistency sequence by a "compression trial-consistency sequence adjustment" in the compression method according to one embodiment of the invention.
Fig. 13 shows the specific steps of retraining the neural network in the compression method according to an embodiment of the present invention.
Detailed Description
Past results of the present inventors
In the inventors' previous paper "Learning both Weights and Connections for Efficient Neural Networks", a method of compressing a neural network (e.g., a CNN) by pruning was proposed. The method comprises the following steps.
An initialization step, in which the weights of the convolutional layers and FC layers are initialized to random values, generating a fully connected ANN with weight parameters.
A training step, in which the ANN is trained and its weights are adjusted according to its accuracy until the accuracy reaches a preset standard. The training step adjusts the weights of the ANN with a stochastic gradient descent algorithm, i.e., weight values are adjusted based on variations in the accuracy of the ANN. For an introduction to stochastic gradient descent, see the above-mentioned "Learning both Weights and Connections for Efficient Neural Networks". The accuracy can be quantified as the difference between the ANN's predictions and the correct results on the training data set.
A pruning step of discovering unimportant connections in the ANN based on a predetermined condition, and pruning the unimportant connections. In particular, the weight parameters of the pruned connection are no longer saved. For example, the predetermined condition includes any one of: the weight parameter of the connection is 0; or the weight parameter of the connection is less than a predetermined value.
A fine-tuning step, in which the pruned connections are reset to connections whose weight parameter is zero, i.e., the pruned connections are restored and assigned a weight value of 0.
An iteration step, in which it is judged whether the accuracy of the ANN has reached the preset standard. If not, the training, pruning and fine-tuning steps are repeated.
Improvements proposed by the invention
The invention provides a multi-iteration deep neural network compression method.
FIG. 8 illustrates a compression method suitable for an LSTM neural network, in which the compression of the neural network is achieved through multiple iterative operations, according to one embodiment of the present invention.
According to the embodiment of fig. 8, each iteration specifically includes three steps of sensitivity analysis, pruning and retraining. Each step is specifically described below.
Step 8100, sensitivity analysis (sensitivity analysis).
In this step, sensitivity analysis is performed, for example, for all matrices in the LSTM network in order to determine the initial density (or initial compression ratio) of the different matrices.
Fig. 9 shows the specific steps of the sensitivity test.
As shown in fig. 9, at step 8110 compression is attempted for each matrix in the LSTM network at different densities (the selected densities are, for example, 0.1, 0.2, ..., 0.9; the specific method of compressing a matrix is described in step 8200). The word error rate (WER) of the network compressed at each density is then measured.
When a sequence of words is recognized, some words may be erroneously inserted, deleted or substituted. For an original word sequence containing N words, if I words are inserted, D words deleted and S words substituted, the WER is:
WER = (I + D + S) / N,
where WER is typically expressed as a percentage. In general the WER of the compressed network increases, which means that the accuracy of the compressed network deteriorates.
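For illustration only (this code is not part of the patent), the WER can be computed with a standard word-level edit distance; the example sentences are made up:

def word_error_rate(reference, hypothesis):
    """WER = (I + D + S) / N via word-level edit distance, where N is the
    number of words in the reference sequence."""
    ref, hyp = list(reference), list(hypothesis)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat".split(),
                      "the cat sit on mat".split()))  # 2 errors / 6 words ≈ 0.33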
In step 8120, for each matrix, the WER of the neural network at the different densities is plotted as a curve with density on the abscissa and WER on the ordinate. Such a density-WER curve is drawn for every matrix.
In step 8130, for each matrix, the density corresponding to the point of the curve where the WER changes drastically is taken as the initial density of that matrix. An initial density is thus obtained for each matrix.
In this embodiment, the density at the inflection point of the density-WER curve is selected as the initial density of the matrix. Specifically, within one iteration the inflection point is determined as follows:
the WER of the initial network before compression (i.e., at density 1) is known: WER(initial);
the WERs of the network compressed at the different densities are: WER(0.1), WER(0.2), ..., WER(0.9);
ΔWER is calculated, i.e.: WER(0.1) compared with WER(initial), WER(0.2) compared with WER(initial), ..., WER(0.9) compared with WER(initial);
based on the calculated ΔWER, the inflection point is the point with the smallest density among all points whose ΔWER is below a certain threshold. It should be understood that the point of the curve where the WER changes drastically could also be chosen by other strategies, which are likewise within the scope of the present invention.
In one example, a 3-layer LSTM network has 9 dense matrices per layer that need to be compressed: Wix, Wfx, Wcx, Wox, Wir, Wfr, Wcr, Wor, Wrm, so a total of 27 dense matrices need to be compressed.
First, for each matrix, 9 compression tests are carried out at densities from 0.1 to 0.9 in steps of 0.1, the WER of the whole network is measured in each of the 9 tests, and the corresponding density-WER curve is drawn. For the 27 matrices, 27 curves are thus obtained.
Then, for each matrix, the point where the WER changes drastically is found from that matrix's density-WER curve (e.g., the curve drawn for the Wix matrix of the first LSTM layer).
Here, the inflection point is taken to be the point with the smallest density among all points whose ΔWER, relative to the WER of the initial network of the current iteration, is less than 1%.
For example, if the WER of the initial network is 24%, the point with the smallest density among all points on the curve whose WER is below 25% is selected as the inflection point, and the density at that point is taken as the initial density of Wix.
Thus an initial density sequence of length 27 is obtained, giving the initial density of each matrix. Compression can then be guided by this initial density sequence.
An example of an initial density sequence is as follows (for each layer the matrices are arranged in the order Wcx, Wix, Wfx, Wox, Wcr, Wir, Wfr, Wor, Wrm):
densityList=[0.2,0.1,0.1,0.1,0.3,0.3,0.1,0.1,0.3,
0.5,0.1,0.1,0.1,0.2,0.1,0.1,0.1,0.3,
0.4,0.3,0.1,0.2,0.3,0.3,0.1,0.2,0.5]
Fig. 10 shows the corresponding density-WER curves for the 9 matrices of a single-layer LSTM network. It can be seen that the sensitivity of the different matrices to compression varies considerably; w_g_x, w_r_m and w_g_r are more sensitive than the others, i.e., their density-WER curves contain a point where max(ΔWER) > 1.
In step 8200, the final density is determined and pruning is performed.
Fig. 11 shows the specific steps of determining the final density sequence and pruning.
As shown in fig. 11, step 8200 of fig. 8 may include several substeps.
First, at step 8210, an initial compression test is performed on each matrix based on the initial density sequence determined in step 8100.
Then, at step 8215, the WER of the compressed network resulting from the initial compression test is measured. If the ΔWER between the network before and after compression exceeds a certain threshold ε (e.g., 4%), the process proceeds to step 8220.
At step 8220, the initial density sequence is adjusted through a "compression test - density sequence adjustment" iteration. At step 8225, the final density sequence is obtained.
If ΔWER does not exceed the threshold ε, the process proceeds directly to step 8225 and the initial density sequence is taken as the final density sequence.
Finally, at step 8230, the LSTM network is pruned based on the final density sequence.
Next, each substep of fig. 11 will be explained in detail.
Step 8210, perform the initial compression test
Experience from research shows that matrix weights with larger absolute values correspond to stronger neuron connections. Therefore, in this embodiment, matrix compression is performed based on the absolute values of the matrix elements. It should be understood that the matrices could also be compressed with other strategies, which are likewise within the scope of the present invention.
According to one embodiment of the invention, all elements of each matrix are sorted by absolute value. Each matrix is then compressed to the initial density determined in step 8100: only the fraction of elements with the largest absolute values corresponding to that density is retained, and the remaining elements are set to zero. For example, if the initial density of a matrix is 0.4, the 40% of its elements with the largest absolute values are retained and the remaining 60% are zeroed out.
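A possible implementation of this magnitude-based pruning (an illustrative sketch; the patent does not prescribe a particular data layout):

import numpy as np

def prune_by_density(matrix, density):
    """Keep the `density` fraction of entries with the largest absolute values,
    zero out the rest, and return the pruned copy together with its mask."""
    flat = np.abs(matrix).ravel()
    k = int(round(density * flat.size))              # number of entries to keep
    if k == 0:
        return np.zeros_like(matrix), np.zeros_like(matrix, dtype=bool)
    threshold = np.partition(flat, flat.size - k)[flat.size - k]  # k-th largest |w|
    mask = np.abs(matrix) >= threshold
    return matrix * mask, mask

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 5))
W_pruned, mask = prune_by_density(W, 0.4)
print(mask.sum(), "of", W.size, "weights kept")       # roughly 40%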
At step 8215, it is determined whether the ΔWER between the network before and after compression exceeds a certain threshold ε (e.g., 4%).
If it does, the "compression test - density sequence adjustment" iteration of step 8220 is performed.
At step 8225, the final density sequence is obtained by adjusting the initial density sequence in step 8220.
Fig. 12 shows the specific steps of iteratively adjusting the initial density sequence by "compression test - density sequence adjustment".
As shown in fig. 12, at step 8221 the densities of the relatively sensitive matrices are adjusted, i.e., increased by a small amount, e.g., 0.05. A compression test is then performed on the corresponding matrices at the new densities.
In this embodiment, the strategy of the compression test is the same as the initial compression test, but it should be understood that other strategies may be selected to compress the matrix and are also within the scope of the present invention.
Then the WER of the compressed network is calculated. If it does not meet the target, the densities of the relatively sensitive matrices are increased further, for example by 0.1, and the compression test is repeated at the new densities, and so on, until the ΔWER between the network before and after compression falls below the threshold ε (e.g., 4%).
Alternatively or in addition, at step 8222 the densities of the relatively insensitive matrices can be fine-tuned so that the ΔWER between the network before and after compression falls below a tighter threshold ε' (e.g., 3.5%). In this way the accuracy of the compressed network can be further improved.
As shown in fig. 12, fine-tuning the densities of the relatively insensitive matrices proceeds in the same way as adjusting the densities of the relatively sensitive matrices described above.
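Before the concrete numbers below, here is a sketch of this adjustment loop; evaluate_wer stands for the (assumed) routine that prunes the network to the given densities, runs speech recognition and returns the WER, and the lambda at the end is only a toy stand-in that makes the sketch executable:

def adjust_densities(density_list, sensitive, evaluate_wer, wer_initial,
                     epsilon=0.04, step=0.05, max_density=1.0):
    """'Compression test - density sequence adjustment': raise the densities of
    the relatively sensitive matrices in increments of `step` until the WER
    degradation drops below `epsilon`."""
    densities = list(density_list)
    while evaluate_wer(densities) - wer_initial > epsilon:
        densities = [min(d + step, max_density) if s else d
                     for d, s in zip(densities, sensitive)]
    return densities

# Toy stand-in: pretend the WER improves linearly as the average density grows.
fake_eval = lambda ds: 0.34 - 0.2 * (sum(ds) / len(ds))
print(adjust_densities([0.2, 0.1, 0.3], [True, False, True],
                       fake_eval, wer_initial=0.242))  # roughly [0.35, 0.1, 0.45]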
In one example, the original WER of the neural network is 24.2%, and the initial density sequence obtained in step 8100 is:
densityList=[0.2,0.1,0.1,0.1,0.3,0.3,0.1,0.1,0.3,
0.5,0.1,0.1,0.1,0.2,0.1,0.1,0.1,0.3,
0.4,0.3,0.1,0.2,0.3,0.3,0.1,0.2,0.5]
The network is pruned according to this initial density sequence, and the WER of the compressed network deteriorates to 32%; the initial density sequence therefore needs to be adjusted. The specific steps are as follows:
from the results of step 8100, it is known that the matrices Wcx, Wcr, Wir, Wrm in the first layer LSTM, Wcx, Wcr, Wrm in the second layer, and Wcx, Wix, Wox, Wcr, Wir, Wor, Wrm in the third layer are relatively sensitive, and the remaining matrices are relatively insensitive.
First, for the relatively sensitive matrices, the corresponding initial densities are increased by a step of 0.05.
Then a compression test is performed on the matrices of the neural network at the increased densities. The WER of the compressed network is calculated to be 27.7%. The requirement that the ΔWER between the network before and after compression be less than 4% is now met, so adjustment of the densities of the relatively sensitive matrices stops.
According to another embodiment of the invention, the initial densities of the relatively insensitive matrices can optionally be fine-tuned so that the ΔWER between the network before and after compression is < 3.5%. In this example this step is omitted.
Thus, the final density sequence obtained through the "compression test - density sequence adjustment" iterative adjustment is:
densityList=[0.25,0.1,0.1,0.1,0.35,0.35,0.1,0.1,0.35,
0.55,0.1,0.1,0.1,0.25,0.1,0.1,0.1,0.35,
0.45,0.35,0.1,0.25,0.35,0.35,0.1,0.25,0.55]
At this point the overall density of the compressed neural network is about 0.24.
In step 8230, pruning is performed based on the final density sequence.
For example, in this embodiment the matrices are again pruned based on the absolute values of their elements.
Specifically, all elements of each matrix are sorted by absolute value; each matrix is then compressed to its density in the final density sequence, retaining for each matrix only the fraction of elements with the largest absolute values corresponding to that density and setting the rest to zero.
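As an illustrative sketch (the matrix sizes and random values are hypothetical; the per-matrix densities are taken from the first row of the final density sequence above), applying a final density sequence to the nine matrices of one layer could look like this:

import numpy as np

rng = np.random.default_rng(2)
names = ['Wcx', 'Wix', 'Wfx', 'Wox', 'Wcr', 'Wir', 'Wfr', 'Wor', 'Wrm']
layer = {n: rng.standard_normal((8, 8)) for n in names}
final_density = dict(zip(names, [0.25, 0.1, 0.1, 0.1, 0.35, 0.35, 0.1, 0.1, 0.35]))

kept, total = 0, 0
for n, W in layer.items():
    # zero out everything below the (1 - density) quantile of |W|
    thresh = np.quantile(np.abs(W), 1.0 - final_density[n])
    layer[n] = np.where(np.abs(W) >= thresh, W, 0.0)
    kept += np.count_nonzero(layer[n])
    total += W.size
print(f"overall density after pruning: {kept / total:.2f}")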
At step 8300, retraining (fine tuning)
Training of the neural network is the process of optimizing the loss function. The loss function refers to the difference between the predicted and true results of the neural network model at a given input. It is desirable that the value of the loss function is as small as possible.
The essence of training a neural network is finding an optimal solution. Retraining means searching for the optimal solution starting from an existing, possibly suboptimal solution that is already close to it, i.e., continuing training from a given starting point.
For example, for the LSTM deep neural network, after the pruning operation of step 8200 training continues on the basis of the retained weights; finding the optimal solution from there is the retraining process.
Fig. 13 shows the specific steps of retraining the neural network.
As shown in fig. 13, the input is the neural network after the pruning operation at step 8200.
In step 8310, the sparse neural network obtained in step 8200 is trained with a training set, and a weight matrix is updated.
Then, at step 8320, it is determined whether the weight matrices have converged to a locally optimal solution.
If they have not converged, the procedure returns to step 8310 and the training and weight-update steps are repeated.
If they have converged, the procedure proceeds to step 8330 and the final neural network is obtained.
In one embodiment of the present invention, a gradient descent method is used to update the weight matrix during retraining.
Specifically, the gradient descent method is based on the following observation: if a real-valued function F(x) is differentiable and defined at a point a, then F(x) decreases fastest in the direction opposite to its gradient at a, i.e., in the direction of -∇F(a).
Thus, if
b = a - γ∇F(a)
for a sufficiently small step size γ > 0, then F(a) ≥ F(b), where a and b are vectors.
With this in mind, one can start from an initial estimate x_0 of a local minimum of F and consider the sequence x_0, x_1, x_2, ... such that
x_(n+1) = x_n - γ_n ∇F(x_n), n ≥ 0.
It then follows that
F(x_0) ≥ F(x_1) ≥ F(x_2) ≥ ...
If successful, the sequence (x_n) converges to the desired local minimum. Note that the step size γ may change at each iteration.
Taking F(x) to be the loss function, this explains how gradient descent reduces the model's prediction loss.
In one example, with reference to the paper "DSD: Regularizing Deep Neural Networks with Dense-Sparse-Dense Training Flow" (NIPS 2016), the retraining method for the LSTM deep neural network is as follows:
W^(t+1) = W^(t) - η ∇f(W^(t); x^(t))
where W is the weight matrix, η is the learning rate (i.e., the step size of the stochastic gradient descent method), f is the loss function, ∇f(W^(t); x^(t)) is the gradient of the loss function, x^(t) is the training data, and t+1 denotes the updated weights.
The formula means: the weight matrix is updated by subtracting from it the product of the learning rate and the gradient of the loss function.
In another example, the distribution of non-zero elements of each compressed matrix in the network is maintained by using a mask matrix, which contains only 0 and 1 elements and records where the non-zero elements of the compressed matrix are located.
The typical retraining-with-mask method is as follows:
W^(t+1) = W^(t) - η ∇f(W^(t); x^(t)) ⊙ Mask
Mask = (W^(0) ≠ 0)
That is, the computed gradient is multiplied element-wise by the mask matrix, so that the gradient used to update the weight matrix has the same sparsity pattern as the mask and the pruned weights remain zero.
Next, a specific example of the retraining process and the convergence judgment criterion is described in detail.
In this example, the retraining inputs are: the network to be trained, the learning rate, the maximum number of training rounds max_iters, keep_lr_iters (the number of rounds during which the original learning rate is kept), start_halving_impr (which determines when to reduce the learning rate, e.g., 0.01), end_halving_impr (which determines when to terminate training, e.g., 0.001), halving_factor (e.g., 0.5), the data sets (training set, cross-validation set, test set), and so on.
The retraining inputs also include parameters such as the learning momentum, num-stream, batch-size, etc., which are omitted here. The retraining output is the trained network.
The specific process of retraining is as follows:
1. The average loss of the initial model to be trained is measured on the cross-validation data set (the cross-validation loss, hereinafter simply "loss") and is used as the initial baseline for judging how well the network is trained;
2. iterative training:
Iterative training is divided into several epochs (one pass over all the data in the training set is called an epoch, hereinafter a "round"); the total number of rounds does not exceed the maximum number of training rounds max_iters;
in each round, the weights of the matrices in the network are updated with the gradient descent method using the training data set;
after each round of training the trained network is saved and its average loss is measured on the cross-validation data set. If this loss is larger than the loss of the previous accepted round (denoted loss_prev), the round is rejected (the next round starts again from the previously accepted result); otherwise the round is accepted (the next round is based on this round's result) and its loss is stored;
dynamic adjustment of the learning rate and conditions for terminating training: after each round, the improvement is calculated as (loss_prev - loss) / loss_prev and recorded as real_impr, which represents the relative improvement of the loss of the currently accepted round over that of the previously accepted round. Processing then depends on real_impr:
1) if the number of rounds is less than keep_lr_iters, the learning rate is not changed;
2) if real_impr is smaller than start_halving_impr (e.g., 0.01), i.e., the improvement of the current round over the previous one is already small, indicating that the locally optimal solution is being approached, the learning rate is decreased (multiplied by halving_factor, usually halved) so that the step size of the gradient descent method shrinks and the locally optimal solution is approached with finer steps;
3) if real_impr is smaller than end_halving_impr (e.g., 0.001), i.e., the relative improvement of the current round over the previous one is very small, training is considered to have reached its end point and is terminated (but if the number of rounds is still less than min_iters, training continues until the min_iters-th round).
Therefore, the end-of-training situation may include the following four cases:
1. Training runs for min_iters rounds; if no real_impr smaller than end_halving_impr occurs along the way, the result of the min_iters-th round is taken;
2. training runs for min_iters rounds; if real_impr falls below end_halving_impr along the way, the result of the round with the smallest loss among the first min_iters rounds is taken;
3. if training proceeds normally beyond min_iters rounds but for fewer than max_iters rounds, and real_impr falls below end_halving_impr, the result of the last round, i.e., the round with the lowest loss, is taken;
4. training proceeds normally to the max_iters-th round, and the result of the max_iters-th round is taken when real_impr is smaller than end_halving_impr.
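As an illustration only, the schedule above can be sketched as follows; train_epoch and cv_loss are assumed callbacks (one epoch of masked gradient descent and the cross-validation loss respectively), and the scalar toy at the end merely shows the control flow running:

def retrain(model, train_epoch, cv_loss, max_iters=20, min_iters=3,
            keep_lr_iters=1, lr=1.0, start_halving_impr=0.01,
            end_halving_impr=0.001, halving_factor=0.5):
    """Sketch of the retraining schedule: accept or reject each round by its
    cross-validation loss, halve the learning rate once improvements become
    small, and stop once the relative improvement is negligible."""
    best_model, loss_prev = model, cv_loss(model)
    for it in range(1, max_iters + 1):
        candidate = train_epoch(best_model, lr)
        loss = cv_loss(candidate)
        if loss < loss_prev:                         # accept this round
            real_impr = (loss_prev - loss) / loss_prev
            best_model, loss_prev = candidate, loss
        else:                                        # reject; keep previous weights
            real_impr = 0.0
        if it > keep_lr_iters and real_impr < start_halving_impr:
            lr *= halving_factor                     # approach the optimum with smaller steps
        if real_impr < end_halving_impr and it >= min_iters:
            break                                    # relative improvement is negligible
    return best_model

# Scalar toy: "model" is a number, the loss is (model - 3)^2.
step = lambda m, lr: m - lr * 2 * (m - 3)            # one gradient step on the toy loss
loss = lambda m: (m - 3) ** 2
print(retrain(0.0, step, loss))                      # converges to 3.0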
It should be noted that the above example describes one retraining process and one criterion for deciding whether the matrices have converged to a locally optimal solution. In practice, to improve compression efficiency it is not necessary to wait for full convergence; an intermediate result can be taken and the next round of compression performed.
The convergence criterion may also include checking whether the WER of the trained network meets certain requirements, and such criteria are likewise within the scope of the present invention.
Through retraining the WER of the network is reduced, and the accuracy loss caused by compression is therefore reduced. For example, through retraining, the WER of an LSTM network with a density of 0.24 can drop from 27.7% to 25.8%.
Iteration step (iteration)
Referring back to fig. 8, as described above, the present invention compresses the neural network to the desired density by performing multiple iterations, i.e., by repeating steps 8100, 8200 and 8300.
For example, in one case the desired final network density is 0.14.
In the first iteration, a network with density 0.24 and WER 25.8% is obtained after step 8300.
Steps 8100, 8200 and 8300 are then repeated to compress the network over multiple further rounds.
For example, after the second round of compression the network has a density of 0.18 and a WER of 24.7%.
After the third round of compression the network has a density of 0.14 and a WER of 24.6%, which meets the target.
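The overall multi-iteration flow can be summarized with a toy sketch in which the real sensitivity analysis, pruning and retraining of steps 8100, 8200 and 8300 are replaced by placeholders:

class ToyNetwork:
    """Stand-in for a network: it only tracks the densities of its 27 matrices."""
    def __init__(self, densities):
        self.densities = densities
    def overall_density(self):
        return sum(self.densities) / len(self.densities)

def compress(network, target_density):
    """Repeat sensitivity analysis (8100), pruning (8200) and retraining (8300)
    until the overall density reaches the target."""
    while network.overall_density() > target_density:
        densities = [max(0.1, round(d - 0.05, 2)) for d in network.densities]  # toy step 8100
        network = ToyNetwork(densities)                                        # toy step 8200
        # step 8300: retraining would recover accuracy here before the next round
        print(f"density after this round: {network.overall_density():.2f}")
    return network

compress(ToyNetwork([0.24] * 27), target_density=0.14)   # prints 0.19, then 0.14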
Technical effects
Based on the above technical scheme, the invention provides a multi-iteration deep neural network compression method with, in particular, the following beneficial effects:
Saving storage space
With this method, a series of compression operations turns the original dense neural network into a sparse network whose accuracy is essentially unchanged while its actual number of parameters is greatly reduced. This addresses the problem of the low proportion of useful data during computation, greatly reduces the required storage, makes on-chip storage possible, and effectively improves computational efficiency. For example, dense LSTM networks can generally be compressed to a density below 0.15 while keeping the relative accuracy loss of the network under 5%.
Increasing the compression speed
According to the preferred embodiment of the present invention, since retraining is very time-consuming, the sensitivity analysis uses the inflection point of each density-WER curve as the initial density of the corresponding matrix when determining the initial density sequence, so that each iteration can take a larger compression step, converging to the target compression level sooner and shortening the compression time. In addition, using a criterion such as the WER during retraining to decide whether a locally optimal solution has been reached reduces the number of retraining rounds and further shortens the compression time.
The above embodiments use the LSTM network only as an example to illustrate the invention. It should be understood that the present invention is not limited to LSTM neural networks and can be applied to various other neural networks.

Claims (18)

1. A method of compressing a neural network in speech recognition to optimize speech recognition performance, weights between individual neurons of the neural network being represented by a plurality of matrices, the method comprising:
a sensitivity analysis step comprising:
(1) obtaining the word error rate WER_initial of the neural network before compression,
(2) obtaining compressed neural networks based on different compression ratios d1, d2, ... dn, and performing speech recognition with each compressed neural network to obtain the corresponding word error rates WER_d1, WER_d2, ... WER_dn of the speech recognition results, whereby the sensitivity of each of the plurality of matrices to the speech recognition results is analyzed, and
(3) based on the word error rates WER_initial and WER_d1, WER_d2, ... WER_dn of the respective neural networks, selecting one of the different compression ratios as an initial compression ratio, thereby determining the initial compression ratio of each matrix;
a compression step, which is used for compressing each matrix based on the initial compression ratio to obtain a compressed neural network;
a retraining step, which is used to retrain the compressed neural network so that the word error rate of the speech recognition results is reduced.
2. The method of claim 1, further comprising:
iteratively performing the sensitivity analysis step, the compression step, and the retraining step.
3. The method of claim 1, wherein selecting one of the plurality of different compression ratios as the initial compression ratio comprises:
calculating each ΔWER, i.e., the difference between WER_initial and each of WER_d1, WER_d2, ... WER_dn;
based on the respective ΔWER, selecting the largest compression ratio among all the compression ratios d1, d2, ... dn for which ΔWER is smaller than a predetermined threshold.
4. The method of claim 1, wherein the compressing step further comprises:
and pruning the corresponding matrix based on the initial compression ratio of each matrix.
5. The method of claim 4, wherein the pruning operation comprises:
sorting all elements in each matrix from small to large according to absolute values;
retaining the proportion of elements with the largest absolute values corresponding to the initial compression ratio; and
the remaining elements are set to zero.
6. The method of claim 1, wherein the compressing step further comprises:
a first compression step of compressing each matrix of the neural network based on an initial compression ratio of each matrix;
an adjusting step, adjusting the initial compression ratio of each matrix based on the word error rate WER of the network after the first compression step to obtain the adjusted compression ratio of each matrix;
and a second compression step of compressing the respective matrices of the neural network based on the adjusted compression ratios of the respective matrices to obtain a compressed neural network.
7. The method of claim 6, wherein the adjusting step further comprises:
adjusting the compression ratio, namely adjusting the compression ratio of the relatively sensitive matrix, and compressing the corresponding matrix according to the adjusted compression ratio;
judging whether the WER of the neural network compressed with the adjusted compression ratio meets a predetermined requirement; if the predetermined requirement is not met, returning to the compression ratio adjusting step to continue adjusting the compression ratio of the relatively sensitive matrix;
if said predetermined requirement is met, said adjusted compression ratio of the relatively sensitive matrix is taken as the adjusted compression ratio of the corresponding matrix.
8. The method of claim 7, wherein said adjusting the compression ratio step comprises:
and reducing the compression ratio of the relative sensitivity matrix by a certain step length based on the word error rate WER of the neural network.
9. The method of claim 6, wherein the adjusting step further comprises:
adjusting the compression ratio, namely adjusting the compression ratio of the relatively insensitive matrix, and compressing the corresponding matrix according to the adjusted compression ratio;
judging whether the WER of the neural network compressed by the adjusted compression ratio meets a preset requirement or not;
if the preset requirement is not met, returning to the step of adjusting the compression ratio to continuously adjust the compression ratio of the relatively insensitive matrix;
if said predetermined requirement is met, said adjusted compression ratio of the relatively insensitive matrix is taken as the adjusted compression ratio of the corresponding matrix.
10. The method of claim 9, wherein said adjusting the compression ratio step comprises:
based on the word error rate WER of the neural network, reducing the compression ratio of the relatively insensitive matrix by a certain step size.
11. The method of claim 1, wherein the retraining step further comprises:
training, namely training the neural network by using a training set, and updating a weight matrix;
judging whether the weight matrix converges to a local optimal solution;
if the local optimal solution is not converged, returning to the training step;
and if the optimal solution is converged, taking the neural network as a final neural network.
12. The method of claim 11, wherein the training step comprises:
inputting training set data, calculating a derivative of a loss function to a network parameter, and solving a gradient matrix;
updating the weight matrix in the network with a stochastic gradient descent method, wherein the updated weight matrix = the weight matrix before updating - learning rate × gradient matrix;
calculating the average loss of the network aiming at the weight matrix in the updated network;
judging whether the training of the current round is effective or not, wherein if the average loss is larger than that of the effective training of the previous round, the training of the current round is ineffective; if the average loss is smaller than that of the previous round of effective training, the round of training is effective;
if the training of the current round is invalid and the maximum training round number is not reached, adjusting the learning rate, and continuing the training on the basis of the previous effective training round;
and if the training of the current round is effective, performing a judgment step.
13. The method of claim 11, wherein the training step further comprises:
and updating the weight matrix by using the mask matrix.
14. The method of claim 11, wherein the determining step comprises:
and testing the word error rate of the neural network obtained in the training step, and if the word error rate meets certain requirements, considering that the network has converged to a local optimal solution.
15. The method of claim 11, wherein the determining step comprises:
testing the relative improvement of the average loss of the network accepted in the current round compared with that of the previously accepted round, and if the relative improvement meets certain requirements, considering the network to have converged to the locally optimal solution.
16. An apparatus for compressing a neural network in speech recognition to optimize speech recognition performance, weights between individual neurons of the neural network being represented by a plurality of matrices, the apparatus comprising:
a sensitivity analysis unit for (1) obtaining the word error rate WER_initial of the neural network before compression, (2) obtaining compressed neural networks based on different compression ratios d1, d2, ... dn and performing speech recognition with the compressed neural networks to obtain the corresponding word error rates WER_d1, WER_d2, ... WER_dn of the speech recognition results, thereby analyzing the sensitivity of each of the plurality of matrices to the speech recognition results, and (3) based on the word error rates WER_initial and WER_d1, WER_d2, ... WER_dn of the respective neural networks, selecting one of the different compression ratios as an initial compression ratio, thereby determining the initial compression ratio of each matrix;
a compression unit, configured to compress the matrices based on the initial compression ratio to obtain a compressed neural network;
and the retraining unit is used for retraining the compressed neural network so that the word error rate in the voice recognition result is reduced.
17. The apparatus of claim 16, wherein the compression unit further comprises:
a first compression unit, configured to compress each matrix of the neural network based on an initial compression ratio of each matrix;
the adjusting unit is used for adjusting the initial compression ratio of each matrix based on the word error rate WER of the network obtained by the first compressing unit so as to obtain the adjusted compression ratio of each matrix;
and the second compression unit is used for compressing each matrix of the neural network based on the adjusted compression ratio of each matrix to obtain the compressed neural network.
18. The apparatus of claim 16, wherein the retraining unit further comprises:
the training unit is used for training the neural network by using a training set and updating a weight matrix;
and the judging unit is used for judging whether the weight matrix converges to the local optimal solution.
CN201611105480.3A 2016-08-12 2016-12-05 Multi-iteration deep neural network compression method Active CN107679617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/390,559 US10762426B2 (en) 2016-08-12 2016-12-26 Multi-iteration compression for deep neural networks

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/242,622 2016-08-22
US15/242,622 US10621486B2 (en) 2016-08-12 2016-08-22 Method for optimizing an artificial neural network (ANN)
US15/242,624 2016-08-22
US15/242,624 US20180046903A1 (en) 2016-08-12 2016-08-22 Deep processing unit (dpu) for implementing an artificial neural network (ann)

Publications (2)

Publication Number Publication Date
CN107679617A CN107679617A (en) 2018-02-09
CN107679617B true CN107679617B (en) 2021-04-09

Family

ID=59983010

Family Applications (4)

Application Number Title Priority Date Filing Date
CN201611105491.1A Active CN107689948B (en) 2016-08-12 2016-12-05 Efficient data access management device applied to neural network hardware acceleration system
CN201611104482.0A Active CN107689224B (en) 2016-08-12 2016-12-05 Deep neural network compression method for reasonably using mask
CN201611105081.7A Active CN107239825B (en) 2016-08-12 2016-12-05 Deep neural network compression method considering load balance
CN201611105480.3A Active CN107679617B (en) 2016-08-12 2016-12-05 Multi-iteration deep neural network compression method

Family Applications Before (3)

Application Number Title Priority Date Filing Date
CN201611105491.1A Active CN107689948B (en) 2016-08-12 2016-12-05 Efficient data access management device applied to neural network hardware acceleration system
CN201611104482.0A Active CN107689224B (en) 2016-08-12 2016-12-05 Deep neural network compression method for reasonably using mask
CN201611105081.7A Active CN107239825B (en) 2016-08-12 2016-12-05 Deep neural network compression method considering load balance

Country Status (1)

Country Link
CN (4) CN107689948B (en)

Families Citing this family (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017218889A1 (en) * 2017-10-23 2019-04-25 Robert Bosch Gmbh Unarmed parameterized AI module and method of operation
US11651223B2 (en) 2017-10-27 2023-05-16 Baidu Usa Llc Systems and methods for block-sparse recurrent neural networks
CN107977704B (en) * 2017-11-10 2020-07-31 中国科学院计算技术研究所 Weight data storage method and neural network processor based on same
CN107832835A (en) * 2017-11-14 2018-03-23 贵阳海信网络科技有限公司 The light weight method and device of a kind of convolutional neural networks
CN107832439B (en) 2017-11-16 2019-03-08 百度在线网络技术(北京)有限公司 Method, system and the terminal device of more wheel state trackings
CN109902817B (en) 2017-12-11 2021-02-09 安徽寒武纪信息科技有限公司 Board card and neural network operation method
CN108170529A (en) * 2017-12-26 2018-06-15 北京工业大学 A kind of cloud data center load predicting method based on shot and long term memory network
CN109791628B (en) * 2017-12-29 2022-12-27 清华大学 Neural network model block compression method, training method, computing device and system
CN108038546B (en) 2017-12-29 2021-02-09 百度在线网络技术(北京)有限公司 Method and apparatus for compressing neural networks
CN109993291B (en) * 2017-12-30 2020-07-07 中科寒武纪科技股份有限公司 Integrated circuit chip device and related product
WO2019129302A1 (en) 2017-12-30 2019-07-04 北京中科寒武纪科技有限公司 Integrated circuit chip device and related product
CN109993290B (en) 2017-12-30 2021-08-06 中科寒武纪科技股份有限公司 Integrated circuit chip device and related product
CN109993289B (en) * 2017-12-30 2021-09-21 中科寒武纪科技股份有限公司 Integrated circuit chip device and related product
CN109993292B (en) * 2017-12-30 2020-08-04 中科寒武纪科技股份有限公司 Integrated circuit chip device and related product
CN108280514B (en) * 2018-01-05 2020-10-16 中国科学技术大学 FPGA-based sparse neural network acceleration system and design method
CN110084364B (en) * 2018-01-25 2021-08-27 赛灵思电子科技(北京)有限公司 Deep neural network compression method and device
CN110110853B (en) * 2018-02-01 2021-07-30 赛灵思电子科技(北京)有限公司 Deep neural network compression method and device and computer readable medium
US11693627B2 (en) * 2018-02-09 2023-07-04 Deepmind Technologies Limited Contiguous sparsity pattern neural networks
CN110197262B (en) * 2018-02-24 2021-07-30 赛灵思电子科技(北京)有限公司 Hardware accelerator for LSTM networks
CN108540338B (en) * 2018-03-08 2021-08-31 西安电子科技大学 Application layer communication protocol identification method based on deep cycle neural network
CN108510063B (en) * 2018-04-08 2020-03-20 清华大学 Acceleration method and accelerator applied to convolutional neural network
EP3794515A1 (en) * 2018-05-17 2021-03-24 FRAUNHOFER-GESELLSCHAFT zur Förderung der angewandten Forschung e.V. Concepts for distributed learning of neural networks and/or transmission of parameterization updates therefor
CN110797021B (en) * 2018-05-24 2022-06-07 腾讯科技(深圳)有限公司 Hybrid speech recognition network training method, hybrid speech recognition device and storage medium
CN108665067B (en) * 2018-05-29 2020-05-29 北京大学 Compression method and system for frequent transmission of deep neural network
US10832139B2 (en) * 2018-06-22 2020-11-10 Moffett Technologies Co. Limited Neural network acceleration and embedding compression systems and methods with activation sparsification
CN109102064B (en) * 2018-06-26 2020-11-13 杭州雄迈集成电路技术股份有限公司 High-precision neural network quantization compression method
CN110659731B (en) * 2018-06-30 2022-05-17 华为技术有限公司 Neural network training method and device
CN109063835B (en) * 2018-07-11 2021-07-09 中国科学技术大学 Neural network compression device and method
CN113190791A (en) * 2018-08-06 2021-07-30 华为技术有限公司 Matrix processing method and device and logic circuit
CN110874550A (en) * 2018-08-31 2020-03-10 华为技术有限公司 Data processing method, device, equipment and system
WO2020062312A1 (en) * 2018-09-30 2020-04-02 华为技术有限公司 Signal processing device and signal processing method
CN109104197B (en) * 2018-11-12 2022-02-11 合肥工业大学 Coding and decoding circuit and coding and decoding method for non-reduction sparse data applied to convolutional neural network
CN111382852B (en) * 2018-12-28 2022-12-09 上海寒武纪信息科技有限公司 Data processing device, method, chip and electronic equipment
CN111291884A (en) * 2018-12-10 2020-06-16 中科寒武纪科技股份有限公司 Neural network pruning method and device, electronic equipment and computer readable medium
CN111353591A (en) * 2018-12-20 2020-06-30 中科寒武纪科技股份有限公司 Computing device and related product
CN109800869B (en) * 2018-12-29 2021-03-05 深圳云天励飞技术有限公司 Data compression method and related device
WO2020133492A1 (en) * 2018-12-29 2020-07-02 华为技术有限公司 Neural network compression method and apparatus
CN111383157B (en) * 2018-12-29 2023-04-14 北京市商汤科技开发有限公司 Image processing method and device, vehicle-mounted operation platform, electronic equipment and system
CN109784490B (en) * 2019-02-02 2020-07-03 北京地平线机器人技术研发有限公司 Neural network training method and device and electronic equipment
CN111626305B (en) * 2019-02-28 2023-04-18 阿里巴巴集团控股有限公司 Target detection method, device and equipment
CN109938696A (en) * 2019-03-22 2019-06-28 江南大学 Electroneurographic signal compressed sensing processing method and circuit
CN109978144B (en) * 2019-03-29 2021-04-13 联想(北京)有限公司 Model compression method and system
CN110399972B (en) * 2019-07-22 2021-05-25 上海商汤智能科技有限公司 Data processing method and device and electronic equipment
CN110704024B (en) * 2019-09-28 2022-03-08 中昊芯英(杭州)科技有限公司 Matrix processing device, method and processing equipment
CN110705996B (en) * 2019-10-17 2022-10-11 支付宝(杭州)信息技术有限公司 User behavior identification method, system and device based on feature mask
CN112699990B (en) * 2019-10-22 2024-06-07 杭州海康威视数字技术股份有限公司 Neural network model training method and device and electronic equipment
CN111078840B (en) * 2019-12-20 2022-04-08 浙江大学 Movie comment sentiment analysis method based on document vector
CN111126600A (en) * 2019-12-20 2020-05-08 上海寒武纪信息科技有限公司 Training method of neural network model, data processing method and related product
US20210209462A1 (en) * 2020-01-07 2021-07-08 Alibaba Group Holding Limited Method and system for processing a neural network
KR20210106131A (en) 2020-02-20 2021-08-30 삼성전자주식회사 Electronic device and control method thereof
WO2021196158A1 (en) * 2020-04-03 2021-10-07 北京希姆计算科技有限公司 Data access circuit and method
KR20210126398A (en) * 2020-04-10 2021-10-20 에스케이하이닉스 주식회사 Neural network computation apparatus having systolic array
CN111711511B (en) * 2020-06-16 2021-07-13 电子科技大学 Method for lossy compression of frequency domain data
CN111553471A (en) * 2020-07-13 2020-08-18 北京欣奕华数字科技有限公司 Data analysis processing method and device
CN112132062B (en) * 2020-09-25 2021-06-29 中南大学 Remote sensing image classification method based on pruning compression neural network
CN112286447A (en) * 2020-10-14 2021-01-29 天津津航计算技术研究所 Novel software-hardware cooperative RAID improvement system
CN112230851A (en) * 2020-10-14 2021-01-15 天津津航计算技术研究所 Novel software-hardware cooperative RAID improvement method
CN112270352A (en) * 2020-10-26 2021-01-26 中山大学 Decision tree generation method and device based on parallel pruning optimization
CN112396178A (en) * 2020-11-12 2021-02-23 江苏禹空间科技有限公司 Method for improving CNN network compression efficiency
CN112465035A (en) * 2020-11-30 2021-03-09 上海寻梦信息技术有限公司 Logistics distribution task allocation method, system, equipment and storage medium
WO2022133623A1 (en) * 2020-12-24 2022-06-30 Intel Corporation Accelerated scale-out performance of deep learning training workload with embedding tables
CN112883982B (en) * 2021-01-08 2023-04-18 西北工业大学 Data zero-removing coding and packaging method for neural network sparse features
US20220343145A1 (en) * 2021-04-21 2022-10-27 Alibaba Singapore Holding Private Limited Method and system for graph neural network acceleration
CN113794709B (en) * 2021-09-07 2022-06-24 北京理工大学 Hybrid coding method for binary sparse matrix
CN113947185B (en) * 2021-09-30 2022-11-18 北京达佳互联信息技术有限公司 Task processing network generation method, task processing device, electronic equipment and storage medium
CN116187408B (en) * 2023-04-23 2023-07-21 成都甄识科技有限公司 Sparse acceleration unit, calculation method and sparse neural network hardware acceleration system
CN117170588B (en) * 2023-11-01 2024-01-26 北京壁仞科技开发有限公司 Method, apparatus and medium for converting a layout of tensor data
CN117634711B (en) * 2024-01-25 2024-05-14 北京壁仞科技开发有限公司 Tensor dimension segmentation method, system, device and medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129397A (en) * 2010-12-29 2011-07-20 深圳市永达电子股份有限公司 Method and system for adaptively predicting disk array failure
US9053430B2 (en) * 2012-11-19 2015-06-09 Qualcomm Incorporated Method and apparatus for inferring logical dependencies between random processes
US9367519B2 (en) * 2013-08-30 2016-06-14 Microsoft Technology Licensing, Llc Sparse matrix data structure
US9400955B2 (en) * 2013-12-13 2016-07-26 Amazon Technologies, Inc. Reducing dynamic range of low-rank decomposition matrices
US10339447B2 (en) * 2014-01-23 2019-07-02 Qualcomm Incorporated Configuring sparse neuronal networks
US9324321B2 (en) * 2014-03-07 2016-04-26 Microsoft Technology Licensing, Llc Low-footprint adaptation and personalization for a deep neural network
US9202178B2 (en) * 2014-03-11 2015-12-01 Sas Institute Inc. Computerized cluster analysis framework for decorrelated cluster identification in datasets
US10242313B2 (en) * 2014-07-18 2019-03-26 James LaRue Joint proximity association template for neural networks
CN104217433B (en) * 2014-08-29 2017-06-06 华为技术有限公司 Method and device for analyzing an image
CN104915322B (en) * 2015-06-09 2018-05-01 中国人民解放军国防科学技术大学 Hardware acceleration method for convolutional neural networks
CN105184369A (en) * 2015-09-08 2015-12-23 杭州朗和科技有限公司 Deep learning model matrix compression method and device
CN105260794A (en) * 2015-10-12 2016-01-20 上海交通大学 Load prediction method for a cloud data center

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6317730B1 (en) * 1996-05-23 2001-11-13 Siemens Aktiengesellschaft Method for optimizing a set of fuzzy rules using a computer
CN105184362A (en) * 2015-08-21 2015-12-23 中国科学院自动化研究所 Deep convolutional neural network acceleration and compression method based on parameter quantization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Going Deeper with Embedded FPGA Platform for Convolutional Neural Network; Jiantao Qiu et al.; FPGA '16: Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays; 2016-02-29; pp. 26-35 *

Also Published As

Publication number Publication date
CN107689948B (en) 2020-09-01
CN107689224A (en) 2018-02-13
CN107689224B (en) 2020-09-01
CN107239825B (en) 2021-04-09
CN107239825A (en) 2017-10-10
CN107689948A (en) 2018-02-13
CN107679617A (en) 2018-02-09

Similar Documents

Publication Publication Date Title
CN107679617B (en) Multi-iteration deep neural network compression method
CN107729999B (en) Deep neural network compression method considering matrix correlation
US10762426B2 (en) Multi-iteration compression for deep neural networks
CN107688850B (en) Deep neural network compression method
US10984308B2 (en) Compression method for deep neural networks with load balance
US10832123B2 (en) Compression of deep neural networks with proper use of mask
CN107688849B (en) Dynamic strategy fixed-point training method and device
CN109285562B (en) Voice emotion recognition method based on attention mechanism
CN107679618B (en) Static strategy fixed-point training method and device
CN110135510B (en) Dynamic domain adaptation method, device and computer-readable storage medium
KR20210032140A (en) Method and apparatus for performing pruning of neural network
CN110084364B (en) Deep neural network compression method and device
CN110929798A (en) Image classification method and medium based on a structure-optimized sparse convolutional neural network
CN112990444A (en) Hybrid neural network training method, system, equipment and storage medium
CN113919484A (en) Structured pruning method and device based on deep convolutional neural network model
CN116720620A (en) Grain storage ventilation temperature prediction method based on a CNN-BiGRU-Attention network model optimized with the IPSO algorithm
CN108090564A (en) Redundant weight removal method based on the difference between initial and final network weight states
CN111967528A (en) Image identification method for deep learning network structure search based on sparse coding
CN109033413B (en) Neural network-based demand document and service document matching method
de Brébisson et al. The z-loss: a shift and scale invariant classification loss belonging to the spherical family
CN116384471A (en) Model pruning method, device, computer equipment, storage medium and program product
CN113378910B (en) Poisoning attack method for identifying electromagnetic signal modulation type based on pure label
CN110110853B (en) Deep neural network compression method and device and computer readable medium
Zhao et al. Fuzzy pruning for compression of convolutional neural networks
CN112508194B (en) Model compression method, system and computing equipment

Legal Events

Date Code Title Description
PB01 Publication

SE01 Entry into force of request for substantive examination

TA01 Transfer of patent application right

Effective date of registration: 20180611

Address after: 100083, 17 floor, 4 Building 4, 1 Wang Zhuang Road, Haidian District, Beijing

Applicant after: Beijing deep Intelligent Technology Co., Ltd.

Address before: 100083, 8 floor, 4 Building 4, 1 Wang Zhuang Road, Haidian District, Beijing

Applicant before: Beijing insight Technology Co., Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20200901

Address after: Unit 01-19, 10/F, 101, 6/F, Building 5, Yard 5, Anding Road, Chaoyang District, Beijing 100029

Applicant after: Xilinx Electronic Technology (Beijing) Co., Ltd.

Address before: 100083, 17 floor, 4 Building 4, 1 Wang Zhuang Road, Haidian District, Beijing

Applicant before: BEIJING DEEPHI TECHNOLOGY Co., Ltd.

GR01 Patent grant