CN110659725A - Neural network model compression and acceleration method, data processing method and device

Publication number: CN110659725A (granted as CN110659725B)
Application number: CN201910893276.XA
Authority: CN (China)
Legal status: Granted; Active
Other languages: Chinese (zh)
Prior art keywords: linear layer, quantization, layer, parameters, neural network
Inventors: 金庆, 杨林杰, 廖震宇
Assignee (original and current): ByteDance Inc
Application filed by ByteDance Inc
Priority: CN201910893276.XA; PCT/IB2019/059565 (published as WO2021053381A1)

Classifications

    • G06N3/045: Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N3/08: Computing arrangements based on biological models; Neural networks; Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A compression and acceleration method of a neural network model, a data processing method and device, and a storage medium are provided. The neural network model comprises a linear layer, and parameters of the neural network model comprise preparation weight parameters. The compression and acceleration method comprises the following steps: quantizing parameters of the neural network model to obtain a quantization model, wherein the parameters of the quantization model comprise quantization weight parameters of the linear layer; and carrying out scale transformation processing on the quantization model to obtain a target quantization model. The scale transformation processing on the quantization model comprises the following steps: calculating scale transformation parameters of the linear layer based on the number of output neurons of the linear layer or the standard deviation of the preparation weight parameters of the linear layer; and carrying out scale transformation processing on the quantization weight parameters of the linear layer based on the scale transformation parameters of the linear layer to obtain standard quantization weight parameters of the linear layer.

Description

Neural network model compression and acceleration method, data processing method and device
Technical Field
The embodiment of the disclosure relates to a compression and acceleration method of a neural network model, a data processing method and device and a storage medium.
Background
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Disclosure of Invention
At least one embodiment of the present disclosure provides a compression and acceleration method of a neural network model, the neural network model including a linear layer, parameters of the neural network model including preparatory weight parameters, the compression and acceleration method including: quantizing the parameters of the neural network model to obtain a quantization model, wherein the parameters of the quantization model comprise quantization weight parameters of the linear layer; carrying out scale transformation processing on the quantization model to obtain a target quantization model; wherein performing the scaling process on the quantization model comprises: calculating a scale transformation parameter of the linear layer based on the number of output neurons of the linear layer or a standard deviation of preparation weight parameters of the linear layer; and based on the scale transformation parameter of the linear layer, carrying out the scale transformation processing on the quantization weight parameter of the linear layer to obtain a standard quantization weight parameter of the linear layer.
For example, in the compression and acceleration methods provided in some embodiments of the present disclosure, the linear layer includes at least one selected from the group consisting of a convolutional layer, a recursive layer, and a fully-connected layer.
For example, in the compression and acceleration methods provided by some embodiments of the present disclosure, the linear layer is not directly followed by the batch normalization layer.
For example, in the compression and acceleration method provided by some embodiments of the present disclosure, quantizing parameters of the neural network model to obtain the quantization model includes: clamping the preparation weight parameter of the linear layer to obtain a clamping weight parameter of the linear layer; and carrying out quantization processing on the clamping weight parameters of the linear layer to obtain the quantization weight parameters of the linear layer.
For example, in the compression and acceleration method provided by some embodiments of the present disclosure, calculating the scaling parameter of the linear layer based on the number of output neurons of the linear layer includes: calculating the scale transformation parameters of the linear layer according to a first scale transformation parameter calculation formula, wherein the first scale transformation parameter calculation formula is expressed as:
RSF = 1 / √( N_out · VAR(Q) )

wherein RSF represents the scale transformation parameter of the linear layer, N_out represents the number of output neurons of the linear layer, Q represents the quantization weight matrix of the linear layer, and VAR(Q) represents the variance of the elements of the quantization weight matrix of the linear layer.
For example, in the compression and acceleration methods provided by some embodiments of the present disclosure, the number of bits of the quantization weight parameter of the linear layer is 1 to 8.
For example, in the compression and acceleration methods provided in some embodiments of the present disclosure, the number of bits of the quantization weight parameter of the linear layer is 1-2.
For example, in the compression and acceleration method provided by some embodiments of the present disclosure, calculating the scaling parameter of the linear layer based on the number of output neurons of the linear layer includes: calculating the scale transformation parameters of the linear layer according to a second scale transformation parameter calculation formula, wherein the second scale transformation parameter calculation formula is expressed as:
RSF = 1 / √( N_out · VAR(W̃) )

wherein RSF represents the scale transformation parameter of the linear layer, N_out represents the number of output neurons of the linear layer, W̃ represents an auxiliary weight matrix of the linear layer, and VAR(W̃) represents the variance of the elements of the auxiliary weight matrix of the linear layer;

the auxiliary weight matrix W̃ of the linear layer is expressed as:

W̃ = 2·Ŵ − 1

wherein Ŵ represents the clamp weight matrix of the linear layer.
For example, in the compression and acceleration methods provided by some embodiments of the present disclosure, calculating the scaling parameters of the linear layer based on the standard deviation of the preparation weight parameters of the linear layer includes: calculating the scale transformation parameters of the linear layer according to a third scale transformation parameter calculation formula, wherein the third scale transformation parameter calculation formula is expressed as:
RSF = √( VAR(W) / VAR(W̃) )

wherein RSF represents the scale transformation parameter of the linear layer, W represents the preparation weight matrix of the linear layer, VAR(W) represents the variance of the elements of the preparation weight matrix of the linear layer, W̃ represents an auxiliary weight matrix of the linear layer, and VAR(W̃) represents the variance of the elements of the auxiliary weight matrix of the linear layer;

the auxiliary weight matrix W̃ of the linear layer is expressed as:

W̃ = 2·Ŵ − 1

wherein Ŵ represents the clamp weight matrix of the linear layer.
For example, in the compression and acceleration methods provided in some embodiments of the present disclosure, the number of bits of the quantization weight parameter of the linear layer is 3 to 8.
For example, in the compression and acceleration methods provided in some embodiments of the present disclosure, the performing the scaling process on the quantization weight parameter of the linear layer based on the scaling parameter of the linear layer to obtain a standard quantization weight parameter of the linear layer includes: and carrying out the scale transformation processing on the quantization weight parameters of the linear layer according to a scale transformation formula, wherein the scale transformation formula is expressed as follows:
Q*_{ij} = RSF · Q_{ij}

wherein Q* represents the standard quantization weight matrix of the linear layer, Q*_{ij} represents the parameter of the ith row and the jth column of the standard quantization weight matrix of the linear layer, Q represents the quantization weight matrix of the linear layer, and Q_{ij} represents the parameter of the ith row and the jth column of the quantization weight matrix of the linear layer.
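By way of illustration only, the following Python (NumPy) sketch shows how the scale transformation described above might be carried out, assuming the first scale transformation parameter calculation formula; the function and variable names are ours and not part of the disclosure:

```python
import numpy as np

def scale_transform(Q, n_out):
    """Sketch: compute the scale transformation parameter RSF from the number of
    output neurons and the variance of the quantized weights, then obtain the
    standard quantization weight matrix Q* = RSF * Q."""
    rsf = 1.0 / np.sqrt(n_out * np.var(Q))  # assumed first scale transformation parameter formula
    return rsf * Q

# Hypothetical usage: a 2-bit quantized weight matrix with 4 output neurons.
Q = np.random.choice([-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0], size=(4, 8))
Q_star = scale_transform(Q, n_out=4)
print(np.var(Q_star) * 4)  # ~1, i.e. the rescaled weights have variance ~1/N_out
```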
For example, in some embodiments of the present disclosure, in a compression and acceleration method, performing the clamping processing on the preparation weight parameters of the linear layer to obtain clamping weight parameters of the linear layer includes: performing the clamping processing on the preparation weight parameter of the linear layer according to a clamping formula, wherein the clamping formula is expressed as:
Ŵ_{ij} = tanh(W_{ij}) / ( 2·max_{m,n}|tanh(W_{mn})| ) + 1/2

wherein Ŵ represents the clamp weight matrix of the linear layer, Ŵ_{ij} represents the parameter of the ith row and the jth column of the clamp weight matrix, W represents the preparation weight matrix of the linear layer, W_{ij} represents the parameter of the ith row and the jth column of the preparation weight matrix of the linear layer, W_{mn} represents the parameter of the mth row and the nth column of the preparation weight matrix of the linear layer, tanh(·) represents the hyperbolic tangent function, and max(·) represents the maximum-value function.
For example, in some embodiments of the present disclosure, in a compression and acceleration method, performing the quantization on the clamp weight parameter of the linear layer to obtain a quantization weight parameter of the linear layer includes: performing the quantization processing on the clamp weight parameter of the linear layer according to a quantization formula, wherein the quantization formula is expressed as:
Q_{ij} = ( 2 / (2^b − 1) ) · round( (2^b − 1) · Ŵ_{ij} ) − 1

wherein Q represents the quantization weight matrix of the linear layer, Q_{ij} represents the parameter of the ith row and the jth column of the quantization weight matrix of the linear layer, Ŵ_{ij} represents the parameter of the ith row and the jth column of the clamp weight matrix of the linear layer, b represents the number of bits of the quantization weight parameter, and round(·) represents a rounding function.
For example, some embodiments of the present disclosure provide a compression and acceleration method, further including: and training the target quantization model by adopting the same training parameter configuration as the neural network model.
For example, in the compression and acceleration method provided by some embodiments of the present disclosure, the training process of the target quantization model includes: a forward propagation stage, a backward propagation stage and a standard quantization stage; the forward propagation phase comprises: processing training input data by using a current target quantization model to obtain training output data, and calculating a loss value based on the training output data; the back propagation phase comprises: calculating a gradient based on the loss value, and correcting parameters of the current neural network model based on the gradient to obtain an updated neural network model; the standard quantization stage comprises: quantizing parameters of the updated neural network model to obtain an updated quantization model, and performing scale transformation processing on the updated quantization model to obtain an updated target quantization model.
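For illustration, the three stages can be pictured with a minimal NumPy sketch of one training iteration for a toy linear layer; the straight-through gradient treatment and all function names are assumptions made for this sketch and are not part of the disclosure:

```python
import numpy as np

def quantize_weights(W, b=2):
    # Clamp (tanh-based) and quantize to b bits, as sketched for the quantization of weights.
    W_hat = np.tanh(W) / (2.0 * np.max(np.abs(np.tanh(W)))) + 0.5
    return 2.0 / (2 ** b - 1) * np.round((2 ** b - 1) * W_hat) - 1.0

def scale_transform(Q, n_out):
    # Rescale the quantized weights so that their variance is roughly 1/n_out.
    return Q / np.sqrt(n_out * np.var(Q))

def train_step(W, x, target, lr=0.1, b=2):
    """One iteration: forward propagation with the current target quantization model,
    backward propagation to correct the full-precision parameters, then the standard
    quantization stage (re-quantize and re-scale)."""
    n_out = W.shape[0]
    Q_star = scale_transform(quantize_weights(W, b), n_out)  # standard quantization stage
    y = Q_star @ x                                           # forward propagation (toy linear layer)
    loss = 0.5 * np.sum((y - target) ** 2)
    grad_W = np.outer(y - target, x)   # backward propagation (straight-through estimator assumed)
    return W - lr * grad_W, loss       # corrected full-precision (preparation) weights
```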
For example, in compression and acceleration methods provided by some embodiments of the present disclosure, the neural network model includes an activation layer that includes a PACT activation function represented as:
PACT(x) = 0.5 · ( |x| − |x − α| + α )

wherein PACT(x) represents the output of the activation layer, x represents the input of the activation layer, and α represents the activation value parameter of the PACT activation function;
quantifying parameters of the neural network model to obtain the quantified model, further comprising:
performing the quantization process on the output of the active layer according to an active value quantization formula, the active value quantization formula being represented as:
q = ( α / (2^a − 1) ) · round( (2^a − 1) · PACT(x) / α )

wherein q represents the quantized value of the output of the activation layer, a represents the number of bits of the quantized value of the output of the activation layer, and round(·) represents a rounding function.
For example, in the compression and acceleration methods provided by some embodiments of the present disclosure, the back propagation stage further includes: calculating an activation value gradient according to an activation value gradient formula, and correcting a current activation value parameter based on the activation value gradient to obtain an updated activation value parameter, wherein the activation value gradient formula is expressed as:
∂q/∂α = (q − x) / α,  if 0 ≤ x < α;
∂q/∂α = 1,  if x ≥ α

wherein ∂q/∂α represents the activation value gradient, that is, the gradient of the quantized activation value q with respect to the activation value parameter α, and x represents the input of the activation layer.
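A small NumPy sketch of this activation value gradient is given below; it assumes the piecewise form reconstructed above and additionally treats negative inputs (which PACT maps to zero) as contributing no gradient, so it is an illustration rather than the patented formula:

```python
import numpy as np

def pact_quantize(x, alpha, a=4):
    # PACT activation followed by a-bit quantization of the activation value.
    y = 0.5 * (np.abs(x) - np.abs(x - alpha) + alpha)        # clip x to [0, alpha]
    return alpha / (2 ** a - 1) * np.round((2 ** a - 1) * y / alpha)

def grad_wrt_alpha(x, alpha, a=4):
    # Assumed activation value gradient: (q - x)/alpha for 0 <= x < alpha, 1 for x >= alpha.
    q = pact_quantize(x, alpha, a)
    g = np.where(x >= alpha, 1.0, (q - x) / alpha)
    return np.where(x < 0.0, 0.0, g)                         # negative inputs contribute no gradient
```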
For example, in the compression and acceleration method provided in some embodiments of the present disclosure, the training parameter configuration includes: initial learning rate, learning rate adjustment scheme, weight attenuation, iteration times of a training set, optimizer and batch size.
For example, in the compression and acceleration method provided in some embodiments of the present disclosure, before quantizing the parameters of the neural network model, the compression and acceleration method further includes: and pre-training the neural network model to obtain a preparation weight parameter of the neural network model.
For example, in the compression and acceleration method provided by some embodiments of the present disclosure, the pre-training of the neural network model includes: initializing parameters of the neural network model using a Kaiming initialization scheme.
For example, in the compression and acceleration methods provided by some embodiments of the present disclosure, the neural network model includes one of ResNet, MobileNet-V1, MobileNet-V2, and VGG-Net.
At least one embodiment of the present disclosure further provides a data processing method, including: the target quantization model obtained by adopting the compression and acceleration method provided by any embodiment of the disclosure is used for processing input data.
At least one embodiment of the present disclosure also provides a data processing apparatus, including: a memory for non-transitory storage of computer readable instructions; and a processor for executing computer readable instructions; wherein the computer readable instructions, when executed by the processor, perform the compression and acceleration methods provided by any of the embodiments of the present disclosure or perform the data processing methods provided by any of the embodiments of the present disclosure.
At least one embodiment of the present disclosure also provides a storage medium that non-transitorily stores computer-readable instructions, wherein the computer-readable instructions, when executed by a computer, can perform the compression and acceleration method provided by any embodiment of the present disclosure or the data processing method provided by any embodiment of the present disclosure.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
FIG. 1 is a schematic diagram of a convolutional neural network;
FIG. 2A is a schematic diagram of a convolutional neural network;
FIG. 2B is a schematic diagram of the operation of a convolutional neural network;
FIG. 3 is a schematic diagram of another convolutional neural network;
fig. 4 is a flowchart of a method for compressing and accelerating a neural network model according to at least one embodiment of the present disclosure;
fig. 5 is an exemplary flowchart corresponding to step S100 shown in fig. 4 provided in at least one embodiment of the present disclosure;
fig. 6 is another exemplary flowchart corresponding to step S100 shown in fig. 4 provided in at least one embodiment of the present disclosure;
fig. 7 is an exemplary flowchart corresponding to step S200 shown in fig. 4 provided in at least one embodiment of the present disclosure;
fig. 8 is an exemplary flowchart corresponding to step S300 shown in fig. 4 provided in at least one embodiment of the present disclosure;
fig. 9 is a schematic block diagram of a data processing apparatus according to at least one embodiment of the present disclosure; and
fig. 10 is a schematic diagram of a storage medium according to at least one embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
The present disclosure is illustrated by the following specific examples. To keep the following description of the embodiments of the present disclosure clear and concise, a detailed description of known functions and known components has been omitted from the present disclosure. When any component of an embodiment of the present disclosure appears in more than one drawing, that component is represented by the same or similar reference numeral in each drawing.
Among the algorithmic technologies in the AI field, deep learning has attracted wide attention from academia and industry, and scientists, researchers, enterprises, online communities, etc. are all energetically studying and promoting the research and development of deep learning neural network models.
With the breakthroughs and progress of deep learning in fields such as image classification, object detection and natural language processing, the demand for applying deep learning to real-life scenarios is growing stronger. Currently, mobile and portable electronic devices greatly facilitate people's lives, and deep learning will greatly improve the intelligence and entertainment value of these devices. Therefore, it is highly desirable to deploy deep learning neural network models on mobile terminals and embedded systems.
However, in actual deployment, a neural network model based on deep learning generally faces the problem of an oversized model. For example, the file size of a neural network model usually ranges from tens to hundreds of megabytes; when downloading such a file, the data traffic consumed and the influence of bandwidth lead to excessively long transmission waiting times that users of mobile terminals cannot bear, and some embedded systems with limited storage space may not have enough room to store such a large neural network model file at all. Meanwhile, a deep learning neural network model has high requirements on computing resources and computing power; when a large neural network model is used for calculation, a mobile terminal or an embedded system either cannot provide the required computing resources or computes slowly, so that the response delay is too high to meet practical application scenarios. In addition, the neural network model also consumes a large amount of power: during computation, the processor needs to frequently read the parameters of the neural network model, so a larger neural network model brings correspondingly more memory accesses, and frequent memory access greatly increases power consumption, which is unfavorable for deploying the neural network model on a mobile terminal.
Therefore, in order to deploy a well-performing neural network on resource-limited hardware devices, the neural network model needs to be compressed and accelerated. Because a quantized model can be ported to hardware very conveniently, quantization of neural network models shows great development potential among the many methods for compressing and accelerating neural network models.
At least one embodiment of the present disclosure provides a method for compressing and accelerating a neural network model. The neural network model comprises a linear layer, and parameters of the neural network model comprise preparation weight parameters; the compression and acceleration method comprises the following steps: quantizing parameters of the neural network model to obtain a quantized model, wherein the parameters of the quantized model comprise quantized weight parameters of a linear layer; and carrying out scale transformation processing on the quantization model to obtain a target quantization model. Wherein, carrying out scale transformation processing on the quantization model comprises the following steps: calculating scale transformation parameters of the linear layer based on the number of output neurons of the linear layer or the standard deviation of the preparation weight parameters of the linear layer; and carrying out scale transformation processing on the quantization weight parameter of the linear layer based on the scale transformation parameter of the linear layer to obtain a standard quantization weight parameter of the linear layer.
Some embodiments of the present disclosure also provide a data processing method and apparatus, and a storage medium corresponding to the compression and acceleration method.
According to the compression and acceleration method of the neural network model, the target quantization model is obtained by carrying out scale transformation processing on the quantization model, the precision of the target quantization model can be improved, and the performance of the target quantization model is improved.
Originally, Convolutional Neural Networks (CNNs) were primarily used to identify two-dimensional shapes, with recognition that is highly invariant to translation, scaling, tilting, or other forms of image deformation. CNNs simplify the complexity of neural network models and reduce the number of weights mainly through local receptive fields and weight sharing. With the development of deep learning technology, the application range of CNNs is no longer limited to the field of image recognition; they can also be applied to fields such as face recognition, character recognition, animal classification, and image processing.
Fig. 1 shows a schematic diagram of a convolutional neural network. For example, the convolutional neural network may be used for image processing; it uses images as input and output and replaces scalar weights by convolution kernels. Only a convolutional neural network having a 3-layer structure is illustrated in fig. 1, and embodiments of the present disclosure are not limited thereto. As shown in fig. 1, the convolutional neural network includes an input layer 101, a hidden layer 102, and an output layer 103. The input layer 101 has 4 inputs, the hidden layer 102 has 3 outputs, the output layer 103 has 2 outputs, and the convolutional neural network finally outputs 2 images.
For example, the 4 inputs to the input layer 101 may be 4 images, or four feature images of 1 image. The 3 outputs of the hidden layer 102 may be feature images of the image input via the input layer 101.
For example, as shown in FIG. 1, the convolutional layers have weights w^k_{ij} and biases b^k_i. The weights w^k_{ij} represent convolution kernels, and the biases b^k_i are scalars superimposed on the outputs of the convolutional layers, where k is a label representing the input layer 101, and i and j are labels of the elements of the input layer 101 and the elements of the hidden layer 102, respectively. For example, the first convolutional layer 201 includes a first set of convolution kernels (w^1_{ij} in FIG. 1) and a first set of biases (b^1_i in FIG. 1). The second convolutional layer 202 includes a second set of convolution kernels (w^2_{ij} in FIG. 1) and a second set of biases (b^2_i in FIG. 1). Typically, each convolutional layer comprises tens or hundreds of convolution kernels; if the convolutional neural network is a deep convolutional neural network, it may comprise at least five convolutional layers.
For example, as shown in fig. 1, the convolutional neural network further includes a first activation layer 203 and a second activation layer 204. The first activation layer 203 is located after the first convolutional layer 201, and the second activation layer 204 is located after the second convolutional layer 202. The activation layers (e.g., the first activation layer 203 and the second activation layer 204) include activation functions that are used to introduce non-linear factors into the convolutional neural network so that the convolutional neural network can better solve more complex problems. The activation function may include a rectified linear unit (ReLU) function, a Sigmoid function, or a hyperbolic tangent (tanh) function, etc. The ReLU function is a non-saturating non-linear function, and the Sigmoid and tanh functions are saturating non-linear functions. For example, the activation layer may be a separate layer of the convolutional neural network, or the activation layer may be included in a convolutional layer (e.g., the first convolutional layer 201 may include the first activation layer 203, and the second convolutional layer 202 may include the second activation layer 204).
For example, in the first convolutional layer 201, first, several convolution kernels w^1_{ij} of the first set of convolution kernels and several biases b^1_i of the first set of biases are applied to each input to obtain the output of the first convolutional layer 201; the output of the first convolutional layer 201 can then be processed through the first activation layer 203 to obtain the output of the first activation layer 203. In the second convolutional layer 202, first, several convolution kernels w^2_{ij} of the second set of convolution kernels and several biases b^2_i of the second set of biases are applied to the input, i.e., the output of the first activation layer 203, to obtain the output of the second convolutional layer 202; the output of the second convolutional layer 202 may then be processed by the second activation layer 204 to obtain the output of the second activation layer 204. For example, the output of the first convolutional layer 201 may be the result of applying the convolution kernels w^1_{ij} to its input and then adding the biases b^1_i, and the output of the second convolutional layer 202 may be the result of applying the convolution kernels w^2_{ij} to the output of the first activation layer 203 and then adding the biases b^2_i.
Before image processing is performed by using the convolutional neural network, the convolutional neural network needs to be trained. After training, the convolution kernel and bias of the convolutional neural network remain unchanged during image processing. In the training process, each convolution kernel and bias are adjusted through a plurality of groups of input/output example images and an optimization algorithm to obtain an optimized convolution neural network model.
Fig. 2A shows a schematic structural diagram of a convolutional neural network, and fig. 2B shows a schematic operational process diagram of a convolutional neural network. For example, as shown in fig. 2A and 2B, after the input image is input to the convolutional neural network through the input layer, the class identifier is output after several processing procedures (e.g., each level in fig. 2A) are performed in sequence. The main components of a convolutional neural network may include a plurality of convolutional layers, a plurality of downsampling layers, and a fully-connected layer. For example, a complete convolutional neural network may be composed of a stack of these three layers. For example, fig. 2A shows only three levels of a convolutional neural network, namely a first level, a second level, and a third level. For example, each tier may include a convolution module and a downsampling layer. For example, each convolution module may include a convolution layer. Thus, the processing procedure of each hierarchy may include: the input image is convolved (convolution) and downsampled (sub-sampling/down-sampling). For example, each convolution module may further include a batch normalization (batch normalization) layer according to actual needs, so that the processing procedure of each level may further include batch normalization processing.
For example, the batch normalization layer is used for performing batch normalization processing on the feature map so as to change the gray value of the pixel of the feature image within a predetermined range, thereby reducing the calculation difficulty and improving the contrast. For example, the predetermined range may be [ -1, 1 ]. For example, the processing manner of the batch normalization layer may refer to a common batch normalization processing process, and is not described herein again.
Convolutional layers are the core layers of convolutional neural networks. In the convolutional layer of the convolutional neural network, one neuron is connected with only part of the neurons of the adjacent layer. The convolutional layer may apply several convolutional kernels (also called filters) to the input image to extract various types of features of the input image. Each convolution kernel may extract one type of feature. The convolution kernel is generally initialized in the form of a random decimal matrix, and the convolution kernel can be learned to obtain a reasonable weight in the training process of the convolutional neural network. The result obtained after applying a convolution kernel to the input image is called a feature image (feature map), and the number of feature images is equal to the number of convolution kernels. Each characteristic image is composed of a plurality of neurons arranged in a rectangular shape, and the neurons of the same characteristic image share a weight value, wherein the shared weight value is a convolution kernel. The feature images output by a convolutional layer of one level may be input to an adjacent convolutional layer of the next level and processed again to obtain new feature images. For example, as shown in fig. 2A, a first level of convolutional layers may output a first feature image, which is input to a second level of convolutional layers for further processing to obtain a second feature image.
For example, as shown in fig. 2B, the convolutional layer may use different convolutional cores to convolve the data of a certain local perceptual domain of the input image, and the convolution result is input to the active layer, which performs calculation according to the corresponding activation function to obtain the feature information of the input image.
For example, as shown in fig. 2A and 2B, a downsampling layer is disposed between adjacent convolutional layers, which is one form of downsampling. On one hand, the downsampling layer can be used to reduce the scale of the input image, simplify the computational complexity and reduce overfitting to a certain extent; on the other hand, the downsampling layer can perform feature compression to extract the main features of the input image. The downsampling layer can reduce the size of the feature images without changing their number. For example, if an input image of size 12 × 12 is sampled by a 6 × 6 filter, a 2 × 2 output image can be obtained, which means that 36 pixels of the input image are combined into 1 pixel of the output image. The last downsampling or convolutional layer may be connected to one or more fully-connected layers that are used to connect all the extracted features. The output of the fully-connected layer is a one-dimensional matrix, i.e., a vector.
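For instance, the 12 × 12 to 2 × 2 example above can be reproduced with a short NumPy sketch of non-overlapping 6 × 6 average pooling (the pooling type is chosen here only for illustration):

```python
import numpy as np

image = np.arange(144, dtype=float).reshape(12, 12)   # 12 x 12 input image
# Group the image into 2 x 2 blocks of 6 x 6 pixels and average each block:
pooled = image.reshape(2, 6, 2, 6).mean(axis=(1, 3))  # 36 pixels -> 1 pixel
print(pooled.shape)                                   # (2, 2)
```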
Fig. 3 shows a schematic structural diagram of another convolutional neural network. For example, referring to the example shown in FIG. 3, the output of the last convolutional layer (i.e., the t-th convolutional layer) is input to a planarization layer for a planarization operation (Flatten). The planarization layer may convert the feature image (2D image) into a vector (1D). The planarization operation may be performed as follows:
v_k = f_{k/j, k%j}

where v is a vector, v_k represents its kth element, f is a matrix with i rows and j columns, k/j denotes integer division and k%j denotes the remainder.
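The planarization (flatten) indexing above can be checked with a tiny NumPy sketch, assuming integer division for k/j and the remainder for k%j:

```python
import numpy as np

f = np.arange(12).reshape(3, 4)      # feature image with i = 3 rows and j = 4 columns
j = f.shape[1]
v = np.array([f[k // j, k % j] for k in range(f.size)])  # v_k = f_{k/j, k%j}
assert np.array_equal(v, f.flatten())                    # row-major flattening
```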
The output of the planarization layer (i.e., the 1D vector) is then input to a fully connected layer (FCN). The fully-connected layer may have the same structure as the convolutional neural network, but differs in that the fully-connected layer uses a different scalar value instead of the convolution kernel.
For example, the output of the last convolutional layer may also be input to an averaging layer (AVG). The averaging layer is used to average the output, i.e., to represent the output image with the mean of the feature image, so that a 2D feature image is converted into a scalar. For example, if a convolutional neural network includes an averaging layer, it may not include a planarization layer.
For example, according to actual needs, the averaging layer or the fully-connected layer may be connected to a classifier; the classifier may perform classification according to the extracted features, and the output of the classifier may be used as the final output of the convolutional neural network, i.e., a class identifier (label) representing the class of the image.
For example, the classifier may be a Support Vector Machine (SVM) classifier, a softmax classifier, a k-nearest neighbor (KNN) classifier, and the like. As shown in fig. 3, in one example, the convolutional neural network includes a softmax classifier, which is a generalization of the logistic function that can compress a K-dimensional vector z of arbitrary real numbers into a K-dimensional vector σ(z). The formula of the softmax classifier is as follows:
σ(z)_j = e^{z_j} / Σ_{k=1}^{K} e^{z_k},  j = 1, …, K

wherein z_j represents the jth element in the K-dimensional vector z, σ(z)_j represents the prediction probability of each class identifier (label), each σ(z)_j is a real number in the range (0, 1), and the elements of the K-dimensional vector σ(z) sum to 1. According to the above formula, each class identifier in the K-dimensional vector z is given a certain prediction probability, and the class identifier having the largest prediction probability is selected as the identifier or class of the input image.
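A minimal NumPy sketch of this softmax computation (the max subtraction is a standard numerical-stability detail not stated in the text) is:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))      # subtract the max for numerical stability
    return e / np.sum(e)           # sigma(z)_j = exp(z_j) / sum_k exp(z_k)

z = np.array([2.0, 1.0, 0.1])
p = softmax(z)
print(p, p.sum())                  # probabilities in (0, 1) that sum to 1
print(int(np.argmax(p)))           # index of the class identifier with the largest probability
```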
Some embodiments of the present disclosure and examples thereof are described in detail below with reference to the accompanying drawings.
Fig. 4 is a flowchart of a method for compressing and accelerating a neural network model according to at least one embodiment of the present disclosure. For example, the compression and acceleration method can be used for quantifying various neural network models such as ResNet (e.g., ResNet-50), MobileNet-V1, MobileNet-V2, VGG-Net, and the like, so as to realize compression and acceleration of the various neural network models. It should be noted that the applicable scope of the compression and acceleration method includes, but is not limited to, the above listed neural network models.
For example, as shown in fig. 4, the compression and acceleration method includes steps S000 to S300.
Step S000: and pre-training the neural network model to obtain a preparation weight parameter of the neural network model.
For example, in step S000, the neural network model may be an untrained full-precision model. For example, the full-precision model may be pre-trained using conventional training methods, training tricks, and training parameter configurations (e.g., including hyper-parameters).
For example, a training parameter configuration typically includes: initial learning rate, learning rate adjustment scheme (learning rate scheduler), weight decay, number of iterations over the training set (epochs), optimizer, batch size, and the like. For example, in some examples, the initial learning rate may be set to 0.05, the learning rate adjustment scheme may employ a cosine annealing scheduler, the weight decay may be set to 4 × 10^(-5), the number of iterations over the training set may be set to 150, the optimizer may employ a Stochastic Gradient Descent (SGD) optimizer, the batch size may be set to 2048 or 1024, and so on. It should be noted that the above training parameter configuration is exemplary and should not be considered as limiting the present disclosure. In the embodiments of the present disclosure, the training parameter configuration may be set according to actual needs.
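As an illustration of such a training parameter configuration, the example figures given above could be collected as follows (the dictionary keys are ours and not a prescribed format):

```python
# A sketch of the example pre-training configuration described above.
train_config = {
    "initial_learning_rate": 0.05,
    "lr_scheduler": "cosine_annealing",
    "weight_decay": 4e-5,
    "epochs": 150,
    "optimizer": "SGD",
    "batch_size": 2048,   # or 1024
}
```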
For example, the pre-training process of neural network models typically includes: initializing parameters of the neural network model; processing training input data by using a neural network model to obtain training output data; calculating a loss value through a loss function based on the training output data; gradients are calculated based on the loss values and parameters of the neural network model are modified.
For example, in some examples, a Kaiming Initialization scheme may be employed to initialize parameters of the neural network model. For example, the parameters of the neural network model may be initialized to random numbers that conform to a Gaussian distribution. For example, the initial weight parameters of each functional layer (e.g., convolutional layer, fully-connected layer, etc.) of the neural network model may be made to conform to a Gaussian distribution, e.g., the expectation of the Gaussian distribution is 0, and the standard deviation of the Gaussian distribution is the inverse of the number of output neurons of that functional layer. For example, for a convolutional layer, the number of output neurons of the convolutional layer is equal to the product of the number of output channels of the convolutional layer and the number of elements in the convolution kernel of the convolutional layer; for example, for a fully-connected layer, the number of output neurons of the fully-connected layer is equal to the number of features output by the fully-connected layer.
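The number of output neurons mentioned above can be computed as in the following sketch (the layer shapes are hypothetical examples):

```python
def num_output_neurons_conv(out_channels, kernel_h, kernel_w):
    # For a convolutional layer: output channels times the number of elements in one kernel.
    return out_channels * kernel_h * kernel_w

def num_output_neurons_fc(out_features):
    # For a fully-connected layer: the number of output features.
    return out_features

print(num_output_neurons_conv(64, 3, 3))  # e.g. 64 * 9 = 576
print(num_output_neurons_fc(1000))        # e.g. 1000
```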
For example, in some examples, the type of training input data depends on the processing objective of the neural network model, e.g., the training input data may include images, text, speech, etc. Taking neural network models such as ResNet, MobileNet-V1, MobileNet-V2, and VGG-Net as examples, the training input data may be images, and images in the ImageNet database may be used as the training input data.
For example, in some examples, the loss function may be selected according to actual needs, for example, the loss function may include, but is not limited to, one or any combination of a 0-1 loss function, a square loss function, a logarithmic loss function, a cross-entropy loss function (cross-entropy cost function), and the like, which is not limited by the embodiments of the disclosure.
For example, in some examples, a stochastic gradient descent (SGD) algorithm, a batch gradient descent (BGD) algorithm, or the like may be used to calculate the gradient and modify the parameters of the neural network model based on the gradient.
For example, in some examples, the pre-training process of the neural network model may further include: judging whether the training of the neural network model meets a preset condition or not, and if not, repeatedly training the neural network model; and if the preset conditions are met, stopping training the neural network model to obtain the trained neural network model. For example, in one example, the predetermined condition is that the loss value corresponding to the training input data is no longer significantly reduced; for example, in another example, the predetermined condition is that the number of times of training or the training period of the neural network model reaches a predetermined number; embodiments of the present disclosure are not limited in this regard.
It should be noted that the above description only schematically illustrates the training process of the neural network model. Those skilled in the art will appreciate that in the training process, a large amount of sample data is required to train the neural network model; meanwhile, in the training process of each sample data, a plurality of repeated iterations can be included to correct the parameters of the neural network model. As another example, the training phase may also include fine-tuning (fine-tune) parameters of the neural network model to obtain more optimal parameters.
For example, in some examples, the neural network model includes linear layers, e.g., the linear layers include at least one of convolutional layers (convolution layer), recursive layers (recursive layer), and fully-connected layers (full-connected layer). For example, in some examples, the neural network model also includes non-linear layers, e.g., the non-linear layers include a batch normalization layer (batch normalization layer) and an activation layer (activation layer, e.g., employing a non-linear activation function), and so on.
For example, after pre-training, the parameters of the neural network model are the preparatory weight parameters. For example, in some examples, the provisioning weight parameter is a full precision 32-bit floating point number. It should be noted that, in some examples, the compression and acceleration method provided by the embodiments of the present disclosure may not include step S000, for example, steps S100 to S300 may be performed directly based on a neural network model that is trained in the art to obtain a target quantization model. In this case, the parameters of the trained neural network model are the preparatory weight parameters.
Step S100: and quantizing the parameters of the neural network model to obtain a quantized model.
For example, in step S100, parameters of the neural network model may be quantized using a DoReFa scheme. For example, quantizing parameters of the neural network model refers to changing at least some parameters of the neural network model from, for example, high-precision floating point numbers (for example, full-precision 32-bit floating point numbers) to, for example, low-precision fixed point numbers (for example, 1-8-bit fixed point numbers), thereby compressing and accelerating the neural network model. It should be noted that, in step S100, other types of quantization schemes may also be used to quantize the parameters of the neural network model, and the embodiments of the present disclosure are not limited thereto. Hereinafter, the quantization process in step S100 is explained in detail based on the DoReFa scheme. For example, specific details of the DoReFa scheme can be found in Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou, "DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients". This document is hereby incorporated by reference in its entirety as part of the present disclosure.
Fig. 5 is an exemplary flowchart corresponding to step S100 shown in fig. 4 provided in at least one embodiment of the present disclosure. For example, as shown in fig. 5, the parameters of the neural network model are quantized to obtain a quantized model, i.e., step S100, which includes steps S110 to S120.
Step S110: and clamping the preparation weight parameter of the linear layer to obtain a clamping weight parameter of the linear layer.
For example, "clipping" refers to scaling a set of parameters (e.g., preparatory weight parameters of a linear layer) according to a certain rule (e.g., according to a certain formula), so that the value range of the scaled parameters is limited to a certain interval, so as to facilitate subsequent further processing. For example, in some examples, the preparation weight parameter of the linear layer may be clamped according to a clamping formula to limit a value range of the clamping weight parameter of the linear layer to a predetermined interval, for example, the predetermined interval may be [0,1], but is not limited thereto. For example, by the clamping process, the distribution of the parameters of the linear layer (i.e., the clamping weight parameters of the linear layer) in the predetermined interval can be made more uniform, thereby being beneficial to reducing quantization errors in subsequent steps. For example, in some examples, the clamp formula may be expressed as:
Ŵ_{ij} = tanh(W_{ij}) / ( 2·max_{m,n}|tanh(W_{mn})| ) + 1/2

wherein Ŵ represents the clamp weight matrix of the linear layer (including the clamp weight parameters of the linear layer), Ŵ_{ij} represents the parameter of the ith row and the jth column of the clamp weight matrix, W represents the preparation weight matrix of the linear layer (including the preparation weight parameters of the linear layer), W_{ij} represents the parameter of the ith row and the jth column of the preparation weight matrix of the linear layer, W_{mn} represents the parameter of the mth row and the nth column of the preparation weight matrix of the linear layer, tanh(·) represents the hyperbolic tangent function, and max(·) represents the maximum-value function.
For example, the above-mentioned clipping formula can limit the value range of the clipping weight parameter of the linear layer to the interval [0,1 ].
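For illustration only, a minimal NumPy sketch of this clamping step (the function name is ours) is:

```python
import numpy as np

def clamp_weights(W):
    """Map the preparation weight matrix W to the clamp weight matrix in [0, 1]."""
    t = np.tanh(W)
    return t / (2.0 * np.max(np.abs(t))) + 0.5

W = np.random.randn(4, 8)            # preparation weight parameters (full precision)
W_hat = clamp_weights(W)
print(W_hat.min(), W_hat.max())      # values lie in the interval [0, 1]
```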
Step S120: and carrying out quantization processing on the clamping weight parameters of the linear layer to obtain the quantization weight parameters of the linear layer.
For example, in some examples, the clamp weight parameters of the linear layer may be quantized according to a weight quantization formula to obtain the quantized weight parameters of the linear layer. For example, in some examples, the weight quantization formula may be expressed as:
Q_{ij} = ( 2 / (2^b − 1) ) · round( (2^b − 1) · Ŵ_{ij} ) − 1

wherein Q represents the quantization weight matrix of the linear layer (including the quantization weight parameters of the linear layer), Q_{ij} represents the parameter of the ith row and the jth column of the quantization weight matrix of the linear layer, Ŵ_{ij} represents the parameter of the ith row and the jth column of the clamp weight matrix of the linear layer, b represents the number of bits of the quantization weight parameter of the linear layer, and round(·) represents a rounding function.
For example, the parameters of the quantization model include quantization weight parameters of the linear layer. For example, to facilitate the transfer of the quantization model to the mobile terminal and the embedded system, the bit number b of the quantization weight parameter of the linear layer is generally set to 1-8 bits (bit). Of course, the number of bits of the quantization weight parameter of the linear layer may also be set to more bits as needed, which is not limited by the embodiments of the present disclosure.
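Continuing in the same illustrative spirit, the quantization of the clamp weight parameters might be sketched as follows (b-bit quantization; the function name and the [-1, 1] output range follow the formula reconstructed above):

```python
import numpy as np

def quantize_clamped(W_hat, b):
    """Quantize the clamp weight parameters to b bits, giving values in [-1, 1]."""
    return 2.0 / (2 ** b - 1) * np.round((2 ** b - 1) * W_hat) - 1.0

W_hat = np.clip(np.random.rand(4, 8), 0.0, 1.0)  # clamp weight parameters in [0, 1]
Q = quantize_clamped(W_hat, b=2)
print(np.unique(Q))                              # at most 2**b distinct levels in [-1, 1]
```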
Fig. 6 is another exemplary flowchart corresponding to step S100 shown in fig. 4 provided in at least one embodiment of the present disclosure. Step S100 shown in fig. 6 includes step S130 in addition to step S110 and step S120 shown in fig. 5.
For example, in some examples, the neural network model includes an activation layer. For example, the activation layer may include a PACT activation function, but is not limited to such. For example, the PACT activation function is expressed as:
PACT(x) = 0.5 · ( |x| − |x − α| + α )

wherein PACT(x) represents the output of the activation layer, x represents the input of the activation layer, and α represents the activation value parameter of the PACT activation function. For example, α is a floating-point number. For example, the PACT activation function may reduce the quantization error of the output of the activation layer.
For example, as shown in fig. 6, the parameters of the neural network model are quantized to obtain a quantized model, i.e., step S100, which further includes step S130.
Step S130: and carrying out quantization processing on the output of the active layer.
For example, in some examples, the output of the active layer may be quantized according to an active value quantization formula. For example, the activation value quantization formula may be expressed as:
q = ( α / (2^a − 1) ) · round( (2^a − 1) · PACT(x) / α )

wherein q represents the quantized value of the output of the activation layer, a represents the number of bits of the quantized value of the output of the activation layer, and round(·) represents a rounding function. For example, q is a dynamic fixed-point number; for example, the number of bits a of the quantized value of the output of the activation layer is generally set to, for example, 1 to 8 bits, for example, 2 to 4 bits.
For example, in the embodiment of the present disclosure, the output of the active layer is quantized, which is beneficial to increasing the operation speed of the quantization model, so as to be beneficial to implementing the acceleration function of the compression and acceleration method provided by the embodiment of the present disclosure.
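A small NumPy sketch of the PACT activation and the activation value quantization described above (α and the bit number a are chosen arbitrarily for illustration, and the code is a sketch rather than the patented implementation):

```python
import numpy as np

def pact(x, alpha):
    # PACT activation: clips the input to the interval [0, alpha].
    return 0.5 * (np.abs(x) - np.abs(x - alpha) + alpha)

def quantize_activation(y, alpha, a):
    # Quantize the activation output y to a bits on the grid {0, alpha/(2^a-1), ..., alpha}.
    return alpha / (2 ** a - 1) * np.round((2 ** a - 1) * y / alpha)

x = np.linspace(-1.0, 2.0, 7)
q = quantize_activation(pact(x, alpha=1.0), alpha=1.0, a=2)
print(q)   # values drawn from {0, 1/3, 2/3, 1}
```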
It should be noted that, in the embodiment of the present disclosure, the quantization process may not be performed on the batch normalization layer in the neural network model, or may not be performed on the bias (bias) of the last fully-connected layer in the neural network model.
In their research, the inventors of the present application found that: on one hand, the quantization model obtained according to step S100 generally suffers from accuracy degradation and performance degradation; on the other hand, in the neural network model or/and the quantization model, if the gradients of the weights are kept on the same order of magnitude, the problems of exploding gradients and vanishing gradients can be prevented, which is beneficial to improving the precision and the performance of the quantization model. For example, in order to keep the gradients of the weights on the same order of magnitude, a batch normalization layer may be directly connected after a linear layer in the neural network model (the output of the linear layer is processed by the batch normalization layer and then input into a subsequent functional layer); however, a neural network model often also contains linear layers that are not directly followed by a batch normalization layer, for example, the last fully-connected layer used for output in neural network models such as ResNet, MobileNet-V1, MobileNet-V2, and VGG-Net. Therefore, the compression and acceleration method provided by the embodiments of the present disclosure further includes, after step S100, step S200 to further process the quantization model.
Step S200: and carrying out scale transformation processing on the quantization model to obtain a target quantization model.
For example, in some examples, the target quantization model obtained in step S200 may have higher accuracy and better performance than the quantization model obtained in step S100 under the same efficiency constraints. For example, the same efficiency constraints means that the size of the model (corresponding to the memory space occupied by the model), the power consumption, the latency (corresponding to the processing speed of the model), and the like are substantially the same. For example, in some examples, the performance of the target quantization model obtained in step S200 may be comparable to or better than the performance of the corresponding full-precision model (see subsequent Tables 1-2).
Fig. 7 is an exemplary flowchart corresponding to step S200 shown in fig. 4 provided in at least one embodiment of the present disclosure. For example, as shown in fig. 7, the quantization model is subjected to a scaling process to obtain a target quantization model, i.e., step S200 includes steps S210 to S220.
Step S210: calculating the scale transformation parameters of the linear layer based on the number of output neurons of the linear layer or the standard deviation of the preparation weight parameters of the linear layer.
For example, in some examples, calculating the scale transformation parameters of the linear layer based on the number of output neurons of the linear layer includes: calculating the scale transformation parameters of the linear layer according to a first scale transformation parameter calculation formula. For example, the first scale transformation parameter calculation formula may be expressed as:
$$RSF = \frac{1}{\sqrt{\hat{n}\cdot \mathrm{VAR}(Q)}}$$

where RSF represents the scale transformation parameter of the linear layer, $\hat{n}$ represents the number of output neurons of the linear layer, Q represents the quantization weight matrix of the linear layer (including the quantization weight parameters of the linear layer), and VAR(Q) represents the variance of the elements of the quantization weight matrix of the linear layer.
For example, in some examples, when the number of bits of the quantization weight parameter of the linear layer is 1-2 bits, the scaling parameter RSF of the linear layer calculated by using the first scaling parameter calculation formula may cause the target quantization model to converge faster than the scaling parameter RSF of the linear layer calculated by using the subsequent two scaling parameter calculation formulas. It should be noted that, in the embodiment of the present disclosure, when the number of bits of the quantization weight parameter of the linear layer is other values (for example, 3-8 bits), the scaling parameter RSF of the linear layer may still be calculated by using the first scaling parameter calculation formula.
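For illustration only, a minimal sketch of the first scale transformation parameter calculation formula as reconstructed above (RSF = 1/√(n̂·VAR(Q))) might look as follows; the function name and the toy weight matrix are assumptions.

```python
import numpy as np

def rsf_from_output_neurons(Q):
    # Scale transformation parameter from the number of output neurons and the
    # variance of the quantization weight matrix (rows are output neurons here)
    n_out = Q.shape[0]
    return 1.0 / np.sqrt(n_out * np.var(Q))

# toy 2-bit quantization weight matrix of a fully-connected layer (n_out x n_in)
Q = np.random.choice([-1.0, -1.0 / 3, 1.0 / 3, 1.0], size=(10, 64))
rsf = rsf_from_output_neurons(Q)
Q_star = rsf * Q                 # variance of Q_star is approximately 1 / n_out
```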
For example, in other examples, calculating the scale transformation parameters of the linear layer based on the number of output neurons of the linear layer includes: calculating the scale transformation parameters of the linear layer according to a second scale transformation parameter calculation formula. For example, the second scale transformation parameter calculation formula may be expressed as:
$$RSF = \frac{1}{\sqrt{\hat{n}\cdot \mathrm{VAR}(\widetilde{W})}}$$

where RSF represents the scale transformation parameter of the linear layer, $\hat{n}$ represents the number of output neurons of the linear layer, $\widetilde{W}$ represents the auxiliary weight matrix of the linear layer, and $\mathrm{VAR}(\widetilde{W})$ represents the variance of the elements of the auxiliary weight matrix of the linear layer. The auxiliary weight matrix $\widetilde{W}$ of the linear layer is constructed from the clamping weight matrix $W_c$ of the linear layer.
It should be noted that, in the above example, the auxiliary weight matrix $\widetilde{W}$ of the linear layer is introduced only to explain the second scale transformation parameter calculation formula; the neural network model and its quantization model do not include the auxiliary weight matrix $\widetilde{W}$ of the linear layer.
For example, in still other examples, calculating the scale transformation parameters of the linear layer based on the standard deviation of the preparation weight parameters of the linear layer includes: calculating the scale transformation parameters of the linear layer according to a third scale transformation parameter calculation formula. For example, the third scale transformation parameter calculation formula may be expressed as:
$$RSF = \sqrt{\frac{\mathrm{VAR}(W)}{\mathrm{VAR}(\widetilde{W})}}$$

where RSF represents the scale transformation parameter of the linear layer, W represents the preparation weight matrix of the linear layer, VAR(W) represents the variance of the elements of the preparation weight matrix of the linear layer, $\widetilde{W}$ represents the auxiliary weight matrix of the linear layer, and $\mathrm{VAR}(\widetilde{W})$ represents the variance of the elements of the auxiliary weight matrix of the linear layer. The auxiliary weight matrix $\widetilde{W}$ of the linear layer is constructed from the clamping weight matrix $W_c$ of the linear layer.
It should be noted that, in the above example, the auxiliary weight matrix $\widetilde{W}$ of the linear layer is introduced only to explain the third scale transformation parameter calculation formula; the neural network model and its quantization model do not include the auxiliary weight matrix $\widetilde{W}$ of the linear layer.
It should be noted that, in some examples, the accuracy and performance of the target quantization models obtained based on the scale transformation parameter RSF of the linear layer calculated by the first, the second, and the third scale transformation parameter calculation formulas are substantially equivalent.
For example, in some examples, when the number of bits of the quantization weight parameter of the linear layer is 3 to 8, any one of the first scaling parameter calculation formula, the second scaling parameter calculation formula, and the third scaling parameter calculation formula may be selected to calculate the scaling parameter RSF of the linear layer, and meanwhile, the accuracy and the performance of the obtained target quantization model are substantially equivalent. It should be noted that, in at least one embodiment of the present disclosure, when the number of bits of the quantization weight parameter of the linear layer is other values (for example, 1-2 bits), the scaling parameter RSF of the linear layer may still be calculated by using the second scaling parameter calculation formula or the third scaling parameter calculation formula.
Step S220: performing scale transformation processing on the quantization weight parameters of the linear layer based on the scale transformation parameters of the linear layer to obtain standard quantization weight parameters of the linear layer.
For example, in some examples, performing the scale transformation processing on the quantization weight parameters of a linear layer (e.g., a linear layer not directly followed by a batch normalization layer) based on the scale transformation parameters of the linear layer helps keep the gradients of the weights in the quantization model at the same order of magnitude, and thus helps improve the accuracy and the performance of the quantization model.
For example, in some examples, the quantization weight parameters of the linear layers may be scaled according to a scaling formula. For example, the scaling formula may be expressed as:
$$Q^{*}_{ij} = RSF \cdot Q_{ij}$$

where $Q^{*}$ represents the standard quantization weight matrix of the linear layer (including the standard quantization weight parameters of the linear layer), $Q^{*}_{ij}$ represents the parameter in the i-th row and j-th column of the standard quantization weight matrix of the linear layer, Q represents the quantization weight matrix of the linear layer, $Q_{ij}$ represents the parameter in the i-th row and j-th column of the quantization weight matrix of the linear layer, and RSF represents the scale transformation parameter of the linear layer.
It should be noted that, in the embodiment of the present disclosure, only the quantization weight parameters of the linear layer that is not directly followed by the batch normalization layer may be subjected to the scaling processing, that is, the quantization weight parameters of the linear layer that is directly followed by the batch normalization layer may not be subjected to the scaling processing. Of course, the quantization weight parameters of the linear layer not directly followed by the batch normalization layer and the linear layer directly followed by the batch normalization layer may be subjected to the scaling processing at the same time. Embodiments of the present disclosure are not limited in this regard.
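As a sketch of step S220 (under the reconstructed first formula above, with hypothetical helper and layer names), the scale transformation can be applied selectively, e.g., only to linear layers that are not directly followed by a batch normalization layer:

```python
import numpy as np

def scale_transform_weights(quant_weights, followed_by_bn):
    """Return standard quantization weights; layers directly followed by a batch
    normalization layer are left unscaled (one of the policies described above)."""
    standard = {}
    for name, Q in quant_weights.items():
        if followed_by_bn.get(name, False):
            standard[name] = Q
        else:
            rsf = 1.0 / np.sqrt(Q.shape[0] * np.var(Q))   # first formula, as reconstructed
            standard[name] = rsf * Q
    return standard

# toy stand-ins for quantization weight matrices of two layers
quant_weights = {"conv1": np.random.randn(32, 27), "fc_last": np.random.randn(10, 512)}
followed_by_bn = {"conv1": True, "fc_last": False}
standard_weights = scale_transform_weights(quant_weights, followed_by_bn)
```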
Step S300: training the target quantization model using the same training parameter configuration as the neural network model.
For example, in step S300, the training parameter configuration of the neural network model may refer to the relevant description in step S000, and will not be repeated herein.
Fig. 8 is an exemplary flowchart corresponding to step S300 shown in fig. 4 provided in at least one embodiment of the present disclosure. For example, as shown in fig. 8, training the target quantization model with the same training parameter configuration as the neural network model, i.e., step S300, includes a forward propagation stage, a backward propagation stage, and a standard quantization stage, and these three stages are executed repeatedly to obtain the trained target quantization model. The forward propagation stage, the backward propagation stage, and the standard quantization stage correspond to step S310, step S320, and step S330, respectively, described below.
Step S310: the training input data is processed using the current target quantization model to obtain training output data, and a loss value is calculated based on the training output data.
For example, the operation of the forward propagation phase of the training process of the target quantization model, i.e., step S310, may be referred to the operation of the forward propagation phase of the neural network model (e.g., full-precision model) accordingly, and will not be repeated herein.
Step S320: calculating a gradient based on the loss value, and correcting the parameters of the current neural network model based on the gradient to obtain an updated neural network model.

For example, the operation of the back propagation stage of the training process of the target quantization model, i.e., step S320, may refer to the operation of the back propagation stage of the neural network model (e.g., the full-precision model), and will not be repeated herein.
For example, in some examples, in a case that the compression and acceleration method provided by the embodiment of the present disclosure further includes step S130 (i.e., performing quantization processing on the output of the activation layer), in step S320, an activation value gradient may be calculated according to the activation value gradient formula, and the current activation value parameter may be modified based on the activation value gradient to obtain an updated activation value parameter. For example, in some examples, for the foregoing PACT activation function and activation value quantization formula, the activation value gradient formula may be expressed as:
$$\frac{\partial q}{\partial \alpha} = \begin{cases} \dfrac{q - y}{\alpha}, & x < \alpha \\ 1, & x \geq \alpha \end{cases}$$

where $\partial q / \partial \alpha$ represents the activation value gradient, x represents the input of the activation layer, y represents the output of the activation layer, q represents the quantized value of the output of the activation layer, and α represents the activation value parameter.
For example, calculating the activation value gradient according to the activation value gradient formula helps reduce the quantization error.
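For illustration, and assuming the calibrated gradient form reconstructed above (an assumption, not a statement of the original formula), the activation value gradient with respect to α could be computed elementwise as follows; the function name is hypothetical.

```python
import numpy as np

def alpha_gradient(x, alpha, a_bits):
    # Per-element gradient of the quantized activation with respect to alpha,
    # using the form assumed above: (q - y) / alpha where x < alpha, else 1
    y = 0.5 * (np.abs(x) - np.abs(x - alpha) + alpha)        # PACT output
    levels = 2 ** a_bits - 1
    q = np.round(y * levels / alpha) * alpha / levels        # quantized activation
    return np.where(x < alpha, (q - y) / alpha, 1.0)

grad = alpha_gradient(np.array([-0.3, 0.2, 0.8, 1.7]), alpha=1.0, a_bits=4)
```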
Step S330: quantizing the parameters of the updated neural network model to obtain an updated quantization model, and performing scale transformation on the updated quantization model to obtain an updated target quantization model.
For example, the operation of the standard quantization stage of the training process of the target quantization model, i.e., step S330, can refer to the related expressions of step S100 and step S200, and will not be repeated herein.
For example, by training the target quantization model in the above steps S310 to S330, the accuracy of the target quantization model can be improved, and the performance of the target quantization model can be improved.
It should be noted that, in the training process of the target quantization model, the parameters of the target quantization model (including the standard quantization weight parameters of the linear layer) are not directly updated, but the parameters of the neural network model are modified and then subjected to quantization and scale transformation, so as to update the parameters of the target quantization model.
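The three stages can be sketched with a toy PyTorch training loop; this is a hedged illustration in which the layer class, the quantization helper, and the rescaling all follow the reconstructions and assumptions above rather than the exact formulas of the original disclosure.

```python
import torch
import torch.nn as nn

def quantize_and_rescale(w, bits=4):
    # Standard quantization stage: clamp, quantize and rescale the full-precision
    # (preparation) weights; the straight-through estimator keeps gradients flowing
    # back to the full-precision weights, which are the parameters actually updated.
    wc = torch.tanh(w) / torch.tanh(w).abs().max()              # clamping weight matrix
    levels = 2 ** bits - 1
    q = 2.0 * torch.round((wc + 1) / 2 * levels) / levels - 1   # quantization weight matrix
    with torch.no_grad():
        rsf = 1.0 / (w.shape[0] * q.var()).sqrt()               # scale transformation parameter
    return wc + (rsf * q - wc).detach()                         # standard quantization weights (STE)

class QuantLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(x, quantize_and_rescale(self.weight), self.bias)

model = QuantLinear(16, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(3):                                   # forward / backward / re-quantize
    x, target = torch.randn(8, 16), torch.randn(8, 4)
    loss = nn.functional.mse_loss(model(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```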
It should be noted that, compared with calculating the scale transformation parameters of the linear layer based on the standard deviation of the preparation weight parameters of the linear layer (i.e., using the third scale transformation parameter calculation formula), calculating the scale transformation parameters of the linear layer based on the number of output neurons of the linear layer (i.e., using the first or the second scale transformation parameter calculation formula) does not require calculating VAR(W), which reduces the amount of computation and helps accelerate the training of the target quantization model.
It should be noted that, in some examples, the target quantization model may store the quantization weight parameters and the scale transformation parameters of the linear layer instead of the standard quantization weight parameters of the linear layer, so as to reduce the size (i.e., the occupied storage space) of the target quantization model. When the target quantization model is applied to data processing, the standard quantization weight parameters of the linear layer may be computed from the quantization weight parameters and the scale transformation parameters of the linear layer; alternatively, the input of the linear layer may first be processed with the quantization weight parameters of the linear layer to obtain the output of the linear layer, and the output of the linear layer may then be processed with the scale transformation parameters. For example, the target quantization model may correspondingly store the bias of the linear layer (e.g., the fully-connected layer) in the quantization model rather than the bias of the linear layer in the target quantization model; in this case, when the target quantization model is applied to data processing, the bias of the linear layer in the quantization model may be converted into the bias of the linear layer in the target quantization model through the scale transformation parameters, or the input of the linear layer may be processed with the quantization weight parameters and the bias of the linear layer in the quantization model to obtain the output of the linear layer, and the output of the linear layer may then be processed with the scale transformation parameters. Embodiments of the present disclosure are not limited in this respect.
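As a sketch of the inference-time options described above (hypothetical function names; the stored quantities are the quantization weight matrix Q, the quantization-model bias, and the scale transformation parameter RSF), the two orders of operation are equivalent:

```python
import numpy as np

def linear_deferred_rescale(x, Q, rsf, bias_q=None):
    # Compute the linear layer output with the stored quantization weights and
    # quantization-model bias, then apply the scale transformation parameter
    y = x @ Q.T
    if bias_q is not None:
        y = y + bias_q
    return rsf * y

def linear_folded_rescale(x, Q, rsf, bias_q=None):
    # Equivalent: fold RSF into the weights (and bias) before the matrix multiply
    W_star = rsf * Q
    y = x @ W_star.T
    return y + rsf * bias_q if bias_q is not None else y

x = np.random.randn(2, 512)
Q, bias_q, rsf = np.random.randn(10, 512), np.random.randn(10), 0.05
assert np.allclose(linear_deferred_rescale(x, Q, rsf, bias_q),
                   linear_folded_rescale(x, Q, rsf, bias_q))
```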
It should be noted that, in practical applications, the compression and acceleration method provided by the embodiments of the present disclosure may selectively (for example, either one of them or both of them) quantize the weight parameters of the neural network model (i.e., weight quantization) and the output of the activation layer (i.e., activation value quantization) according to practical needs.
It should be noted that, in the embodiment of the present disclosure, the neural network model and the quantization model thereof may be implemented by software, hardware, firmware, or any combination thereof, so as to execute the corresponding processing procedure.
It should be noted that, in the embodiment of the present disclosure, the flow of the compression and acceleration method of the neural network model may include more or less operations, and these operations may be performed sequentially or in parallel. Although the flow of the compression and acceleration method of the neural network model described above includes a plurality of operations occurring in a specific order, it should be clearly understood that the order of the plurality of operations is not limited. The above-described neural network model compression and acceleration method may be performed once or may be performed a plurality of times according to a predetermined condition.
According to the compression and acceleration method of the neural network model, the target quantization model is obtained by carrying out scale transformation processing on the quantization model, the precision of the target quantization model can be improved, and the performance of the target quantization model is improved.
At least one embodiment of the present disclosure further provides a data processing method, where the data processing method includes: processing input data using the target quantization model obtained by the compression and acceleration method provided by any embodiment of the present disclosure to obtain output data.
For example, in some examples, the type of input data depends on the processing objective of the target quantization model; for example, the input data may include images, text, speech, and the like. Taking neural network models such as ResNet, MobileNet-V1, MobileNet-V2, and VGG-Net and their target quantization models as examples, the input data may be images.
For example, the output data may represent the result of the inference or prediction made by the target quantization model on the input data. Taking neural network models such as ResNet, MobileNet-V1, MobileNet-V2, and VGG-Net and their target quantization models as examples, the output data may represent the classification result of the image (i.e., the input data).
For example, in some examples, the target quantization model may be deployed in a mobile terminal and an embedded system such as a smart phone, a tablet computer, a car navigator, and the like, so that the mobile terminal and the embedded system and the like may perform the data processing method.
In the following, taking the MobileNet-V1 neural network model and the MobileNet-V2 neural network model as examples, the quantization scheme precision comparison at different bit widths is exemplarily shown by tables 1-2. Table 1 is a quantization scheme precision comparison table (quantizing weights and activation values) for MobileNet-V1 and MobileNet-V2 under different bit widths (i.e., the number of quantization bits); table 2 shows a comparison table of quantization scheme accuracies (quantization of weights, no quantization of activation values) for different bit widths of MobileNet-V1 and MobileNet-V2.
It should be noted that, in Tables 1 and 2, PACT (Parameterized Clipping Activation), HAQ (Hardware-Aware Automated Quantization), and Deep Compression are known quantization schemes, and SAT is the quantization scheme (i.e., the compression and acceleration method) provided by the embodiments of the present disclosure, where the scale transformation parameters of the linear layer are calculated based on the number of output neurons of the linear layer. It should be noted that the bit width of the HAQ scheme is flexible, so the bit widths of the HAQ scheme in Tables 1 and 2 are equivalent bit widths, for example, equivalent bit widths of 2, 3, 4, 5, 6, and 8, respectively, so that the accuracy can be compared with that of other quantization schemes at the corresponding bit widths. In addition, in Tables 1 and 2, FP denotes the corresponding full-precision model; Acc.-1 denotes the probability that the top candidate class output by the model is the correct class of the input image, and Acc.-5 denotes the probability that the five top candidate classes output by the model include the correct class of the input image. For example, specific details of the PACT scheme can be found in: Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. PACT: Parameterized Clipping Activation for Quantized Neural Networks. arXiv:1805.06085, 2018. Specific details of the HAQ scheme can be found in: Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. HAQ: Hardware-Aware Automated Quantization with Mixed Precision. arXiv:1811.08886, 2019. Specific details of the Deep Compression scheme can be found in: Song Han, Huizi Mao, and William J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. arXiv:1510.00149, 2015. The above documents are hereby incorporated by reference in their entirety as part of the present disclosure.
TABLE 1 quantization scheme precision comparison tables for different bit widths for MobileNet-V1 and MobileNet-V2 (quantizing weights and activation values)
TABLE 2 quantization scheme precision comparison tables for different bit widths for MobileNet-V1 and MobileNet-V2 (quantize weights, not activate values)
As can be seen from tables 1-2, the accuracy of the target quantization model obtained by using the compression and acceleration method provided by the embodiment of the present disclosure is in most cases higher than that of the quantization models obtained by using other known quantization schemes, which indicates that the compression and acceleration method provided by the embodiment of the present disclosure can improve the accuracy of the target quantization model and improve the performance of the target quantization model.
For technical effects of the data processing method provided by the embodiments of the present disclosure, reference may be made to the corresponding description of the compression and acceleration method of the neural network model in the above embodiments, and details are not repeated herein.
At least one embodiment of the present disclosure further provides a data processing apparatus. Fig. 9 is a schematic block diagram of a data processing apparatus according to at least one embodiment of the present disclosure.
For example, as shown in FIG. 9, the data processing apparatus 500 includes a memory 510 and a processor 520. For example, the memory 510 is used for non-transitory storage of computer readable instructions, and the processor 520 is used for executing the computer readable instructions, and the computer readable instructions are executed by the processor 520 to perform the compression and acceleration method of the neural network model or/and the data processing method provided by any embodiment of the disclosure.
For example, the memory 510 and the processor 520 may communicate with each other directly or indirectly. For example, in some examples, as shown in FIG. 9, the data processing apparatus 500 may further include a system bus 530, and the memory 510 and the processor 520 may communicate with each other via the system bus 530; for example, the processor 520 may access the memory 510 via the system bus 530. For example, in other examples, components such as the memory 510 and the processor 520 may communicate over a network connection. The network may include a wireless network, a wired network, and/or any combination of wireless and wired networks. The network may include a local area network, the Internet, a telecommunications network, an Internet of Things based on the Internet and/or a telecommunications network, and/or any combination thereof, and the like. The wired network may communicate by using, for example, twisted pair, coaxial cable, or optical fiber transmission, and the wireless network may communicate by using, for example, a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or WiFi. The present disclosure does not limit the type and function of the network.
For example, the processor 520 may control other components in the data processing apparatus to perform desired functions. The processor 520 may be a device having data processing capability and/or program execution capability, such as a Central Processing Unit (CPU), Tensor Processor (TPU), or Graphics Processor (GPU). The Central Processing Unit (CPU) may be an X86 or ARM architecture, etc. The GPU may be separately integrated directly onto the motherboard, or built into the north bridge chip of the motherboard. The GPU may also be built into the Central Processing Unit (CPU).
For example, memory 510 may include any combination of one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM), cache memory (or the like). The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), USB memory, flash memory, and the like.
For example, one or more computer instructions may be stored on memory 510 and executed by processor 520 to implement various functions. Various applications and various data, such as preparation weight parameters of the linear layer, standard quantization weight parameters of the linear layer, scaling parameters of the linear layer, activation value parameters, and various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
For example, some of the computer instructions stored by memory 510, when executed by processor 520, may perform one or more steps according to the compression and acceleration methods described above. As another example, other computer instructions stored by memory 510 may, when executed by processor 520, perform one or more steps in accordance with the data processing methods described above.
For example, as shown in fig. 9, the data processing apparatus 500 may further include an input interface 540 that allows an external device to communicate with the data processing apparatus 500. For example, the input interface 540 may be used to receive instructions from an external computer device, from a user, and the like. The data processing apparatus 500 may also include an output interface 550 that interconnects the data processing apparatus 500 and one or more external devices. For example, the data processing apparatus 500 may display an image or the like through the output interface 550. External devices that communicate with the data processing apparatus 500 through the input interface 540 and the output interface 550 may be included in an environment that provides any type of user interface with which a user may interact. Examples of user interface types include graphical user interfaces, natural user interfaces, and the like. For example, the graphical user interface may accept input from a user using input device(s) such as a keyboard, mouse, or remote control, and provide output on an output device such as a display. Furthermore, a natural user interface may enable a user to interact with the data processing apparatus 500 in a manner free from the constraints imposed by input devices such as a keyboard, mouse, or remote control. Instead, natural user interfaces may rely on speech recognition, touch and stylus recognition, gesture recognition on and near the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, machine intelligence, and the like.
In addition, although illustrated as a single system in fig. 9, it is to be understood that the data processing apparatus 500 may also be a distributed system, and may also be arranged as a cloud infrastructure (including a public cloud or a private cloud). Thus, for example, several devices may communicate over a network connection and may collectively perform tasks described as being performed by the data processing apparatus 500.
For example, for the detailed description of the processing procedure of the compression and acceleration method, reference may be made to the related description in the embodiment of the compression and acceleration method, and for the detailed description of the processing procedure of the data processing method, reference may be made to the related description in the embodiment of the data processing method, and repeated parts are not repeated.
For example, in some examples, the data processing device may include, but is not limited to, a mobile terminal such as a smart phone, a tablet computer, a car navigator, and an embedded system.
It should be noted that the data processing apparatus provided in the embodiments of the present disclosure is illustrative and not restrictive, and the data processing apparatus may further include other conventional components or structures according to practical application needs, for example, in order to implement the necessary functions of the data processing apparatus, a person skilled in the art may set other conventional components or structures according to a specific application scenario, and the embodiments of the present disclosure are not limited thereto.
For technical effects of the data processing apparatus provided by the embodiments of the present disclosure, reference may be made to corresponding descriptions about the compression and acceleration method and the data processing method in the foregoing embodiments, and details are not repeated herein.
At least one embodiment of the present disclosure also provides a storage medium. Fig. 10 is a schematic diagram of a storage medium according to an embodiment of the disclosure. For example, as shown in fig. 10, the storage medium 600 non-transitory stores computer readable instructions 601, and when the non-transitory computer readable instructions 601 are executed by a computer (including a processor), the instructions of the compression and acceleration method provided by any embodiment of the disclosure may be executed or the instructions of the data processing method provided by any embodiment of the disclosure may be executed.
For example, one or more computer instructions may be stored on the storage medium 600. Some of the computer instructions stored on the storage medium 600 may be, for example, instructions for implementing one or more steps of the compression and acceleration methods described above. Further computer instructions stored on the storage medium may be, for example, instructions for carrying out one or more steps of the above-described data processing method.
For example, the storage medium may include a storage component of a tablet computer, a hard disk of a personal computer, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a compact disc read only memory (CD-ROM), a flash memory, or any combination of the above storage media, as well as other suitable storage media.
For technical effects of the storage medium provided by the embodiments of the present disclosure, reference may be made to corresponding descriptions about a compression and acceleration method and a data processing method in the foregoing embodiments, and details are not repeated herein.
For the present disclosure, there are the following points to be explained:
(1) in the drawings of the embodiments of the present disclosure, only the structures related to the embodiments of the present disclosure are referred to, and other structures may refer to general designs.
(2) Features of the disclosure in the same embodiment and in different embodiments may be combined with each other without conflict.
The above is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (25)

1. A compression and acceleration method of a neural network model, the neural network model including a linear layer, parameters of the neural network model including preparatory weight parameters, the compression and acceleration method comprising:
quantizing the parameters of the neural network model to obtain a quantization model, wherein the parameters of the quantization model comprise quantization weight parameters of the linear layer; and
carrying out scale transformation processing on the quantization model to obtain a target quantization model;
wherein performing the scaling process on the quantization model comprises:
calculating a scale transformation parameter of the linear layer based on the number of output neurons of the linear layer or a standard deviation of preparation weight parameters of the linear layer; and
and based on the scale transformation parameters of the linear layer, carrying out the scale transformation processing on the quantization weight parameters of the linear layer to obtain standard quantization weight parameters of the linear layer.
2. The compression and acceleration method of claim 1, wherein the linear layer comprises at least one selected from the group consisting of a convolutional layer, a recursive layer, and a fully-connected layer.
3. A compression and acceleration method according to claim 1 or 2, wherein the linear layer is not directly followed by a batch normalization layer.
4. A compression and acceleration method according to any of the claims 1-3, wherein quantizing parameters of the neural network model to obtain the quantized model comprises:
clamping the preparation weight parameter of the linear layer to obtain a clamping weight parameter of the linear layer; and
and carrying out quantization processing on the clamping weight parameters of the linear layer to obtain the quantization weight parameters of the linear layer.
5. The compression and acceleration method of claim 4, wherein calculating the scaling parameters of the linear layer based on the number of output neurons of the linear layer comprises:
calculating the scale transformation parameters of the linear layer according to a first scale transformation parameter calculation formula, wherein the first scale transformation parameter calculation formula is expressed as:
$$RSF = \frac{1}{\sqrt{\hat{n}\cdot \mathrm{VAR}(Q)}}$$

wherein RSF represents the scale transformation parameter of the linear layer, $\hat{n}$ represents the number of output neurons of the linear layer, Q represents the quantization weight matrix of the linear layer, and VAR(Q) represents the variance of the elements of the quantization weight matrix of the linear layer.
6. The compression and acceleration method of claim 5, wherein the number of bits of the quantization weight parameter of the linear layer is 1-8.
7. The compression and acceleration method of claim 6, wherein the number of bits of the quantization weight parameter of the linear layer is 1-2.
8. The compression and acceleration method of claim 4, wherein calculating the scaling parameters of the linear layer based on the number of output neurons of the linear layer comprises:
calculating the scale transformation parameters of the linear layer according to a second scale transformation parameter calculation formula, wherein the second scale transformation parameter calculation formula is expressed as:
$$RSF = \frac{1}{\sqrt{\hat{n}\cdot \mathrm{VAR}(\widetilde{W})}}$$

wherein RSF represents the scale transformation parameter of the linear layer, $\hat{n}$ represents the number of output neurons of the linear layer, $\widetilde{W}$ represents the auxiliary weight matrix of the linear layer, and $\mathrm{VAR}(\widetilde{W})$ represents the variance of the elements of the auxiliary weight matrix of the linear layer;
the auxiliary weight matrix $\widetilde{W}$ of the linear layer is constructed from the clamping weight matrix $W_c$ of the linear layer.
9. The compression and acceleration method of claim 4, wherein calculating the scaling parameters of the linear layer based on the standard deviation of the preparation weight parameters of the linear layer comprises:
calculating the scale transformation parameters of the linear layer according to a third scale transformation parameter calculation formula, wherein the third scale transformation parameter calculation formula is expressed as:
$$RSF = \sqrt{\frac{\mathrm{VAR}(W)}{\mathrm{VAR}(\widetilde{W})}}$$

wherein RSF represents the scale transformation parameter of the linear layer, W represents the preparation weight matrix of the linear layer, VAR(W) represents the variance of the elements of the preparation weight matrix of the linear layer, $\widetilde{W}$ represents the auxiliary weight matrix of the linear layer, and $\mathrm{VAR}(\widetilde{W})$ represents the variance of the elements of the auxiliary weight matrix of the linear layer;
the auxiliary weight matrix $\widetilde{W}$ of the linear layer is constructed from the clamping weight matrix $W_c$ of the linear layer.
10. The compression and acceleration method according to claim 8 or 9, wherein the number of bits of the quantization weight parameter of the linear layer is 1-8.
11. The compression and acceleration method of claim 10, wherein the number of bits of the quantization weight parameter of the linear layer is 3-8.
12. The compression and acceleration method according to any one of claims 5-11, wherein the scaling the quantization weight parameters of the linear layer based on the scaling parameters of the linear layer to obtain the standard quantization weight parameters of the linear layer comprises:
and carrying out the scale transformation processing on the quantization weight parameters of the linear layer according to a scale transformation formula, wherein the scale transformation formula is expressed as follows:
$$Q^{*}_{ij} = RSF \cdot Q_{ij}$$

wherein $Q^{*}$ represents the standard quantization weight matrix of the linear layer, $Q^{*}_{ij}$ represents the parameter in the i-th row and j-th column of the standard quantization weight matrix of the linear layer, Q represents the quantization weight matrix of the linear layer, $Q_{ij}$ represents the parameter in the i-th row and j-th column of the quantization weight matrix of the linear layer, and RSF represents the scale transformation parameter of the linear layer.
13. The compression and acceleration method according to any of the claims 4-12, wherein the clipping the preparation weight parameters of the linear layer to obtain the clipping weight parameters of the linear layer comprises:
performing the clamping processing on the preparation weight parameter of the linear layer according to a clamping formula, wherein the clamping formula is expressed as:
$$W_{c,ij} = \frac{\tanh(W_{ij})}{\max_{m,n}\left(\left|\tanh(W_{mn})\right|\right)}$$

wherein $W_c$ represents the clamping weight matrix of the linear layer, $W_{c,ij}$ represents the parameter in the i-th row and j-th column of the clamping weight matrix of the linear layer, W represents the preparation weight matrix of the linear layer, $W_{ij}$ represents the parameter in the i-th row and j-th column of the preparation weight matrix of the linear layer, $W_{mn}$ represents the parameter in the m-th row and n-th column of the preparation weight matrix of the linear layer, tanh(·) represents the hyperbolic tangent function, and max(·) represents a maximum-value function.
14. The compression and acceleration method of claim 13, wherein the performing the quantization process on the clamped weight parameters of the linear layer to obtain the quantized weight parameters of the linear layer comprises:
and carrying out the quantization processing on the clamp weight parameter of the linear layer according to a weight quantization formula, wherein the weight quantization formula is expressed as:
$$Q_{ij} = \frac{2}{2^b - 1}\,\mathrm{round}\!\left(\left(2^b - 1\right)\cdot\frac{W_{c,ij} + 1}{2}\right) - 1$$

wherein Q represents the quantization weight matrix of the linear layer, $Q_{ij}$ represents the parameter in the i-th row and j-th column of the quantization weight matrix of the linear layer, $W_{c,ij}$ represents the parameter in the i-th row and j-th column of the clamping weight matrix of the linear layer, b represents the number of bits of the quantization weight parameters of the linear layer, and round(·) represents a rounding function.
15. The compression and acceleration method of any of claims 4-14, further comprising:
and training the target quantization model by adopting the same training parameter configuration as the neural network model.
16. The compression and acceleration method of claim 15, wherein the training process of the target quantization model comprises: a forward propagation stage, a backward propagation stage and a standard quantization stage;
the forward propagation phase comprises: processing training input data by using a current target quantization model to obtain training output data, and calculating a loss value based on the training output data;
the back propagation phase comprises: calculating a gradient based on the loss value, and correcting parameters of the current neural network model based on the gradient to obtain an updated neural network model;
the standard quantization stage comprises: quantizing parameters of the updated neural network model to obtain an updated quantization model, and performing scale transformation processing on the updated quantization model to obtain an updated target quantization model.
17. The compression and acceleration method of claim 16 wherein the neural network model includes an activation layer that includes a PACT activation function represented as:
$$y = 0.5\left(\left|x\right| - \left|x - \alpha\right| + \alpha\right)$$

wherein y represents the output of the activation layer, x represents the input of the activation layer, and α represents the activation value parameter of the PACT activation function;
quantizing the parameters of the neural network model to obtain the quantization model further comprises:
performing the quantization processing on the output of the activation layer according to an activation value quantization formula, the activation value quantization formula being expressed as:
$$q = \mathrm{round}\!\left(\frac{\left(2^a - 1\right) y}{\alpha}\right)\cdot\frac{\alpha}{2^a - 1}$$

wherein q represents the quantized value of the output of the activation layer, y represents the output of the activation layer, a represents the number of bits of the quantized value of the output of the activation layer, and round(·) represents a rounding function.
18. The compression and acceleration method of claim 17, wherein the back propagation phase further comprises:
calculating an activation value gradient according to an activation value gradient formula, correcting a current activation value parameter based on the activation value gradient to obtain an updated activation value parameter,
the activation value gradient formula is expressed as:
$$\frac{\partial q}{\partial \alpha} = \begin{cases} \dfrac{q - y}{\alpha}, & x < \alpha \\ 1, & x \geq \alpha \end{cases}$$

wherein $\partial q / \partial \alpha$ represents the activation value gradient, x represents the input of the activation layer, y represents the output of the activation layer, q represents the quantized value of the output of the activation layer, and α represents the activation value parameter.
19. The compression and acceleration method according to any one of claims 15-18, wherein the training parameter configuration comprises: an initial learning rate, a learning rate adjustment scheme, a weight decay, a number of iterations over the training set, an optimizer, and a batch size.
20. The compression and acceleration method according to any of the claims 1-19, wherein prior to quantizing the parameters of the neural network model, the compression and acceleration method further comprises:
and pre-training the neural network model to obtain a preparation weight parameter of the neural network model.
21. The compression and acceleration method of claim 20, wherein the pre-training of the neural network model comprises:
parameters of the neural network model are initialized using a Kaiming (He) initialization scheme.
22. The compression and acceleration method of any one of claims 1-21, wherein the neural network model includes one of ResNet, MobileNet-V1, MobileNet-V2, and VGG-Net.
23. A method of data processing, comprising:
processing input data using the target quantization model obtained by the compression and acceleration method of any one of claims 1 to 22.
24. A data processing apparatus comprising:
a memory for non-transitory storage of computer readable instructions; and
a processor for executing computer readable instructions;
wherein the computer readable instructions, when executed by the processor, perform the compression and acceleration method of any one of claims 1-22 or perform the data processing method of claim 23.
25. A storage medium storing non-transitory computer readable instructions, wherein the non-transitory computer readable instructions, when executed by a computer, may perform instructions of the compression and acceleration method according to any one of claims 1-22 or may perform instructions of the data processing method according to claim 23.
CN201910893276.XA 2019-09-20 2019-09-20 Neural network model compression and acceleration method, data processing method and device Active CN110659725B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910893276.XA CN110659725B (en) 2019-09-20 2019-09-20 Neural network model compression and acceleration method, data processing method and device
PCT/IB2019/059565 WO2021053381A1 (en) 2019-09-20 2019-11-07 Compression and acceleration method for neural network model, and data processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910893276.XA CN110659725B (en) 2019-09-20 2019-09-20 Neural network model compression and acceleration method, data processing method and device

Publications (2)

Publication Number Publication Date
CN110659725A true CN110659725A (en) 2020-01-07
CN110659725B CN110659725B (en) 2023-03-31

Family

ID=69038294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910893276.XA Active CN110659725B (en) 2019-09-20 2019-09-20 Neural network model compression and acceleration method, data processing method and device

Country Status (2)

Country Link
CN (1) CN110659725B (en)
WO (1) WO2021053381A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783976A (en) * 2020-04-21 2020-10-16 北京大学 Neural network training process intermediate value storage compression method and device based on window gradient updating
CN111967608A (en) * 2020-08-06 2020-11-20 北京灵汐科技有限公司 Data processing method, device, equipment and storage medium
CN111967583A (en) * 2020-08-13 2020-11-20 北京嘀嘀无限科技发展有限公司 Method, apparatus, device and medium for compressing neural network
CN112085195A (en) * 2020-09-04 2020-12-15 西北工业大学 X-ADMM-based deep learning model environment self-adaption method
CN112598020A (en) * 2020-11-24 2021-04-02 深兰人工智能(深圳)有限公司 Target identification method and system
CN113222098A (en) * 2020-01-21 2021-08-06 上海商汤智能科技有限公司 Data processing method and related product
CN113469324A (en) * 2021-03-23 2021-10-01 中科创达软件股份有限公司 Model dynamic quantization method and device, electronic equipment and computer readable medium
CN113537340A (en) * 2021-07-14 2021-10-22 深圳思悦创新有限公司 Yolo target detection model compression method, system and storage medium
WO2023020456A1 (en) * 2021-08-16 2023-02-23 北京百度网讯科技有限公司 Network model quantification method and apparatus, device, and storage medium
CN117391175A (en) * 2023-11-30 2024-01-12 中科南京智能技术研究院 Pulse neural network quantification method and system for brain-like computing platform

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11687764B2 (en) * 2020-04-17 2023-06-27 Samsung Electronics Co., Ltd. System and method for increasing utilization of dot-product based neural network accelerator
CN113554147A (en) * 2021-04-27 2021-10-26 北京小米移动软件有限公司 Sample feature processing method and device, electronic equipment and storage medium
CN113920720A (en) * 2021-09-17 2022-01-11 上海吞山智能科技有限公司 Highway tunnel equipment fault processing method and device and electronic equipment
WO2024060002A1 (en) * 2022-09-20 2024-03-28 华为技术有限公司 Communication method and related device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170286830A1 (en) * 2016-04-04 2017-10-05 Technion Research & Development Foundation Limited Quantized neural network training and inference
CN107480770A (en) * 2017-07-27 2017-12-15 中国科学院自动化研究所 The adjustable neutral net for quantifying bit wide quantifies the method and device with compression
CN108334945A (en) * 2018-01-30 2018-07-27 中国科学院自动化研究所 The acceleration of deep neural network and compression method and device
US20190114511A1 (en) * 2017-10-16 2019-04-18 Illumina, Inc. Deep Learning-Based Techniques for Training Deep Convolutional Neural Networks
CN109840589A (en) * 2019-01-25 2019-06-04 深兰人工智能芯片研究院(江苏)有限公司 A kind of method, apparatus and system running convolutional neural networks on FPGA
US20190171935A1 (en) * 2017-12-04 2019-06-06 International Business Machines Corporation Robust gradient weight compression schemes for deep learning applications
CN110096647A (en) * 2019-05-10 2019-08-06 腾讯科技(深圳)有限公司 Optimize method, apparatus, electronic equipment and the computer storage medium of quantitative model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10373050B2 (en) * 2015-05-08 2019-08-06 Qualcomm Incorporated Fixed point neural network based on floating point neural network quantization
WO2017031630A1 (en) * 2015-08-21 2017-03-02 中国科学院自动化研究所 Deep convolutional neural network acceleration and compression method based on parameter quantification


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222098A (en) * 2020-01-21 2021-08-06 上海商汤智能科技有限公司 Data processing method and related product
CN111783976A (en) * 2020-04-21 2020-10-16 北京大学 Neural network training process intermediate value storage compression method and device based on window gradient updating
CN111967608A (en) * 2020-08-06 2020-11-20 北京灵汐科技有限公司 Data processing method, device, equipment and storage medium
WO2022028577A1 (en) * 2020-08-06 2022-02-10 北京灵汐科技有限公司 Processing mode determining method, and data processing method
CN111967583A (en) * 2020-08-13 2020-11-20 北京嘀嘀无限科技发展有限公司 Method, apparatus, device and medium for compressing neural network
CN112085195A (en) * 2020-09-04 2020-12-15 西北工业大学 X-ADMM-based deep learning model environment self-adaption method
CN112598020A (en) * 2020-11-24 2021-04-02 深兰人工智能(深圳)有限公司 Target identification method and system
CN113469324A (en) * 2021-03-23 2021-10-01 中科创达软件股份有限公司 Model dynamic quantization method and device, electronic equipment and computer readable medium
CN113469324B (en) * 2021-03-23 2024-03-22 中科创达软件股份有限公司 Model dynamic quantization method, device, electronic equipment and computer readable medium
CN113537340A (en) * 2021-07-14 2021-10-22 深圳思悦创新有限公司 Yolo target detection model compression method, system and storage medium
WO2023020456A1 (en) * 2021-08-16 2023-02-23 北京百度网讯科技有限公司 Network model quantification method and apparatus, device, and storage medium
CN117391175A (en) * 2023-11-30 2024-01-12 中科南京智能技术研究院 Pulse neural network quantification method and system for brain-like computing platform

Also Published As

Publication number Publication date
WO2021053381A1 (en) 2021-03-25
CN110659725B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN110659725B (en) Neural network model compression and acceleration method, data processing method and device
CN110852439B (en) Data processing method and device and storage medium
US11875268B2 (en) Object recognition with reduced neural network weight precision
US12008461B2 (en) Method for determining neuron events based on cluster activations and apparatus performing same method
US11481613B2 (en) Execution method, execution device, learning method, learning device, and recording medium for deep neural network
CN107622303B (en) Method for neural network and device for performing the method
US11562247B2 (en) Neural network activation compression with non-uniform mantissas
WO2020167480A1 (en) Adjusting activation compression for neural network training
EP3877913A1 (en) Training neural network accelerators using mixed precision data formats
CN111767979A (en) Neural network training method, image processing method, and image processing apparatus
CN111095302A (en) Compression of sparse deep convolutional network weights
WO2020142192A1 (en) Neural network activation compression with narrow block floating-point
WO2022228425A1 (en) Model training method and apparatus
CN115129386A (en) Efficient optimization for neural network deployment and execution
CN113128478A (en) Model training method, pedestrian analysis method, device, equipment and storage medium
Mamatkulovich Lightweight residual layers based convolutional neural networks for traffic sign recognition
CN114266897A (en) Method and device for predicting pox types, electronic equipment and storage medium
CN116863194A (en) Foot ulcer image classification method, system, equipment and medium
CN115860100A (en) Neural network model training method and device and computing equipment
WO2024060839A1 (en) Object operation method and apparatus, computer device, and computer storage medium
CN114298289A (en) Data processing method, data processing equipment and storage medium
CN116958728A (en) Method and memory device for training neural network for image recognition
Ososkov et al. Two-stage approach to image classification by deep neural networks
US20230410496A1 (en) Omni-scale convolution for convolutional neural networks
CN116758618B (en) Image recognition method, training device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant