CN113159312A - Method, computer system and storage medium for compressing neural network model - Google Patents

Method, computer system and storage medium for compressing neural network model

Info

Publication number
CN113159312A
Authority
CN
China
Prior art keywords
neural network
computer
weight coefficients
weight
unity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110066485.4A
Other languages
Chinese (zh)
Other versions
CN113159312B (en)
Inventor
蒋薇
王炜
刘杉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent America LLC
Original Assignee
Tencent America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/088,061 (published as US 2021/0232891 A1)
Application filed by Tencent America LLC filed Critical Tencent America LLC
Publication of CN113159312A
Application granted granted Critical
Publication of CN113159312B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Abstract

The present disclosure provides a method, computer system, and storage medium for compressing a neural network model. The method comprises the following steps: reordering at least one index corresponding to a multidimensional tensor associated with a neural network. A set of weight coefficients associated with the at least one reordered index is determined. The model of the neural network is compressed according to the determined set of weight coefficients.

Description

Method, computer system and storage medium for compressing neural network model
Cross Reference to Related Applications
This application claims priority to United States Provisional Patent Application No. 62/964,996, filed with the United States Patent and Trademark Office on January 23, 2020, and United States Patent Application No. 17/088,061, filed with the United States Patent and Trademark Office on November 3, 2020, both of which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to the field of data processing, and more particularly to neural networks.
Background
The International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Moving Picture Experts Group (MPEG) (JTC 1/SC 29/WG 11) has been actively searching for potential needs for standardization of future video codec technology for visual analysis and understanding. ISO adopted the Compact Descriptors for Visual Search (CDVS) standard as a still-image standard in 2015, which extracts feature representations for image similarity matching. The Compact Descriptors for Video Analysis (CDVA) standard, listed as Part 15 of MPEG 7 and ISO/IEC 15938-15 and finalized in 2018, extracts global and local, hand-designed and Deep Neural Network (DNN) based feature descriptors for video segments. The success of DNNs in a wide range of video applications, such as semantic classification, object detection/recognition, object tracking, and video quality enhancement, creates a strong need for compressing DNN models.
Disclosure of Invention
Embodiments of the present disclosure relate to methods, systems, and computer-readable storage media for compressing neural network models, which can significantly compress a neural network model and improve its computational efficiency.
According to one aspect, a method for compressing a neural network model is provided. The method can include reordering at least one index corresponding to a multidimensional tensor associated with a neural network. A set of weight coefficients associated with the at least one reordered index is determined. The model of the neural network is compressed according to the determined set of weight coefficients.
According to another aspect, a computer system for compressing a neural network model is provided. The computer system can include a reordering module to reorder at least one index corresponding to a multidimensional tensor associated with a neural network. A set of weight coefficients associated with the at least one reordered index is determined. The model of the neural network is compressed according to the determined set of weight coefficients.
According to yet another aspect, a non-transitory computer-readable medium for compressing a neural network model is provided. The non-transitory computer-readable medium may store program instructions that are executable by a processor. The program instructions, when executed by the processor, cause the processor to perform a method that may accordingly include reordering at least one index corresponding to a multidimensional tensor associated with a neural network. A set of weight coefficients associated with the at least one reordered index is determined. A model of the neural network is compressed based on the determined set of weight coefficients.
With the method, system, and computer-readable storage medium for compressing a neural network model provided by embodiments of the present disclosure, the efficiency of compressing the learned weight coefficients can be improved and computation using the optimized weight coefficients is accelerated, so that the neural network model can be significantly compressed and its computational efficiency improved.
Drawings
These and other objects, features and advantages will become apparent from the following detailed description of illustrative embodiments which is to be read in connection with the accompanying drawings. The various features of the drawings are not to scale, as they are intended to be clearly understood by those skilled in the art in conjunction with the detailed description. In the drawings:
FIG. 1 illustrates a networked computer environment, according to at least one embodiment;
FIG. 2 is a block diagram of a neural network model compression system in accordance with at least one embodiment;
FIG. 3 illustrates an operational flow diagram of steps performed by a program compressing a neural network model in accordance with at least one embodiment;
FIG. 4 is a block diagram of internal and external components of the computer and server depicted in FIG. 1, in accordance with at least one embodiment;
FIG. 5 is a block diagram of an exemplary cloud computing environment including the computer system depicted in FIG. 1, in accordance with at least one embodiment; and
FIG. 6 is a block diagram of functional layers of the exemplary cloud computing environment of FIG. 5 in accordance with at least one embodiment.
Detailed Description
Detailed embodiments of the claimed structures and methods are disclosed herein; however, it is to be understood that the disclosed embodiments are merely illustrative of the claimed structures and methods that may be embodied in various forms. These structures and methods may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope to those skilled in the art. In the description, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments.
Embodiments of the present disclosure relate generally to the field of data processing, and more particularly, to neural networks. The exemplary embodiments described below provide systems, methods, and computer programs for compressing neural network models. Some embodiments therefore have the capacity to improve the field of computing by allowing increased compression efficiency of learned weight coefficients, which can significantly reduce the size of a deep neural network model.
As previously mentioned, ISO/IEC MPEG (JTC 1/SC 29/WG 11) has been actively searching for potential needs for standardization of future video codec technology for visual analysis and understanding. ISO adopted the CDVS standard as a still-image standard in 2015, which extracts feature representations for image similarity matching. The CDVA standard, listed as Part 15 of MPEG 7 and ISO/IEC 15938-15 and finalized in 2018, extracts global and local, hand-designed and DNN-based feature descriptors for video segments. The success of DNNs in a wide range of video applications, such as semantic classification, object detection/recognition, object tracking, and video quality enhancement, creates a strong need for compressing DNN models.
Thus, MPEG is actively working on the Coded Representation of Neural Networks (NNR) standard, which encodes DNN models to save both storage and computation. There are several methods of learning compact DNN models. Their goal is to remove unimportant weight coefficients, under the assumption that the smaller the values of the weight coefficients, the lower their importance. Several network pruning methods have been proposed to pursue this goal explicitly, either by adding sparsity-promoting regularization terms to the network training objective or by greedily removing network parameters. From the perspective of compressing a DNN model, after a compact network model has been learned, the weight coefficients can be further compressed by quantization followed by entropy coding. Such further compression processes can significantly reduce the storage size of the DNN model, which is essential for deploying the model on mobile devices, chips, and the like.
Unification regularization of the weights can improve the compression efficiency of the subsequent compression process. An iterative network retraining/refinement framework is used to jointly optimize the original training target and the weight unification loss, which includes a compression rate loss, a unification distortion loss, and a computation speed loss, so that the learned network weight coefficients preserve the original target performance, are suitable for further compression, and speed up computation using the learned weight coefficients. The proposed method can be applied to compress an original pre-trained DNN model. It can also be used as an additional processing module to further compress any pruned DNN model.
Unification regularization can improve the efficiency of further compression of the learned weight coefficients, and computation using the optimized weight coefficients is accelerated. This can significantly reduce the DNN model size and speed up inference computation. Through the iterative retraining process, the performance of the original training target can be preserved, while compression and computational efficiency are gained. The iterative retraining process also gives the flexibility to introduce different loss terms at different times, so that the system can focus on different targets during the optimization process. The methods, computer systems, and computer programs disclosed herein can generally be applied to data sets having different data forms. The input/output data are typically 4D tensors, which can be real video clips, images, or extracted feature maps.
Aspects are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer-readable media according to various embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
Referring now to FIG. 1, a functional block diagram of a networked computer environment illustrates a neural network model compression system 100 (hereinafter "system") for compressing a neural network model. It should be understood that FIG. 1 provides only an illustration of one embodiment and is not intended to suggest any limitation as to the environments in which the different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements.
The system 100 may include a computer 102 and a server computer 114. The computer 102 may communicate with a server computer 114 over a communication network 110 (hereinafter "network"). The computer 102 includes a processor 104 and a software program 108 stored on a data storage device 106 and is capable of interfacing with a user and communicating with a server computer 114. As will be discussed below with reference to fig. 4, computer 102 may include internal components 800A and external components 900A, respectively, and server computer 114 may include internal components 800B and external components 900B, respectively. The computer 102 may be, for example, a mobile device, a telephone, a personal digital assistant, a netbook, a laptop computer, a tablet computer, a desktop computer, or any type of computing device capable of running programs, accessing a network, and accessing a database.
As discussed below in connection with fig. 5 and 6, the server computer 114 may also operate in a cloud computing Service model, such as Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS). The server computer 114 may also be located in a cloud computing deployment model, such as a private cloud, a community cloud, a public cloud, or a hybrid cloud.
The server computer 114 for compressing the neural network model can run a Neural Network Model Compression Program (hereinafter referred to as "Program") 116 that interacts with the database 112. The neural network model compression method is explained in more detail below in conjunction with FIG. 3. In one embodiment, computer 102 may operate as an input device including a user interface, and program 116 may run primarily on server computer 114. In an alternative embodiment, the program 116 may run primarily on at least one computer 102, while the server computer 114 may be used to process and store data used by the program 116. It should be noted that the program 116 may be a stand-alone program or may be integrated into a larger neural network model compression program.
It should be noted, however, that in some instances, processing of program 116 may be shared between computer 102 and server computer 114 in any proportion. In another embodiment, the program 116 may operate on more than one computer, a server computer, or some combination of computers and server computers, such as multiple computers 102 in communication with a single server computer 114 over the network 110. In another embodiment, for example, the program 116 may operate on multiple server computers 114, the multiple server computers 114 in communication with multiple client computers over the network 110. Alternatively, the program may run on a network server that communicates with the server and a plurality of client computers over a network.
Network 110 may include wired connections, wireless connections, fiber optic connections, or some combination thereof. In general, the network 110 may be any combination of connections and protocols that support communication between the computer 102 and the server computer 114. Network 110 may include various types of networks, such as, for example, a Local Area Network (LAN), a Wide Area Network (WAN) (e.g., the Internet), a telecommunications network (e.g., a Public Switched Telephone Network (PSTN)), a wireless network, a public switched network, a satellite network, a cellular network (e.g., a fifth generation (5G) network, a Long-Term Evolution (LTE) network, a third generation (3G) network, a Code Division Multiple Access (CDMA) network, etc.), a Public Land Mobile Network (PLMN), a Metropolitan Area Network (MAN), a private network, an ad hoc network, an intranet, a fiber-based network, and/or the like, and/or combinations of these or other types of networks.
The number and arrangement of devices and networks shown in fig. 1 are provided as examples. Indeed, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or devices and/or networks arranged differently than those shown in fig. 1. Further, two or more of the devices shown in fig. 1 may be implemented in a single device, or a single device shown in fig. 1 may be implemented as multiple distributed devices. Additionally or alternatively, a set of devices (e.g., at least one device) of system 100 may perform at least one function described as being performed by another set of devices of system 100.
Referring now to FIG. 2, a neural network model compression system 200 is depicted. The neural network model compression system 200 may be used as a framework for the iterative learning process. The neural network model compression system 200 may include a unified index order and method selection module 202, a weight unification module 204, a network forward computation module 206, a compute target loss module 208, a compute gradient module 210, and a back propagation and weight update module 212.
Let D = {(x, y)} denote a data set in which a target y is assigned to an input x. Let Θ = {w} denote the set of weight coefficients of a DNN. The goal of neural network training is to learn an optimal set of weight coefficients Θ so that a target loss £(D|Θ) is minimized. For example, in previous network pruning methods, the target loss £(D|Θ) has two parts, an empirical data loss £D(D|Θ) and a sparsity-promoting regularization loss £R(Θ):

£(D|Θ) = £D(D|Θ) + λR·£R(Θ),

where λR ≥ 0 is a hyperparameter that balances the contributions of the data loss and the regularization loss.
The sparsity-promoting regularization loss places regularization over the entire set of weight coefficients, and the resulting sparse weights have only a weak relationship to inference efficiency or computational acceleration. From another perspective, after pruning, the sparse weights can be subjected to another network training process in which an optimal set of weight coefficients is learned that improves the efficiency of further model compression.
The present disclosure introduces a weight unification loss £U(Θ), which is optimized together with the original target loss:

£(D|Θ) = £D(D|Θ) + λR·£R(Θ) + λU·£U(Θ),

where λU ≥ 0 is a hyperparameter that balances the contribution of the original training target and the weight unification. By jointly optimizing £(D|Θ), an optimal set of weight coefficients can be obtained, and this optimal set of weights can greatly aid the efficiency of further compression. The weight unification loss takes into account how the convolution operation is carried out by the underlying GEMM matrix multiplication process, so that the optimized weight coefficients can also greatly accelerate computation. Notably, the weight unification loss can be viewed as an additional regularization term to the general target loss, with (when λR > 0) or without (when λR = 0) general regularization. Moreover, the method can be flexibly applied to any regularization loss £R(Θ).
In at least one embodiment, the weight unification loss £U(Θ) further comprises a compression rate loss £C(Θ), a unification distortion loss £I(Θ), and a computation speed loss £S(Θ):

£U(Θ) = £I(Θ) + λC·£C(Θ) + λS·£S(Θ),

where λC and λS are hyperparameters balancing the individual terms. These loss terms are described in detail in later sections. An iterative optimization process is further proposed for both learning effectiveness and learning efficiency. The part of the weight coefficients that already satisfies the desired structure can be fixed, and the non-fixed part of the weight coefficients can be updated by back-propagating the training loss. By performing these two steps iteratively, more and more weights are gradually determined, and the joint loss is gradually and efficiently optimized.
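As an illustration of how the terms above combine, the following is a minimal sketch, in Python/PyTorch, of assembling the joint objective from precomputed loss terms. The function name, argument names, and default hyperparameter values are illustrative assumptions, not taken from the disclosure.

```python
import torch

def joint_loss(data_loss, reg_loss, distortion_loss, compression_loss, speed_loss,
               lambda_r=0.0, lambda_u=1.0, lambda_c=1.0, lambda_s=1.0):
    # £(D|Θ) = £D + λR·£R + λU·£U,  with  £U = £I + λC·£C + λS·£S
    unification_loss = distortion_loss + lambda_c * compression_loss + lambda_s * speed_loss
    return data_loss + lambda_r * reg_loss + lambda_u * unification_loss

# Illustrative usage with scalar tensors standing in for the individual loss terms.
loss = joint_loss(torch.tensor(0.9), torch.tensor(12.3),
                  torch.tensor(0.05), torch.tensor(0.02), torch.tensor(0.4))
```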
Moreover, in at least one embodiment, each layer is compressed individually, so £U(Θ) can be further written as:

£U(Θ) = Σj £U(Wj),

where the sum runs over the N layers, £U(Wj) is the unification loss defined over the j-th layer, N is the total number of layers over which the loss is measured, and Wj denotes the weight coefficients of the j-th layer. Because £U(Wj) is computed independently for each layer, the subscript j is omitted in the remainder of this disclosure without loss of generality.
For each network layer, its weight tensor W is a general 5-dimensional (5D) tensor of size (ci, k1, k2, k3, co). The input of the layer is a 4D tensor A of size (hi, wi, di, ci), and the output of the layer is a 4D tensor B of size (ho, wo, do, co). The sizes ci, k1, k2, k3, co, hi, wi, di, ho, wo, do are integers greater than or equal to 1. When any of the sizes ci, k1, k2, k3, co, hi, wi, di, ho, wo, do takes the value 1, the corresponding tensor reduces to a lower dimension. Each entry in each tensor is a floating-point number. Let M denote a 5D binary mask of the same size as W, where each entry in M is a binary value 0/1 indicating whether the corresponding weight coefficient is pruned or kept. M is introduced in association with W to cope with the case in which W comes from a pruned DNN model, where some connections between neurons have been removed from the computation. When W comes from the original, unpruned, pre-trained model, all entries in M take the value 1. The output B is computed through the convolution operation based on A, M and W: each entry of the output B is obtained by summing, over the ci·k1·k2·k3 kernel positions, the products of the masked weight coefficients (M multiplied element-wise by W) and the corresponding entries of the input A. The parameters hi, wi and di (ho, wo and do) are the height, width and depth of the input tensor A (output tensor B). The parameter ci (co) is the number of input (output) channels. The parameters k1, k2 and k3 are the sizes of the convolution kernel along the height, width and depth axes, respectively. That is, for each output channel v = 1, …, co, the operation can be viewed as convolving the input A with a 4D weight tensor Wv of size (ci, k1, k2, k3).
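The masked convolution described above can be illustrated with a short sketch. This is an assumed realization using PyTorch's conv3d and the tensor layouts stated in the text (A of size (hi, wi, di, ci), W and M of size (ci, k1, k2, k3, co)); the stride, padding, and "valid" output size are illustrative choices, not prescribed by the disclosure.

```python
import torch
import torch.nn.functional as F

def masked_conv(A, W, M):
    """Sketch of B = conv(A, M * W) for one layer.

    Assumed shapes: A is (hi, wi, di, ci); W and M are (ci, k1, k2, k3, co),
    with entries of M in {0, 1}. Returns B with shape (ho, wo, do, co)."""
    masked_W = W * M                            # pruned coefficients contribute zero
    # Rearrange to PyTorch's conv3d layout:
    #   input  (N, ci, hi, wi, di), weight (co, ci, k1, k2, k3)
    a = A.permute(3, 0, 1, 2).unsqueeze(0)
    w = masked_W.permute(4, 0, 1, 2, 3)
    b = F.conv3d(a, w)                          # stride 1, no padding ("valid")
    return b.squeeze(0).permute(1, 2, 3, 0)     # back to (ho, wo, do, co)

# Example: ci=4, k1=k2=k3=3, co=8, 16x16x16 input
A = torch.randn(16, 16, 16, 4)
W = torch.randn(4, 3, 3, 3, 8)
M = torch.ones_like(W)
B = masked_conv(A, W, M)                        # shape (14, 14, 14, 8)
```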
The order of the summation operations can be changed. The 5D weight tensor can be reshaped into a 3D tensor of size (ci, co, k), where k = k1·k2·k3. The order of the reshaped indices along the k-axis is determined by a reordering algorithm in the reshaping process, which is described in detail later.
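A minimal sketch of the 5D-to-3D reshaping follows, assuming PyTorch and a default row-major ordering of (k1, k2, k3) along the k-axis before any reordering is applied; the function name is illustrative.

```python
import torch

def reshape_to_3d(W):
    """Reshape a 5D weight tensor of size (ci, k1, k2, k3, co) into a 3D tensor
    of size (ci, co, k) with k = k1*k2*k3. The default k-axis order is row-major
    over (k1, k2, k3); the reordering algorithm described later may permute it."""
    ci, k1, k2, k3, co = W.shape
    return W.permute(0, 4, 1, 2, 3).reshape(ci, co, k1 * k2 * k3)

W = torch.randn(4, 3, 3, 3, 8)
W3d = reshape_to_3d(W)    # shape (4, 8, 27)
```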
The desired structure of the weight coefficients is designed by considering two aspects. First, the structure of the weight coefficients should be consistent with the underlying GEMM matrix multiplication process by which the convolution operation is implemented, so that inference computation using the learned weight coefficients is accelerated. Second, the structure of the weight coefficients should help improve quantization and entropy coding efficiency for further compression. In at least one embodiment, a block-wise structure of the weight coefficients of each layer is used in the 3D reshaped weight tensor. Specifically, the 3D tensor can be partitioned into blocks of size (gi, go, gk), and all coefficients within a block can be unified. The unified weights in a block are set to follow a predefined unification rule, e.g., all values are set to be the same, so that one value can represent the entire block in the quantization process, which yields high efficiency. There can be multiple weight unification rules, each associated with a unification distortion loss that measures the error introduced by applying the rule. For example, instead of setting the weights to be the same, the weights can be set to have the same absolute value while keeping their original signs. Given the designed structure, the part of the weight coefficients to be fixed is determined during the iterations by considering the unification distortion loss, the estimated compression rate loss, and the estimated speed loss. A neural network training process is then performed to update the remaining non-fixed weight coefficients through a back-propagation mechanism.
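The following sketch illustrates the two block-wise unification rules just mentioned: setting all weights in a block to their mean, or giving them a common absolute value while keeping their signs. The function names, the block size, the "abs_mean" rule name, and the use of the mean absolute value are illustrative assumptions.

```python
import torch

def unify_block(block, method="mean"):
    """Two assumed unification rules for one block of the reshaped 3D tensor."""
    if method == "mean":
        # One value represents the whole block.
        return torch.full_like(block, block.mean().item())
    if method == "abs_mean":
        # Keep each coefficient's sign; share a single absolute value.
        return torch.sign(block) * block.abs().mean()
    raise ValueError(f"unknown unification method: {method}")

def unify_blockwise(W3d, block=(4, 4, 2), method="mean"):
    """Partition a (ci, co, k) tensor into blocks of size (gi, go, gk) and unify
    the coefficients inside each block; ragged edge blocks are handled by slicing."""
    gi, go, gk = block
    out = W3d.clone()
    ci, co, k = W3d.shape
    for i in range(0, ci, gi):
        for o in range(0, co, go):
            for j in range(0, k, gk):
                b = out[i:i+gi, o:o+go, j:j+gk]
                out[i:i+gi, o:o+go, j:j+gk] = unify_block(b, method)
    return out
```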
The overall framework of the iterative retraining/fine-tuning process iteratively alternates two steps to gradually optimize the joint loss. Given a pre-trained DNN model with weight coefficients W and a mask M, which may be either a pruned sparse model or an unpruned non-sparse model, in a first step the unified index order and method selection module 202 determines an index order I(W) = [i0, …, ik], where k = k1·k2·k3, for reshaping the weight coefficients W (and the corresponding mask M) into the reshaped 3D tensor of the weights W. Specifically, the reshaped 3D tensor of the weights W is partitioned into superblocks of size (gi, go, gk). Let S denote a superblock. Based on the weight unification loss of the weight coefficients within the superblock S, I(W) is determined separately for each superblock S. The size of the superblock is usually chosen according to the later compression method. For example, in at least one embodiment, a superblock of size (64, 64, 2) can be selected to coincide with the 3-dimensional Coding Tree Unit (CTU3D) used by later compression processes.
Each superblock S is further partitioned into blocks of size (di, do, dk), and the weights within a block are unified. For each superblock S, a weight unifier is used to unify the weight coefficients within the blocks of S. Let b denote a block in S; the weight coefficients in b can be unified in different ways. For example, the weight unifier may reset all weights in b to be the same, e.g., to the average of all weights in b. In this case, an LN norm of the weight coefficients in b (e.g., the L2 norm, i.e., the variance of the weights in b) reflects the unification distortion loss £I(b) of using the mean to represent the entire block. Alternatively, the weight unifier may set all weights to have the same absolute value while keeping their original signs. In this case, an LN norm of the absolute values of the weights in b can be used to measure £I(b). In other words, given a weight unification method u, the weight unifier unifies the weights in b using method u, with an associated unification distortion loss £I(u, b). The unification distortion loss £I(u, S) of the entire superblock S can be determined by averaging £I(u, b) over all blocks in S, i.e., £I(u, S) = average_b(£I(u, b)).
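A hedged sketch of this distortion measure follows: £I(u, b) is computed per block, and £I(u, S) averages it over the blocks of a superblock. The normalization by block size and the exact norm used are assumptions made for illustration only.

```python
import torch

def unification_distortion(block, method="mean"):
    """Assumed £I(u, b): error of representing block b with rule u.
    For the set-to-mean rule this is the (biased) variance of the block;
    for the same-absolute-value rule, the variance of |w|."""
    if method == "mean":
        return ((block - block.mean()) ** 2).mean()
    if method == "abs_mean":
        return ((block.abs() - block.abs().mean()) ** 2).mean()
    raise ValueError(method)

def superblock_distortion(S, block=(4, 4, 2), method="mean"):
    """£I(u, S) = average of £I(u, b) over all blocks b in superblock S."""
    di, do, dk = block
    losses = []
    for i in range(0, S.shape[0], di):
        for o in range(0, S.shape[1], do):
            for j in range(0, S.shape[2], dk):
                losses.append(unification_distortion(S[i:i+di, o:o+do, j:j+dk], method))
    return torch.stack(losses).mean()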
Similarly, the compression rate loss £C(u, S) reflects the compression efficiency of unifying the weights in superblock S using method u. For example, when all weights in a block are set to be the same, only one number is needed to represent the entire block, and the compression rate is r_compression = gi·go·gk. £C(u, S) can then be defined as 1/r_compression.
The speed loss £S(u, S) reflects the estimated computation speed of using the weight coefficients in S unified by method u; £S(u, S) is a function of the number of multiplications in the computation using the unified weight coefficients.
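The compression rate loss and speed loss can be sketched as simple scalar functions. The speed-loss proxy below (fraction of remaining multiplications) is an assumption; the disclosure only states that £S is a function of the multiplication count.

```python
def compression_loss(block_size):
    """Assumed £C(u, S): when a whole block is represented by one number, the
    compression rate is the block volume, and the loss is its reciprocal."""
    di, do, dk = block_size
    r_compression = di * do * dk
    return 1.0 / r_compression

def speed_loss(num_unified_multiplications, num_original_multiplications):
    """Assumed £S(u, S) proxy: fraction of multiplications remaining after
    weight unification."""
    return num_unified_multiplications / num_original_multiplications
```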
At this point, for each possible way of reordering the indices to generate the 3D tensor of the weights W, and for each possible method u for unifying the weights by the weight unifier, the weight unification loss £U(u, S) can be computed based on £I(u, S), £C(u, S) and £S(u, S). An optimal weight unification method u* and an optimal reordered index order I(W) can be chosen whose combination has the smallest weight unification loss £U(u, S). When k is small, the best I(W) and u* can be found by exhaustive search. For large k, other methods can be used to find sub-optimal I(W) and u*. The present disclosure does not impose any limitation on the specific way of determining I(W) and u*.
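A sketch of the exhaustive search over index orders and unification methods follows, reusing the helper functions sketched above. For simplicity it scores the whole reshaped tensor as a single superblock and weights the loss terms equally, whereas the disclosure determines I(W) per superblock and leaves the weighting open; the search is feasible only for small k.

```python
import itertools
import torch

def select_order_and_method(W5d, methods=("mean", "abs_mean"), block=(4, 4, 2)):
    """Try every permutation of the k-axis index order and every unification
    method; keep the combination with the smallest unification loss."""
    ci, k1, k2, k3, co = W5d.shape
    k = k1 * k2 * k3
    base = W5d.permute(0, 4, 1, 2, 3).reshape(ci, co, k)
    best = None
    for order in itertools.permutations(range(k)):        # candidate I(W)
        reordered = base[:, :, list(order)]
        for u in methods:                                  # candidate method u
            loss = superblock_distortion(reordered, block, u) \
                   + compression_loss(block)               # £I + £C (speed term omitted)
            if best is None or loss < best[0]:
                best = (loss, list(order), u)
    return best  # (loss, I(W), u*)
```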
Once the index order I(W) and the weight unification method u* have been determined for each superblock S, the goal turns to finding an updated optimal set of weight coefficients W and a corresponding weight mask M by iteratively minimizing the joint loss. Specifically, for the t-th iteration, the current weight coefficients W(t-1) and mask M(t-1) are available. In addition, a weight unification mask Q(t-1) is maintained throughout the training process. The weight unification mask Q(t-1) has the same shape as W(t-1) and records whether each corresponding weight coefficient has been unified. Then, unified weight coefficients WU(t-1) and a new unification mask Q(t-1) are computed by the weight unification module 204. In the weight unification module 204, the weight coefficients in S are reordered according to the determined index order I(W), and the superblocks are sorted in ascending order of their unification loss £U(u*, S). Given a hyperparameter q, the top q superblocks are selected for unification. The weight unifier then unifies the blocks in each selected superblock S using the correspondingly determined method u*, obtaining unified weights WU(t-1) and a weight mask MU(t-1). The corresponding entries in the unification mask Q(t-1) are marked as unified. In at least one embodiment, MU(t-1) differs from M(t-1) in that, for blocks containing both pruned and unpruned weight coefficients, the weight unifier resets the originally pruned weight coefficients to non-zero values and changes the corresponding entries in MU(t-1). For other types of blocks, MU(t-1) naturally remains unchanged.
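One unification step of the weight unification module 204 might look like the following sketch, which reuses the block-wise helpers above: score superblocks by a unification loss, unify the q best-scoring ones, and mark them in the unification mask Q. The shapes, block sizes, and the use of the distortion term alone as the ranking score are assumptions, not requirements of the disclosure.

```python
import torch

def unify_top_q_superblocks(W3d, Q, superblock=(64, 64, 2), block=(4, 4, 2),
                            method="mean", q=4):
    """Unify the q superblocks with the smallest unification loss and mark their
    entries in Q so they stay fixed during retraining."""
    gi, go, gk = superblock
    ci, co, k = W3d.shape
    scored = []
    for i in range(0, ci, gi):
        for o in range(0, co, go):
            for j in range(0, k, gk):
                sl = (slice(i, i + gi), slice(o, o + go), slice(j, j + gk))
                scored.append((superblock_distortion(W3d[sl], block, method), sl))
    scored.sort(key=lambda t: float(t[0]))                 # ascending unification loss
    W_u, Q_new = W3d.clone(), Q.clone()
    for _, sl in scored[:q]:                               # top-q superblocks
        W_u[sl] = unify_blockwise(W3d[sl], block, method)
        Q_new[sl] = 1                                      # mark as unified/fixed
    return W_u, Q_new
```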
The weight coefficients marked as unified in Q(t-1) are fixed, and the remaining non-fixed weight coefficients of W(t-1) are updated by a neural network training process, resulting in updated W(t) and M(t).
Let D' = {(x, y)} denote a training data set, where D' can be the same as the original data set D on which the pre-trained weight coefficients W were obtained, or can be a different data set having the same data distribution as the original data set D. In a second step, the network forward computation module 206 passes each input x through the current network using the current unified weight coefficients WU(t-1) and mask MU(t-1), producing an estimated output ŷ. Based on the ground-truth annotation y and the estimated output ŷ, the compute target loss module 208 computes a target training loss £D(D'|Θ). The gradient of the target loss, G(WU(t-1)), can then be computed by the compute gradient module 210. The automatic gradient computation methods used by deep learning frameworks such as TensorFlow or PyTorch can be used to compute G(WU(t-1)). Based on the gradient G(WU(t-1)) and the unification mask Q(t-1), the non-fixed weight coefficients of WU(t-1) and the corresponding mask MU(t-1) can be updated through back-propagation by the back propagation and weight update module 212. The retraining process is itself iterative: multiple iterations are typically performed to update the non-fixed portion of WU(t-1) and the corresponding M(t-1), for example, until the target loss converges. The system then proceeds to the next iteration t, in which, given a new hyperparameter q(t), new unified weight coefficients WU(t), a mask MU(t), and a corresponding unification mask Q(t) are computed by the weight unification process according to WU(t-1), u* and I(W).
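A minimal PyTorch sketch of one retraining iteration in this second step follows: gradients of coefficients marked as unified in Q are zeroed so that only the non-fixed weights are updated. The per-parameter mask dictionary is an assumed way of realizing Q; it is not prescribed by the disclosure.

```python
import torch

def retraining_step(model, Q_masks, x, y, criterion, optimizer):
    """One retraining iteration with unified coefficients held fixed.
    Q_masks maps parameter name -> 0/1 tensor (1 = unified/fixed)."""
    optimizer.zero_grad()
    y_hat = model(x)                       # network forward computation
    loss = criterion(y_hat, y)             # target training loss £D
    loss.backward()                        # gradient G(WU(t-1)) via autograd
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in Q_masks and p.grad is not None:
                p.grad *= (1 - Q_masks[name])   # freeze unified coefficients
    optimizer.step()
    return loss.item()
```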
In at least one embodiment, the value of the hyperparameter q (t) increases with increasing t during each iteration, such that more and more weight coefficients are unified and fixed throughout the iterative learning process.
Referring now to FIG. 3, an operational flow diagram illustrating the steps of a method 300 of compressing a neural network model is depicted. In some embodiments, at least one of the process blocks of FIG. 3 may be performed by computer 102 (FIG. 1) and server computer 114 (FIG. 1). In some embodiments, at least one of the process blocks of FIG. 3 may be performed by another device or group of devices separate from or including computer 102 and server computer 114.
At 302, the method 300 includes reordering at least one index corresponding to a multidimensional tensor associated with a neural network.
At 304, the method 300 includes determining a set of weight coefficients associated with at least one reordered index.
In some embodiments, determining the set of weight coefficients associated with the at least one reordered index comprises: quantizing the weight coefficients; and selecting a weight coefficient that minimizes a unification loss value, wherein the unification loss value is associated with the weight coefficient.
In some embodiments, the minimized unification loss value is back-propagated, and the neural network is trained according to the back-propagated minimized unification loss value.
In some embodiments, the minimized unification loss value is back-propagated, and at least one of the weight coefficients is fixed according to the back-propagated minimized unification loss value.
In some embodiments, a gradient and a unification mask associated with the set of weight coefficients are determined, and at least one of the non-fixed weight coefficients is updated according to the gradient and the unification mask.
In some embodiments, the set of weight coefficients is compressed by quantizing and entropy coding the weight coefficients.
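As a hedged illustration of this further-compression step, the sketch below uniformly quantizes a weight tensor and replaces an actual entropy coder with a zeroth-order entropy estimate of the quantized symbols; the step size and the estimate are illustrative assumptions.

```python
import collections
import math
import torch

def quantize_and_estimate_bits(weights, step=0.02):
    """Uniformly quantize weight coefficients and estimate the coded size."""
    q = torch.round(weights / step).to(torch.int64)        # uniform quantization
    counts = collections.Counter(q.flatten().tolist())
    total = q.numel()
    # Zeroth-order entropy (bits/symbol) as a stand-in for entropy coding.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return q, entropy * total                              # symbols, estimated bits
```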
In some embodiments, the set of unified weight coefficients includes weight coefficients having the same absolute value.
At 306, the method 300 includes compressing the model of the neural network according to the determined set of weight coefficients.
It will be appreciated that fig. 3 provides only an illustration of one embodiment and is not meant to imply any limitation as to how the different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements.
With the method for compressing a neural network model described above, the efficiency of compressing the learned weight coefficients is improved and computation using the optimized weight coefficients is accelerated, so that the neural network model can be significantly compressed and its computational efficiency improved.
FIG. 4 is a block diagram 400 of internal and external components of the computer depicted in FIG. 1, according to an example embodiment. It should be understood that FIG. 4 provides only an illustration of one embodiment and is not intended to suggest any limitation as to the environments in which the different embodiments may be implemented. Many modifications to the depicted environments may be made based on design and implementation requirements.
Computer 102 (FIG. 1) and server computer 114 (FIG. 1) may include respective sets of internal components 800A, 800B and external components 900A, 900B as shown in FIG. 4. Each of the sets of internal components 800 includes at least one processor 820, at least one computer-readable RAM 822, and at least one computer-readable ROM 824 on at least one bus 826, as well as at least one operating system 828 and at least one computer-readable tangible storage device 830.
The processor 820 is implemented in hardware, firmware, or a combination of hardware and software. Processor 820 is a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Accelerated Processing Unit (APU), a microprocessor, a microcontroller, a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC), or another type of Processing component. In some embodiments, processor 820 includes at least one processor that can be programmed to perform functions. The bus 826 includes components that allow communication between the internal components 800A, 800B.
At least one operating system 828, the software program 108 (FIG. 1), and the neural network model compression program 116 (FIG. 1) on the server computer 114 (FIG. 1) are stored on at least one of the respective computer-readable tangible storage devices 830 for execution by at least one of the respective processors 820 via at least one of the respective RAMs 822, which typically include cache memory. In the embodiment shown in FIG. 4, each of the computer-readable tangible storage devices 830 is a magnetic disk storage device of an internal hard disk drive. Alternatively, each computer-readable tangible storage device 830 is a semiconductor storage device, such as ROM 824, an EPROM, a flash memory, an optical disc, a magneto-optical disc, a solid-state disc, a Compact Disc (CD), a Digital Versatile Disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable tangible storage device that can store a computer program and digital information.
Each set of internal components 800A, 800B also includes an R/W drive or interface 832 to read from and write to at least one portable computer-readable tangible storage device 936, such as a CD-ROM, DVD, memory stick, magnetic tape, magnetic disk, optical disc, or semiconductor storage device. Software programs, such as the software program 108 (FIG. 1) and the neural network model compression program 116 (FIG. 1), can be stored on at least one respective portable computer-readable tangible storage device 936, read via the respective R/W drive or interface 832, and loaded into the respective hard disk drive 830.
Each set of internal components 800A, 800B also includes a network adapter or interface 836 (e.g., a TCP/IP adapter card), a wireless Wi-Fi interface card; or a 3G, 4G or 5G wireless interface card or other wired or wireless communication link. The software program 108 (fig. 1) and the neural network model compression program 116 (fig. 1) on the server computer 114 (fig. 1) may be downloaded to the computer 102 (fig. 1) from an external computer via a network (e.g., the internet, a local area network, or other wide area network) and a corresponding network adapter or interface 836. From the network adapter or interface 836, the software program 108 and the neural network model compression program 116 on the server computer 114 are loaded into the respective hard disk drive 830. The network may include copper wires, optical fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
Each set of external components 900A, 900B may include a computer display monitor 920, a keyboard 930, and a computer mouse 934. The external components 900A, 900B may also include touch screens, virtual keyboards, touch pads, pointing devices, and other human interface devices. Each set of internal components 800A, 800B also includes device drivers 840 to interface with computer display monitor 920, keyboard 930, and computer mouse 934. The device driver 840, the R/W driver or interface 832, and the network adapter or interface 836 include hardware and software (stored in the storage device 830 and/or ROM 824).
It should be understood in advance that although the present disclosure includes a detailed description of cloud computing, embodiments of the teachings referenced herein are not limited to cloud computing environments. Rather, some embodiments can be implemented in connection with any other type of computing environment, whether now known or later developed.
Cloud computing is a service delivery model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be deployed and released quickly with minimal management effort or interaction with a service provider. The cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
The characteristics are as follows:
On-demand self-service: Cloud consumers can unilaterally provision computing capabilities, such as server time and network storage, automatically as needed without requiring human interaction with the service provider.
Broad network access: Capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center).
Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
The service model is as follows:
Software as a Service (SaaS): The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications may be accessed from various client devices through a thin client interface, such as a web browser (e.g., web-based email). Apart from limited user-specific application configuration settings, the consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities.
Platform as a Service (PaaS): the functionality provided to the consumer is to deploy consumer-created or acquired applications onto the cloud infrastructure, the applications being created using programming languages and tools supported by the provider. The user does not manage or control the underlying cloud infrastructure, including the network, servers, operating system, or storage, but has control over deployed applications and possibly application hosting environment configurations.
Infrastructure as a Service (IaaS): the functionality provided to the consumer is to provide processing, storage, networking and other basic computing resources that enable the consumer to deploy and run arbitrary software, including operating systems and applications, therein. The user does not manage or control the underlying cloud infrastructure, but has control over the operating system, storage, deployed applications, and may have limited control over select network components (e.g., host firewalls).
The deployment model is as follows:
Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports specific communities with common concerns, such as tasks, security requirements, policy and compliance issues. It may be managed by the organization or a third party and may exist internally or externally.
Public cloud: the cloud infrastructure is available to the public or large industry groups and is owned by organizations that sell cloud services.
Mixing cloud: the cloud infrastructure is made up of two or more clouds (private, community, or public) that remain the only entities, but are bound together through standardized or proprietary techniques that enable data and application portability (e.g., cloud explosion for load balancing between clouds).
A cloud computing environment is service oriented, with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes.
Referring to FIG. 5, an exemplary cloud computing environment 500 is depicted. As shown, the cloud computing environment 500 includes at least one cloud computing node 10 with which local computing devices used by cloud consumers, such as, for example, personal digital assistants (PDAs) or cellular telephones 54A, desktop computers 54B, laptop computers 54C, and/or automobile computer systems 54N, may communicate. The cloud computing nodes 10 may communicate with one another. They may be grouped physically or virtually (not shown) in at least one network, such as a private cloud, community cloud, public cloud, or hybrid cloud as described above, or a combination thereof. This allows the cloud computing environment 500 to offer infrastructure as a service, platform as a service, and/or software as a service for which a cloud consumer does not need to maintain resources on a local computing device. It should be appreciated that the types of computing devices 54A-N shown in FIG. 5 are intended to be illustrative only, and that the cloud computing nodes 10 and the cloud computing environment 500 can communicate with any type of computerized device over any type of network and/or network-addressable connection (e.g., using a web browser).

Referring to FIG. 6, a set of functional abstraction layers 600 provided by the cloud computing environment 500 (FIG. 5) is illustrated. It should be understood in advance that the components, layers, and functions shown in FIG. 6 are intended to be illustrative only, and the embodiments are not limited thereto. As shown, the following layers and corresponding functions are provided:
the hardware and software layer 60 includes hardware components and software components. Examples of hardware components include: a host computer 61; a Reduced Instruction Set Computer (RISC) architecture based server 62; a server 63; a blade server 64; a storage device 65; and a network and networking component 66. In some embodiments, the software components include web application server software 67 and database software 68.
The virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: the virtual server 71; a virtual memory 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual client 75.
In one example, the management layer 80 may provide the functions described below. Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing 82 provides cost tracking as resources are utilized within the cloud computing environment and billing or invoicing for the consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. The user portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 85 provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workload layer 90 provides an example of the functionality that may utilize a cloud computing environment. Examples of workloads and functions that may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom instruction delivery 93; data analysis processing 94; transaction processing 95; and neural network model compression 96. Neural network model compression 96 may compress the neural network model.
Some embodiments may be directed to systems, methods, and/or computer-readable media at any possible level of technical detail of integration. The computer-readable medium may include a non-transitory computer-readable storage medium having computer-readable program instructions thereon for causing a processor to perform operations.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., light pulses passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a corresponding computing/processing device, or to an external computer or external storage device via a network (e.g., the internet, a local area network, a wide area network, and/or a wireless network). The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer-readable program code/instructions for carrying out operations may be assembler instructions, Instruction-Set-Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, Field-Programmable Gate Arrays (FPGA), or Programmable Logic Arrays (PLA) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to perform aspects or operations of the present disclosure.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable storage medium storing the instructions includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer-readable media according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises at least one executable instruction for implementing the specified logical function(s). The methods, computer systems, and computer-readable media may include additional blocks, fewer blocks, different blocks, or blocks arranged differently than those depicted in the figures. In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is to be understood that the systems and/or methods described herein may be implemented in various forms of hardware, firmware, or combinations of hardware and software. The actual specialized control hardware or software code used to implement the systems and/or methods is not limiting of these embodiments. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code; it is understood that software and hardware may be designed to implement the systems and/or methods based on the description herein.
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. In addition, as used herein, the articles "a" and "an" are intended to include at least one item and may be used interchangeably with "at least one". Further, as used herein, the term "group" is intended to include at least one item (e.g., related items, unrelated items, combinations of related and unrelated items, etc.) and may be used interchangeably with "at least one". When only one item is intended, the term "one" or similar language is used. Further, as used herein, the terms "has", "have", "having", and the like are intended to be open-ended terms. Further, the phrase "in accordance with" is intended to mean "in accordance with, at least in part," unless explicitly stated otherwise.
The description of the various aspects and embodiments has been presented for purposes of illustration but is not intended to be exhaustive or limited to the disclosed embodiments. Although combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible embodiments. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may be directly dependent on only one claim, the disclosure of possible embodiments includes each dependent claim in combination with every other claim in the set of claims. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein is selected to best explain the principles of the embodiments, the practical application, or improvements relative to the technology found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (15)

1. A method of compressing a neural network model, comprising:
reordering at least one index corresponding to a multidimensional tensor associated with a neural network;
determining a set of weight coefficients associated with at least one reordered index; and
compressing the model of the neural network according to the determined set of weight coefficients.
2. The method of claim 1, wherein the determining the set of weight coefficients associated with at least one reordered index comprises:
quantizing the weight coefficients; and
selecting a weight coefficient that minimizes a unity loss value, wherein the unity loss value is associated with the weight coefficient.
3. The method of claim 2, further comprising back-propagating the minimized unity loss value, the neural network being trained according to the back-propagated minimized unity loss value.
4. The method of claim 2, wherein the minimized unity loss value is back-propagated, and wherein at least one of the weight coefficients is fixed based on the back-propagated minimized unity loss value.
5. The method of claim 4, further comprising determining a gradient and a uniform mask associated with the set of weight coefficients, and updating at least one of the non-fixed weight coefficients according to the gradient and the uniform mask.
6. The method of any of claims 1-5, further comprising compressing the set of weight coefficients by quantizing and entropy coding the weight coefficients.
7. The method according to any of claims 1-5, wherein the determined set of weight coefficients is a unified set in which at least some of the weight coefficients share the same absolute value.
8. A computer system for compressing a neural network model, the computer system comprising:
a reordering module to reorder at least one index corresponding to a multi-dimensional tensor associated with a neural network;
a unification module for determining a set of weight coefficients associated with at least one reordered index; and
a compression module to compress a model of the neural network according to the determined set of weight coefficients.
9. The computer system of claim 8, wherein the unification module comprises:
a quantization module for quantizing the weight coefficients; and
a selection module to select a weight coefficient that minimizes a unity loss value, wherein the unity loss value is associated with the weight coefficient.
10. The computer system of claim 9, further comprising a training module to back-propagate the minimized unity loss value, the neural network being trained according to the back-propagated minimized unity loss value.
11. The computer system of claim 9, wherein the minimized unity loss value is back-propagated, and wherein at least one of the weight coefficients is fixed based on the back-propagated minimized unity loss value.
12. The computer system of claim 11, further comprising an update module configured to determine a gradient and a uniform mask associated with the set of weight coefficients, and update at least one of the non-fixed weight coefficients based on the gradient and the uniform mask.
13. The computer system of claim 8, further comprising a compression module to compress the set of weight coefficients by quantizing and entropy coding the weight coefficients.
14. A non-transitory computer-readable medium having stored thereon a computer program for compressing a neural network model, the computer program for causing at least one computer processor to perform the method of any one of claims 1-7.
15. A computing device comprising a processor and a memory; the memory stores a computer program that, when executed by the processor, causes the processor to perform the method of any one of claims 1 to 7.
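
The following Python sketch is an editorial illustration of the unification and compression flow recited in claims 1 to 7: it reorders the columns of a weight tensor, unifies fixed-size blocks of weight coefficients so that the coefficients in each block share a single absolute value, and accumulates the resulting unity loss. The function names, the block size, and the norm-based reordering criterion are assumptions made for this sketch only; the claims do not prescribe any particular implementation.

# Hypothetical sketch of structured weight unification; names and details are
# illustrative assumptions, not the patented reference implementation.
import numpy as np

def unify_block(block):
    # Force every coefficient in the block to share one absolute value while
    # keeping its sign; the mean absolute value minimizes the squared unity
    # loss for the block.
    magnitude = np.abs(block).mean()
    return np.sign(block) * magnitude

def unity_loss(block, unified):
    # Squared error introduced by unifying the block.
    return float(np.sum((block - unified) ** 2))

def unify_tensor(weights, block_size=4):
    # Flatten all but the last axis, reorder columns by descending L2 norm so
    # that similar columns become adjacent, then unify fixed-size column blocks.
    flat = weights.reshape(-1, weights.shape[-1])
    order = np.argsort(-np.linalg.norm(flat, axis=0))
    reordered = flat[:, order]
    unified = np.empty_like(reordered)
    total_loss = 0.0
    for start in range(0, reordered.shape[1], block_size):
        block = reordered[:, start:start + block_size]
        unified_block = unify_block(block)
        total_loss += unity_loss(block, unified_block)
        unified[:, start:start + block_size] = unified_block
    return unified, order, total_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(8, 16)).astype(np.float32)
    unified, order, loss = unify_tensor(weights)
    print("unity loss:", loss)
    print("distinct magnitudes:", np.unique(np.abs(unified).round(6)).size)

Because the shared magnitude of each block is its mean absolute value, the sketch minimizes the squared unity loss per block while preserving the signs of the original coefficients; the unified tensor could then be quantized and entropy coded as recited in claim 6.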
CN202110066485.4A 2020-01-23 2021-01-19 Method for compressing neural network model, computer system and storage medium Active CN113159312B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202062964996P 2020-01-23 2020-01-23
US62/964,996 2020-01-23
US17/088,061 US20210232891A1 (en) 2020-01-23 2020-11-03 Neural network model compression with structured weight unification
US17/088,061 2020-11-03

Publications (2)

Publication Number Publication Date
CN113159312A true CN113159312A (en) 2021-07-23
CN113159312B CN113159312B (en) 2023-08-18

Family

ID=76878521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110066485.4A Active CN113159312B (en) 2020-01-23 2021-01-19 Method for compressing neural network model, computer system and storage medium

Country Status (1)

Country Link
CN (1) CN113159312B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6699048B2 (en) * 2016-07-19 2020-05-27 日本電信電話株式会社 Feature selecting device, tag related area extracting device, method, and program
GB2574372B (en) * 2018-05-21 2021-08-11 Imagination Tech Ltd Implementing Traditional Computer Vision Algorithms As Neural Networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190130271A1 (en) * 2017-10-27 2019-05-02 Baidu Usa Llc Systems and methods for block-sparse recurrent neural networks
WO2020014590A1 (en) * 2018-07-12 2020-01-16 Futurewei Technologies, Inc. Generating a compressed representation of a neural network with proficient inference speed and power consumption
CN109948794A (en) * 2019-02-28 2019-06-28 清华大学 Neural network structure pruning method, pruning device and electronic equipment

Also Published As

Publication number Publication date
CN113159312B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
US10839255B2 (en) Load-balancing training of recommender system for heterogeneous systems
US11263223B2 (en) Using machine learning to determine electronic document similarity
WO2019100784A1 (en) Feature extraction using multi-task learning
US11443228B2 (en) Job merging for machine and deep learning hyperparameter tuning
US11580671B2 (en) Hash-based attribute prediction for point cloud coding
US20200125926A1 (en) Dynamic Batch Sizing for Inferencing of Deep Neural Networks in Resource-Constrained Environments
JP7398482B2 (en) Dataset-dependent low-rank decomposition of neural networks
WO2022251317A1 (en) Systems of neural networks compression and methods thereof
Ma et al. An image enhancing pattern-based sparsity for real-time inference on mobile devices
US11811429B2 (en) Variational dropout with smoothness regularization for neural network model compression
JP7368623B2 (en) Point cloud processing method, computer system, program and computer readable storage medium
WO2022043798A1 (en) Automated query predicate selectivity prediction using machine learning models
US11935271B2 (en) Neural network model compression with selective structured weight unification
US11496775B2 (en) Neural network model compression with selective structured weight unification
US20210232891A1 (en) Neural network model compression with structured weight unification
CN113052309A (en) Method, computer system and storage medium for compressing neural network model
US20210201157A1 (en) Neural network model compression with quantizability regularization
CN114616825B (en) Video data decoding method, computer system and storage medium
US20230222358A1 (en) Artificial intelligence operations adaptive multi-granularity event grouping
CN113159312B (en) Method for compressing neural network model, computer system and storage medium
JP2024504179A (en) Method and system for lightweighting artificial intelligence inference models
CN113112012B (en) Method, apparatus and computer device for video image processing
CN113286143A (en) Method, computer system and storage medium for compressing neural network model
CN115427960A (en) Relationship extraction using fully dependent forests
US20230419088A1 (en) Bundling hypervectors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40048702

Country of ref document: HK

GR01 Patent grant