CN113095486A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN113095486A
CN113095486A
Authority
CN
China
Prior art keywords
tensor
tensors
neural network
ith
bit width
Prior art date
Legal status
Pending
Application number
CN202110436569.2A
Other languages
Chinese (zh)
Inventor
李国齐
李东贤
杨玉宽
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202110436569.2A
Publication of CN113095486A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method includes: acquiring a neural network for image processing, wherein the neural network comprises a plurality of network layers, and the initial first weight parameter of each network layer comprises a multidimensional tensor and is floating point type data; performing tensor decomposition on the first weight parameter of a network layer to obtain a plurality of kernel tensors; quantizing the plurality of kernel tensors according to a preset bit width to obtain a second weight parameter, wherein the second weight parameter comprises fixed point type data with the preset bit width; and invoking the neural network to process a target image to obtain an image processing result. By performing tensor decomposition on the weight parameters of the neural network to obtain a plurality of kernel tensors and quantizing the plurality of kernel tensors into fixed point type data with the preset bit width, the method and the apparatus can reduce the data amount of the neural network parameters, reduce the storage space of the neural network, improve its compression ratio, and increase its calculation speed.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Neural-network-based image or video data processing algorithms require a deep neural network (DNN) model to extract data features in order to complete tasks such as classification, detection, and recognition. During task execution, large-scale tensors such as vectors or matrices are inevitably introduced to carry out multiply-add operations. Considering computation speed and the limitation of storage resources, it is usually necessary to decompose the high-dimensional tensors that arise during neural network operation.
In the related art, a tensor decomposition algorithm is used to decompose the high-dimensional tensors arising during neural network operation; that is, decomposing a multidimensional kernel reduces the number of trainable parameters in the neural network and weakens the high redundancy of data in the DNN. Commonly used tensor decomposition algorithms include CANDECOMP/PARAFAC (CP) decomposition, Tucker decomposition, Tensor-Train (TT) decomposition, and Tensor-Ring (TR) decomposition. Among them, CP decomposition, also called canonical decomposition (CANDECOMP) or parallel factor analysis (PARAFAC), decomposes a tensor into a sum of several rank-1 tensors; Tucker decomposition decomposes a tensor into a core tensor and factor matrices along each dimension of the core tensor; TT decomposition decomposes a tensor into a product of a series of three-dimensional kernel tensors and two matrices (i.e., two-dimensional kernel tensors), with a chain structure formed between the kernel tensors; and TR decomposition replaces the two matrices in TT decomposition with three-dimensional kernel tensors, so that a tensor can be decomposed into a continuous product of a series of third-order kernel tensors, with a ring structure formed between the kernel tensors.
However, due to dimensional limitations, CP decomposition and Tucker decomposition cannot flexibly describe the correlation between data in different dimensions of a tensor; they have certain limitations and require more storage resources. In the related art, performing TT decomposition or TR decomposition on the high-dimensional tensor can improve the performance of the neural network to a certain extent; however, for a neural network (especially a deep neural network), many neural network parameters are involved in storage and calculation, and performing only TT decomposition or TR decomposition on the high-dimensional tensor has a limited effect on improving the performance of the neural network.
Therefore, how to further improve the performance of the neural network, especially the deep neural network, is a problem worthy of research.
Disclosure of Invention
In view of this, the present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, which can reduce the data amount of the neural network parameters, reduce the storage space of the neural network, improve the compression ratio of the neural network, increase the calculation speed of the neural network, and are widely applicable.
According to an aspect of the present disclosure, there is provided an image processing method, the method including: acquiring a neural network, wherein the neural network is used for image processing and comprises a plurality of network layers, and the initial first weight parameter of each network layer comprises a multidimensional tensor and is floating point type data; performing tensor decomposition on the first weight parameter of any network layer to obtain a plurality of kernel tensors, wherein the kernel tensors comprise two-dimensional tensors and/or three-dimensional tensors; quantizing the plurality of kernel tensors according to a preset bit width to obtain a quantized second weight parameter, wherein the second weight parameter comprises fixed point type data with the preset bit width; and invoking the neural network to process a target image to obtain an image processing result of the target image.
In a possible implementation manner, the invoking the neural network to process a target image to obtain an image processing result of the target image includes: for the ith network layer, convolving the (i-1)th-level processing result output by the (i-1)th network layer according to the second weight parameter of the ith network layer to obtain an ith convolution result, wherein 1 ≤ i ≤ N, N is the number of network layers of the neural network, and the 0th-level processing result is the target image; performing batch normalization on the ith convolution result to obtain an ith normalization result; and activating and quantizing the ith normalization result to obtain an ith processing result, and determining the Nth processing result as the image processing result of the target image.
In a possible implementation manner, the quantizing the plurality of kernel tensors according to a preset bit width to obtain a quantized second weight parameter includes: quantizing the plurality of kernel tensors respectively according to the preset bit width to obtain a plurality of quantized first tensors; for the kth first tensor, fusing the kth first tensor with the (k-1)th second tensor to obtain a kth third tensor, wherein 2 ≤ k ≤ K and K is the number of first tensors, and the 1st second tensor is the 1st first tensor; quantizing the kth third tensor according to the preset bit width to obtain a kth second tensor; and determining the Kth second tensor as the second weight parameter.
In a possible implementation manner, the performing tensor decomposition on the first weight parameter of the network layer to obtain a plurality of kernel tensors includes: performing tensor decomposition on the first weight parameter by means of tensor train (TT) decomposition to obtain the plurality of kernel tensors.
In a possible implementation manner, the performing batch normalization on the ith convolution result to obtain an ith normalization result includes: quantizing the first batch normalization parameters according to the preset bit width to obtain quantized second batch normalization parameters, wherein the second batch normalization parameters comprise fixed point type data with the preset bit width; and performing batch normalization on the ith convolution result according to the second batch normalization parameters to obtain the ith normalization result.
In a possible implementation manner, the activating and quantizing the ith normalization result to obtain an ith processing result includes: activating the ith normalization result to obtain an ith activation result; and quantizing the ith activation result according to the preset bit width to obtain the ith processing result, wherein the ith processing result comprises fixed point type data with the preset bit width.
In a possible implementation manner, the preset bit width is configured according to the dimensions of the plurality of kernel tensors; kernel tensors with different dimensions are quantized into first tensors with different bit widths, and the preset bit width is greater than or equal to a bit width threshold.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: an acquisition module configured to acquire a neural network, wherein the neural network is used for image processing and comprises a plurality of network layers, and the initial first weight parameter of each network layer comprises a multidimensional tensor and is floating point type data; a tensor decomposition module configured to perform tensor decomposition on the first weight parameter of any network layer to obtain a plurality of kernel tensors, the kernel tensors comprising two-dimensional tensors and three-dimensional tensors; a quantization module configured to quantize the plurality of kernel tensors according to a preset bit width to obtain a quantized second weight parameter, wherein the second weight parameter comprises fixed point type data with the preset bit width; and an invoking module configured to invoke the neural network to process a target image to obtain an image processing result of the target image.
According to another aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
By performing tensor decomposition on the weight parameters of the neural network to obtain a plurality of kernel tensors, and quantizing the plurality of kernel tensors into fixed point type data with a preset bit width, the method can reduce the data amount of the neural network parameters, reduce the storage space of the neural network, improve the compression ratio of the neural network, and increase its calculation speed.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image processing method of an embodiment of the present disclosure.
Figure 2 shows a schematic diagram of a tensor decomposition of an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of an exemplary image processing method of an embodiment of the present disclosure.
Fig. 4a and 4b show schematic diagrams of the distribution of the core tensor data of an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an image processing apparatus of an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of a terminal for data processing, according to an example embodiment.
FIG. 7 illustrates a block diagram of a server for data processing, according to an exemplary embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Owing to the rapid development of computing clusters such as GPUs (Graphics Processing Units), convolutional neural networks (CNNs) have been successfully applied in various fields, especially in the field of deep learning. Despite many advances, training and inference of large-scale convolutional neural networks still require significant computational and memory resources, making them difficult to use widely in resource-limited portable devices.
In the related art, with little influence on the overall performance of the convolutional neural network, tensor decomposition can be used to decompose a high-dimensional tensor into a plurality of low-dimensional tensors and reduce the number of parameters of the neural network, thereby compressing the neural network and improving its performance. However, due to the curse of dimensionality, the common CP decomposition and Tucker decomposition have key limitations, and merely performing TT decomposition or TR decomposition on a high-dimensional tensor, as in the related art, has a limited effect on improving the performance of a neural network. Therefore, there is a need to further improve the performance of neural networks, particularly deep neural networks.
In view of this, the present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, in which a plurality of kernel tensors are obtained by performing tensor decomposition on the weight parameters of a neural network, and the plurality of kernel tensors are quantized into fixed point type data with a preset bit width, thereby reducing the data amount of the neural network parameters, reducing the storage space of the neural network, improving its compression ratio, and increasing its calculation speed.
Fig. 1 shows a flowchart of an image processing method of an embodiment of the present disclosure.
As shown in fig. 1, in one possible implementation, the image processing method includes:
step S101, obtaining a neural network, wherein the neural network is used for image processing and comprises a plurality of network layers, and an initial first weight parameter of each network layer comprises a multidimensional tensor and is floating point type data.
In one possible implementation, the neural network may be acquired by an electronic device. The neural network may be a neural network model. The electronic device may be a terminal or a server; the terminal may be a mobile phone, a tablet computer, a desktop computer, or the like, and the server may be a single server, a server cluster formed by a plurality of servers, or a cloud computing service center. The electronic device may include a neural network chip for storing neural network parameters and performing neural network calculations.
In one possible implementation, the neural network may be used for image or video processing. For example, an application program for processing the target image may be installed on the electronic device to obtain the neural network, so that feature extraction is performed on the input target image using the neural network to obtain an image processing result of the target image. The neural network includes, but is not limited to, any one of a convolutional neural network, an artificial neural network, a recurrent neural network, a long short-term memory network, a deep reinforcement learning network, a graph neural network, and a tensor neural network. It should be understood by those skilled in the art that the present disclosure is not limited as to the type of neural network.
In one possible implementation, the neural network may include a plurality of network layers, and the initial first weight parameter of each network layer includes a multidimensional tensor and is floating point type data.
The tensor is a data structure commonly used in neural networks; it describes data from the perspective of dimensions and of how the elements in the tensor are arranged and combined, and can cover multiple types of data such as scalars, vectors, and matrices. For example, a scalar may be a 0-dimensional tensor, a vector may be a 1-dimensional tensor, and a matrix may be a 2-dimensional tensor. In the embodiments of the present disclosure, a tensor of 3 or more dimensions may be referred to as a high-dimensional tensor or a multidimensional tensor. Floating point type data or fixed point type data, by contrast, describe data from the perspective of how the data is stored. For example, a decimal may be described using a floating point type, in which the decimal point is not fixed, or using a fixed point type, in which the decimal point is fixed; when the fixed point type is used, some precision may be lost. It is worth noting that, although the perspectives of description differ, both tensors and floating point type data can describe the first weight parameter, which may therefore also be referred to as a floating point tensor.
Taking a convolutional neural network as an example, the convolutional neural network may include an input layer, a hidden layer, and an output layer. The hidden layer may include convolutional layers, pooling layers, and fully-connected layers, and each layer in the convolutional neural network may include a plurality of neurons. In the convolution operation, each convolutional layer corresponds to a group of convolution kernels, and each convolution kernel comprises a plurality of weight values. The weight values in each convolutional layer may constitute the weight tensor of that convolutional layer, i.e., the initial first weight parameter. For simplicity, the first weight parameter may also be directly called a weight value or a weight.
Further, the first weight parameter may be a multidimensional tensor. For example, the four dimensions of the first weight parameter may be the number of convolution kernels in the convolutional layer (i.e., the number of channels of the feature map output by the convolutional layer), the number of channels of the image input to the convolutional layer, the height of the convolution kernels, and the width of the convolution kernels. For instance, if a convolutional layer includes 32 convolution kernels, each with 4 input channels and a spatial size of 4 × 4, the dimensions of the first weight parameter of that convolutional layer may be represented as (32, 4, 4, 4), and the number of weight values in the first weight parameter is 32 × 4 × 4 × 4.
Further, the first weight parameter may be a floating point tensor. For example, each data element in the first weight parameter may be of the float32 or float16 type.
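For illustration only (this sketch is not part of the original disclosure; the NumPy layout and variable names are assumptions), such a first weight parameter could be built as follows:

```python
import numpy as np

# Sketch of the example above: a convolutional layer with 32 convolution
# kernels, 4 input channels, and a 4 x 4 kernel size. The (out, in, h, w)
# layout is an assumption for illustration.
first_weight = np.random.randn(32, 4, 4, 4).astype(np.float32)

print(first_weight.ndim)    # 4  -- a multidimensional tensor
print(first_weight.shape)   # (32, 4, 4, 4)
print(first_weight.size)    # 2048 = 32 * 4 * 4 * 4 weight values
print(first_weight.dtype)   # float32 -- floating point type data
```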
In one possible implementation, the neural network is used for image processing, and may include two processes: a Forward Propagation (FP) procedure and a Back Propagation (BP) procedure. The forward propagation process can carry out multiplication and accumulation operation on the input image of the neural network and the pre-configured weight to obtain a preliminary output result of the neural network; the back propagation process can compare the preliminary output result of the neural network with the actual true value, calculate the error between the preliminary output result of the neural network and the true value, and adjust the weight of the neural network according to the error, so that the output result of the neural network approaches the actual true value. In addition, the forward propagation process can be used for neural network reasoning, also called reasoning process; the back propagation process may be used for neural network training, also known as a training process.
In one possible implementation, the forward propagation process may include a batch normalization process. Batch Normalization (BN) can be used in the forward propagation process of the neural network to improve the performance and stability of the neural network, so as to overcome the training difficulties caused by deepening the number of network layers. In the batch normalization process, the output of a certain layer of the neural network can be transformed by a specific function to obtain a normalized value. For example, batch normalization may be performed before the activation function: the convolutional layer output is normalized to a normal distribution with a mean of 0 and a variance of 1, and the normalized data is then input into the activation function for activation.
In one possible implementation, the network parameters of the neural network may include at least one of a first weight parameter, a neuron value, first batch normalization parameters, an activation function value, a back propagation error value, a neuron gradient value, and a weight update value. In the image processing procedure corresponding to the neural network, the network parameters of the forward propagation process may include the first weight parameter, the neuron value, the first batch normalization parameters, the activation function value, and the like; the network parameters of the back propagation process may include the back propagation error value, the neuron gradient value, the weight update value, and the like. It should be noted that, besides the first weight parameter, other network parameters (e.g., the first batch normalization parameters) may also comprise multidimensional tensors and be floating point type data, and may be tensor-decomposed or quantized according to the image processing method provided by the embodiments of the present disclosure. Furthermore, a network parameter may be fixed point type data. It should be understood by those skilled in the art that many parameters are involved in the neural network calculation process; for example, the network parameters may further include matrices and 3-dimensional tensors generated by processes such as data conversion. The embodiments of the present disclosure take the first weight parameter among the network parameters as an example and do not limit the type of the network parameters.
For example, during forward propagation of the neural network, the neuron values of the input layer may form a column vector, i.e., a 1-dimensional tensor; the neuron values and the preset weight values can be subjected to multiply-accumulate operations, and the results are then sent to a batch normalization layer, where they are normalized using the first batch normalization parameters to obtain the output of the batch normalization layer; finally, the output of the batch normalization layer is fed into an activation function for nonlinear processing to obtain activation function values, which may be scalars or multidimensional tensors.
For another example, during back propagation of the neural network, after the preliminary output result of the neural network is obtained, it may be compared with the actual ground truth to obtain an error value (i.e., a back propagation error value), and a backward iteration is performed according to the error value to update the weight values so that the output of the neural network approaches the ground truth; the value used to update or adjust a weight value of the forward propagation process is the weight update value. In addition, during back propagation, the magnitude of the error between the output of the neural network and the ground truth can be evaluated with a loss function. To minimize the loss function, a gradient descent algorithm may be used to compute gradients and search for the minimum error along the descending gradient direction before updating the weight values; the value obtained by computing the gradient of the loss with respect to a network parameter (such as a weight value or a bias) is the neuron gradient value.
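As a minimal sketch of the gradient-descent update just described (the learning rate, the random stand-in gradient, and all names are illustrative assumptions, not the patent's training procedure):

```python
import numpy as np

def backward_step(w, grad_w, lr=0.01):
    """One back propagation update: grad_w is the neuron gradient value of the
    loss with respect to w, and lr * grad_w plays the role of the weight update
    value."""
    return w - lr * grad_w  # move along the descending direction of the gradient

w = np.random.randn(4, 4).astype(np.float32)       # a weight value
grad_w = np.random.randn(4, 4).astype(np.float32)  # stand-in for a real gradient
w = backward_step(w, grad_w)                       # output moves toward the truth
```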
Step S102, performing tensor decomposition on the first weight parameter of any network layer to obtain a plurality of kernel tensors, wherein the kernel tensors comprise two-dimensional tensors and/or three-dimensional tensors.
In a possible implementation manner, for any network layer, tensor decomposition is performed on the first weight parameter of that network layer to obtain a plurality of kernel tensors. The first weight parameter has a higher dimension and can be a high-dimensional tensor; the decomposed kernel tensors have lower dimensions and can be low-dimensional tensors. That is, tensor decomposition can decompose a high-dimensional tensor into a series of low-dimensional kernel tensors. A kernel tensor is also called a core tensor or a factor matrix and may be, for example, a two-dimensional matrix or a three-dimensional tensor. For example, when tensor decomposition is performed on the first weight parameter by means of TT decomposition, the plurality of kernel tensors may include two-dimensional tensors and three-dimensional tensors; when TR decomposition is used, the plurality of kernel tensors may include three-dimensional tensors. It should be noted that, similar to matrix decomposition, the merged (or recovered) kernel tensors are not exactly equal to the original higher-dimensional tensor; there is a certain loss in the tensor decomposition process, i.e., the merged kernel tensors are approximately equal to the higher-dimensional tensor.
In a possible implementation manner, performing tensor decomposition on the first weight parameter of the network layer to obtain a plurality of kernel tensors may include: performing tensor decomposition on the first weight parameter by means of tensor train (TT) decomposition to obtain the plurality of kernel tensors.
In one possible implementation, the plurality of kernel tensors are obtained by performing tensor decomposition on the first weight parameter by means of TT decomposition (i.e., the Tensor-Train decomposition algorithm). The head kernel tensor and the tail kernel tensor of the plurality of kernel tensors may be two-dimensional matrices (i.e., two-dimensional tensors), and the remaining kernel tensors may be three-dimensional tensors. It should be understood by those skilled in the art that, although the embodiments of the present disclosure describe the image processing method by taking TT decomposition as an example, in other embodiments other types of tensor decomposition, such as Tensor-Ring decomposition, may be used to decompose the first weight parameter; the present disclosure does not limit the manner of tensor decomposition.
Compared with CP decomposition and Tucker decomposition, performing tensor decomposition on the first weight parameter by means of tensor train (TT) decomposition has the advantages that TT decomposition is insensitive to the dimensionality of the tensor, has a simple structure and few limiting factors, and is highly universal, so its application range is wider.
Figure 2 shows a schematic diagram of a tensor decomposition of an embodiment of the present disclosure.
In one possible implementation, the tensor $\mathcal{W}$ shown in FIG. 2 may be the first weight parameter among the network parameters. Referring to FIG. 2, $\mathcal{W}$ may be a 5-dimensional tensor, where $K_H$, $K_W$, $K_D$, $M$ and $N$ are its five dimensions; for example, $K_H$, $K_W$ and $K_D$ may characterize the height, width and depth of the convolution kernel, and $M$ and $N$ may characterize the numbers of input and output channels, respectively. It is worth noting that, before performing tensor decomposition on $\mathcal{W}$, $M$ and $N$ in FIG. 2 may be factorized according to the following formulas:

$$M = \prod_{k=1}^{d} m_k, \qquad N = \prod_{k=1}^{d} n_k$$

Each product $m_k n_k$ obtained after the factorization can be used to represent one dimension of a kernel tensor. For example, the second kernel tensor may have three dimensions $r_1$, $r_2$ and $m_1 n_1$, where $m_1 n_1$ is one dimension of that kernel tensor.
In a possible implementation manner, tensor decomposition is performed on the first weight parameter by means of tensor train (TT) decomposition to obtain the plurality of kernel tensors. Referring to FIG. 2, TT decomposition may decompose the 5-dimensional tensor $\mathcal{W}$ into a series of lower-dimensional tensors (i.e., two-dimensional or three-dimensional tensors); that is, the original d-dimensional tensor can be decomposed into a contracted form of a plurality of kernel tensors. The head tensor $\mathcal{G}_0$ and the tail tensor $\mathcal{G}_d$ may be two-dimensional tensors (i.e., two-dimensional matrices), and the remaining decomposed tensors may be three-dimensional tensors. In this notation, one dimension $r_0$ of the head tensor and one dimension $r_{d+1}$ of the tail tensor equal 1, meaning that the head tensor actually has only two effective dimensions. Thus the head tensor $\mathcal{G}_0 \in \mathbb{R}^{r_0 \times m_0 n_0 \times r_1}$ is effectively a two-dimensional tensor, and the second kernel tensor $\mathcal{G}_1 \in \mathbb{R}^{r_1 \times m_1 n_1 \times r_2}$ is a three-dimensional tensor. The TT decomposition in FIG. 2 may be schematically approximated by the following formula:

$$\mathcal{W} \approx \mathcal{G}_0 \times^1 \mathcal{G}_1 \times^1 \cdots \times^1 \mathcal{G}_d \qquad (1)$$

where $\mathcal{W}$ represents a d-dimensional tensor, whose dimension is generally high, and $\mathcal{G}_0, \mathcal{G}_1, \ldots, \mathcal{G}_d$ represent the tensors obtained after tensor decomposition of $\mathcal{W}$, whose dimensions are generally low; they may also be regarded as a sequence of tensors. Taking TT decomposition as an example, $\mathcal{G}_1, \ldots, \mathcal{G}_{d-1}$ may be 3-dimensional tensors, $\mathcal{G}_0$ and $\mathcal{G}_d$ may be 2-dimensional tensors, and each of them may be referred to as a kernel tensor (or TT core). The operation symbol $\times^1$ represents the product of the decomposed kernel tensors: the decomposed kernel tensors can be merged through $\times^1$; for example, the corresponding elements of two adjacent kernel tensors may be multiplied and accumulated to obtain a merged tensor. Notably, the superscript 1 at the upper right corner of $\times^1$ is only schematic and does not affect the actual meaning of the operator, which represents contraction or merging; for example, the operator $\times$ may be used without the superscript, or the index may be placed at the lower right corner as $\times_1$ to indicate merging.
Step S103, quantizing the plurality of kernel tensors according to a preset bit width to obtain a quantized second weight parameter, where the second weight parameter includes fixed point type data with the preset bit width. The preset bit width may be configured in advance, for example, as 8 bits or 15 bits; the specific number of bits of the preset bit width is not limited in this disclosure. It should be understood by those skilled in the art that, in addition to quantizing the kernel tensors obtained by tensor decomposition, the embodiments of the present disclosure may also quantize other neural network parameters, for example, the image data input to a convolutional layer or the first batch normalization parameters; the present disclosure does not limit which neural network parameters are quantized.
In one possible implementation, the plurality of core tensors may be floating-point type data or fixed-point type data. The fixed point type data of the preset bit width may include integer type data, for example, int8 type data.
In one possible implementation, the electronic device may quantize (also called discretize) the plurality of kernel tensors by the following formula to obtain the quantized first tensors; that is, the plurality of kernel tensors may be converted into fixed point type data with the preset bit width based on the following formula:

$$\mathrm{Quan}(x, \mathrm{bits}) = 2^{1-\mathrm{bits}} \cdot \mathrm{round}\left(2^{\mathrm{bits}-1} \cdot x\right) \qquad (2)$$

where $x$ may be a kernel tensor, which may be floating point type data; $\mathrm{round}(\cdot)$ is a rounding function; and $\mathrm{bits}$ is the destination bit width of $x$ (i.e., the preset bit width), which may be preconfigured. It should be understood by those skilled in the art that the rounding function in formula (2) may also be an upward rounding function, such as the ceil(·) function, or a downward rounding function, such as the floor(·) function; the present disclosure does not limit the type of rounding function.

It is worth noting that $x$ may also be another network parameter, such as the first batch normalization parameters. In addition, formula (2) may convert a floating point tensor into fixed point data with the preset bit width, or convert a fixed point tensor into fixed point data with the preset bit width, which is not limited in the present disclosure.
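A direct Python reading of formula (2) might look like the following sketch (the function name mirrors Quan(·); the grid step $2^{1-\mathrm{bits}}$ matches the minimum interval $\Delta$ used later in formula (4)):

```python
import numpy as np

def quan(x, bits):
    """Sketch of formula (2): snap x onto a fixed-point grid whose minimum
    interval is delta = 2**(1 - bits). np.round may be swapped for ceil/floor."""
    delta = 2.0 ** (1 - bits)
    return np.round(x / delta) * delta

core = np.random.uniform(-1, 1, (4, 9, 4)).astype(np.float32)  # a kernel tensor
first_tensor = quan(core, bits=8)   # fixed point type data, 8-bit preset width
print(np.max(np.abs(first_tensor - core)) <= 2.0 ** (1 - 8) / 2)  # True
```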
By performing tensor decomposition on the first weight parameter of the neural network and quantizing the kernel tensors obtained after decomposition, the embodiments of the present disclosure can compress a neural network model used in image or video processing, reduce the storage of the neural network parameters, and increase the calculation speed of the neural network.
In one possible implementation, the quantization may include inference quantization and training quantization. Inference quantization is the process of quantizing the network parameters involved when the neural network is used for inference; corresponding to the forward propagation process, these parameters may include the first weight parameter, the neuron value, the first batch normalization parameters, the activation function value, and the like. Training quantization is the process of quantizing the network parameters involved when the neural network is being trained; corresponding to the back propagation process, these parameters may include the back propagation error value, the neuron gradient value, the weight update value, and the like.
In the following, the embodiments of the present disclosure describe the quantization process by taking inference quantization (i.e., quantization of the parameters of the forward propagation process) as an example. Those skilled in the art should understand that both inference quantization and training quantization can be based on the Quan(·) function in formula (2), and that the quantization of network parameters in the back propagation process is similar to that in the forward propagation process and will not be described again.
In a possible implementation manner, the quantizing the plurality of kernel tensors according to a preset bit width to obtain a quantized second weight parameter may include: quantizing the plurality of kernel tensors respectively according to the preset bit width to obtain a plurality of quantized first tensors; for the kth first tensor, fusing the kth first tensor with the (k-1)th second tensor to obtain a kth third tensor, wherein 2 ≤ k ≤ K and K is the number of first tensors, and the 1st second tensor is the 1st first tensor; quantizing the kth third tensor according to the preset bit width to obtain a kth second tensor; and determining the Kth second tensor as the second weight parameter.
In a possible implementation manner, the plurality of kernel tensors are quantized respectively according to the preset bit width to obtain a plurality of quantized first tensors. The quantization of each kernel tensor may be performed based on formula (2). For example, the plurality of kernel tensors may be obtained by performing tensor decomposition on the first weight parameter among the network parameters, and they may be quantized respectively based on formula (2) to obtain the plurality of quantized first tensors.
In a possible implementation manner, for the kth first tensor, the kth first tensor and the (k-1)th second tensor are fused to obtain a kth third tensor, where 2 ≤ k ≤ K and K is the number of first tensors, and the 1st second tensor is the 1st first tensor. For example, when k = 2, the 2nd first tensor may be fused with the 1st second tensor (that is, the 1st first tensor) to obtain the 2nd third tensor.
In a possible implementation manner, the electronic device may quantize the third tensor according to the preset bit width by the following formula to obtain a second tensor, where the second tensor may be fixed point type data with a preset range and the preset bit width:

$$\mathrm{Quan}_K(x, \mathrm{bits}) = \mathrm{clip}\big(\mathrm{Quan}(x, \mathrm{bits}),\ -1+\Delta,\ 1-\Delta\big) \qquad (3)$$

where

$$\Delta = 2^{1-\mathrm{bits}} \qquad (4)$$

In formula (3), $x$ may be the third tensor, i.e., the tensor obtained by fusing the first tensor and the second tensor, and it may be floating point data or fixed point data; the clip(·) function projects the third tensor $x$ onto a preset interval between a minimum value and a maximum value, i.e., it is a truncation function used to control the representation range of the second tensor; bits is the preset bit width; and $\Delta$ represents the minimum interval between elements of the third tensor, which may be set according to the preset bit width based on formula (4). It is worth noting that the Kth second tensor may be determined as the second weight parameter, and the second weight parameter may be used as a convolution kernel in the convolution operation.

In a possible implementation manner, when the bit width of the first tensors is the preset bit width bits, the bit width of the second tensor may be the same as that of the first tensors, and the bit width of the third tensor may be (2 × bits − 1). Referring to formula (2), for two first tensors whose preset bit width is bits, the minimum element interval of each is $\Delta = 2^{1-\mathrm{bits}}$; the minimum interval of the third tensor obtained by fusing them is therefore $\Delta \times \Delta = 2^{1-(2\,\mathrm{bits}-1)}$, so the bit width of the third tensor may be (2 × bits − 1). For example, when the bit width of each first tensor is 8 bits, the bit width of the second tensor is also 8 bits, and the bit width of the third tensor obtained by fusing the 2nd first tensor with the 1st first tensor may be 15 bits. Those skilled in the art will appreciate that the present disclosure does not limit the manner of tensor fusion.
In a possible implementation manner, the kth third tensor is quantized according to the preset bit width to obtain the kth second tensor, and the Kth second tensor is determined as the second weight parameter. The quantization of the kth third tensor may be based on formula (3). For example, when k = 2, the 2nd first tensor may be fused with the 1st second tensor (i.e., the 1st first tensor) to obtain the 2nd third tensor, which is quantized based on formula (3) to obtain the 2nd second tensor; k is then incremented by 1, and so on until the Kth second tensor is obtained. Taking the first tensors as the kernel tensors obtained by tensor decomposition of the first weight parameter as an example, quantizing the plurality of kernel tensors according to the preset bit width based on formula (2) to obtain the plurality of first tensors is equivalent to a compression process for the first weight parameter, while fusing the quantized first tensors with the second tensors to obtain third tensors and then quantizing the third tensors based on formula (3) to obtain the second weight parameter is equivalent to a decompression (i.e., recovery) process for the first weight parameter. In addition, taking 4 kernel tensors as an example, after the 4 kernel tensors are quantized into 4 first tensors, the second weight parameter can be obtained by performing fusion and quantization 3 times.
In the embodiments of the present disclosure, a plurality of kernel tensors are obtained by performing tensor decomposition on the first weight parameter, the plurality of kernel tensors are quantized to obtain a plurality of first tensors, and the plurality of first tensors are fused and quantized to obtain the second weight parameter, which can reduce the data amount of the neural network parameters and increase the calculation speed of the neural network. In addition, fusing the plurality of first tensors yields data with a higher bit width, which reduces the time consumption of the neural network, and the method is simple and convenient.
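The compress-then-recover loop just summarized can be sketched as follows (quan is the formula (2) sketch above; the clipping follows formulas (3) and (4), and a rank contraction stands in for tensor fusion, all as illustrative assumptions):

```python
import numpy as np

def quan(x, bits):                        # as in the sketch of formula (2)
    delta = 2.0 ** (1 - bits)
    return np.round(x / delta) * delta

def quan_k(x, bits):
    """Sketch of formulas (3)-(4): quantize, then clip to [-1 + delta, 1 - delta]."""
    delta = 2.0 ** (1 - bits)
    return np.clip(quan(x, bits), -1.0 + delta, 1.0 - delta)

def second_weight_parameter(kernel_tensors, bits=8):
    # Quantize every kernel tensor into a first tensor (formula (2)).
    first = [quan(g, bits) for g in kernel_tensors]
    second = first[0]                     # 1st second tensor = 1st first tensor
    for t in first[1:]:                   # k = 2 .. K
        third = np.tensordot(second, t, axes=([-1], [0]))  # k-th third tensor,
        second = quan_k(third, bits)      # bit width up to 2*bits - 1, requantized
    return second                         # the K-th second tensor

cores = [np.random.uniform(-1, 1, s) for s in [(1, 4, 3), (3, 4, 3), (3, 4, 1)]]
w2 = second_weight_parameter(cores)       # usable as a convolution kernel
print(w2.shape)                           # (1, 4, 4, 4, 1)
```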
Step S104, invoking the neural network to process the target image to obtain an image processing result of the target image.
In a possible implementation manner, the electronic device may invoke the neural network to process the target image, so as to obtain an image processing result of the target image. The image processing result can be used for further acquiring the image characteristics of the target image.
In one possible implementation, the network parameters are obtained during the process of invoking the neural network to process the target image. The network parameters may include at least one of a first weight parameter, a neuron value, first batch normalization parameters, an activation function value, a back propagation error value, a neuron gradient value, and a weight update value.
In a possible implementation manner, invoking the neural network to process a target image to obtain an image processing result of the target image may include: for the ith network layer, convolving the (i-1)th-level processing result output by the (i-1)th network layer according to the second weight parameter of the ith network layer to obtain an ith convolution result, wherein 1 ≤ i ≤ N, N is the number of network layers of the neural network, and the 0th-level processing result is the target image; performing batch normalization on the ith convolution result to obtain an ith normalization result; and activating and quantizing the ith normalization result to obtain an ith processing result, and determining the Nth processing result as the image processing result of the target image.
In a possible implementation manner, for the ith network layer, the (i-1)th-level processing result output by the (i-1)th network layer is convolved according to the second weight parameter of the ith network layer to obtain the ith convolution result, where 1 ≤ i ≤ N, N is the number of network layers of the neural network, and the 0th-level processing result is the target image. For example, when i = 1, the 0th-level processing result is the target image; at this time, the second weight parameter of the 1st network layer is used as a convolution kernel to convolve the target image, obtaining the 1st convolution result.
In a possible implementation manner, the target image may be quantized in advance based on formula (2) to obtain a target image with a preset bit width. For example, the target image may be a 4-dimensional tensor, and the 4 dimensions may be the number of samples, the image height, the image width, and the number of channels, respectively. The preset bit width can be 8 bits, that is, the target image can be quantized to a target image with a bit width of 8 bits in advance.
In a possible implementation manner, the ith convolution result is batch-normalized to obtain an ith normalization result; the ith normalization result is activated and quantized to obtain an ith processing result, and the Nth processing result is determined as the image processing result of the target image. The ith convolution result may be used as the input of the batch normalization layer, and each convolution result may be batch-normalized by the batch normalization layer. The batch normalization layer may precede the activation layer, i.e., the output of the batch normalization layer may be the input of the activation layer. The output of the activation layer may then be quantized to obtain the processing result corresponding to the convolution result, and the processing result last output is determined as the image processing result of the target image.
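Putting the stages of this forward pass in order, one layer might be sketched as below; the 1 × 1 convolution, the ReLU choice, and the scalar batch-normalization parameters are simplifying assumptions, and only the ordering convolution → batch normalization → activation → quantization is taken from the text:

```python
import numpy as np

def quan(x, bits=8):                      # as in the sketch of formula (2)
    delta = 2.0 ** (1 - bits)
    return np.round(x / delta) * delta

def layer_forward(x_prev, weight, gamma, beta, mu, sigma, eps=1e-5, bits=8):
    """Sketch of the i-th network layer: convolve the (i-1)-th processing
    result, batch-normalize, activate, then quantize."""
    conv = np.einsum('chw,oc->ohw', x_prev, weight)      # i-th convolution result
    norm = gamma * (conv - mu) / (sigma + eps) + beta    # i-th normalization result
    act = np.maximum(norm, 0.0)                          # ReLU activation assumed
    return quan(act, bits)                               # i-th processing result

x = quan(np.random.rand(3, 8, 8))                        # level-0 result: target image
w = quan(np.random.uniform(-1, 1, (16, 3)))              # second weight (1x1 kernel)
out = layer_forward(x, w, gamma=1.0, beta=0.0, mu=0.0, sigma=1.0)
print(out.shape)                                         # (16, 8, 8)
```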
In a possible implementation manner, batch-normalizing the ith convolution result to obtain the ith normalization result may include: quantizing the first batch normalization parameters according to the preset bit width to obtain quantized second batch normalization parameters, wherein the second batch normalization parameters comprise fixed point type data with the preset bit width; and batch-normalizing the ith convolution result according to the second batch normalization parameters to obtain the ith normalization result.
The first batch normalization parameters may include any one of the mean of the batch normalization layer's input data, the standard deviation of that input data, and the scale and offset of the batch normalization layer, and are used for batch normalization of the ith convolution result. The first batch normalization parameters may be floating point type data or fixed point type data. It is worth noting that the first batch normalization parameters are also quantized based on formula (2). For example, the scale of the batch normalization layer can be quantized into 8-bit-wide fixed point data based on formula (2), and the ith convolution result can then be batch-normalized according to that fixed point data.
In a possible implementation manner, after quantizing the first batch normalization parameters to obtain the quantized second batch normalization parameters, the electronic device may batch-normalize the ith convolution result according to the following formulas to obtain the ith normalization result, which may be the output of the batch normalization layer:

$$y = \gamma_o \cdot \hat{x} + \beta_o \qquad (5)$$

where

$$\hat{x} = \frac{x - \mu_o}{\sigma_o + \epsilon} \qquad (6)$$

In formula (6), $x$ may be the ith convolution result; $\mu_o$ may be the mean of the batch normalization layer's input data, $\sigma_o$ may be the standard deviation of that input data, and $\epsilon$ may be a small value that prevents the divisor from being zero; $\hat{x}$ may be the normalized value, e.g., the result of normalizing the ith convolution result $x$ with $\mu_o$, $\sigma_o$ and $\epsilon$ so that it conforms to a normal distribution; and $\gamma_o$ and $\beta_o$ may be the scale and offset of the batch normalization layer, respectively, used to adjust $\hat{x}$.
Based on formulas (5) and (6) above, the ith convolution result may first be normalized, and the normalized output may then be adjusted using the scale and offset. In the embodiments of the present disclosure, the first batch normalization parameters of the batch normalization layer may be quantized first, and the ith convolution result may be batch-normalized using the quantized second batch normalization parameters; that is, each parameter of the batch normalization layer in formulas (5) and (6) may be a quantized second batch normalization parameter.
By quantizing the first batch normalization parameters to obtain the quantized second batch normalization parameters and batch-normalizing the convolution result according to the second batch normalization parameters, the embodiments of the present disclosure can further reduce the data amount of the neural network parameters and increase the calculation speed of the neural network.
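A sketch of the quantize-then-normalize order described above (parameter names follow formulas (5) and (6); the 8-bit preset width is an assumption):

```python
import numpy as np

def quan(x, bits=8):                      # as in the sketch of formula (2)
    delta = 2.0 ** (1 - bits)
    return np.round(np.asarray(x) / delta) * delta

def quantized_batch_norm(conv, mu_o, sigma_o, gamma_o, beta_o, eps=1e-5, bits=8):
    """Sketch of formulas (5)-(6), with the first batch normalization parameters
    quantized into second batch normalization parameters beforehand."""
    mu_q, sigma_q = quan(mu_o, bits), quan(sigma_o, bits)
    gamma_q, beta_q = quan(gamma_o, bits), quan(beta_o, bits)
    x_hat = (conv - mu_q) / (sigma_q + eps)              # formula (6): normalize
    return gamma_q * x_hat + beta_q                      # formula (5): scale, offset

conv = np.random.randn(16, 8, 8)                         # an i-th convolution result
norm = quantized_batch_norm(conv, conv.mean(), conv.std(), gamma_o=1.0, beta_o=0.0)
```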
In a possible implementation manner, activating and quantizing the ith normalization result to obtain the ith processing result may include: activating the ith normalization result to obtain an ith activation result; and quantizing the ith activation result according to the preset bit width to obtain the ith processing result, wherein the ith processing result comprises fixed point type data with the preset bit width.
In a possible implementation manner, the electronic device may activate the ith normalization result by the following formulas to obtain the ith activation result, and quantize the ith activation result according to the preset bit width to obtain the ith processing result, where the ith processing result includes fixed point type data with the preset bit width:

$$\mathrm{Quan}_A(x, \mathrm{bits}) = \mathrm{Quan}(\hat{a}, \mathrm{bits}) \qquad (7)$$

where

$$\hat{a} = \mathrm{ReLU}(x) \qquad (8)$$

In formula (8), $x$ may be an activation function value among the network parameters, i.e., the ith normalization result, and the activation function may be the ReLU(·) function; $\hat{a}$, the output of the activation function, serves as the input of the Quan(·) function, and bits may be the preset bit width. It should be understood by those skilled in the art that, in addition to the ReLU(·) function, the activation function may be of various types, such as the Sigmoid(·) function and the Tanh(·) function; the present disclosure does not limit the type of activation function. It is worth noting that the activation layer may be located before or after the batch normalization layer; the present disclosure does not limit the quantization order of the neural network parameters.
By activating the normalization result to obtain the activation result and quantizing the activation result according to the preset bit width to obtain the processing result, the embodiments of the present disclosure can reduce the data amount of the neural network parameters, reduce the storage space of the neural network, improve the compression ratio of the neural network, and increase its calculation speed. In addition, although multiplication increases the bit width of the activation function's input, converting the output of the activation function into fixed point data with a low bit width can reduce the operation amount of the neural network and improve its calculation efficiency.
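Finally, formulas (7) and (8) might be sketched as follows (ReLU as named in the text; Sigmoid or Tanh would swap in directly, and the helper names are assumptions):

```python
import numpy as np

def quan(x, bits=8):                      # as in the sketch of formula (2)
    delta = 2.0 ** (1 - bits)
    return np.round(x / delta) * delta

def quan_a(x, bits=8):
    """Sketch of formulas (7)-(8): activate the i-th normalization result with
    ReLU, then quantize the activation to the preset bit width."""
    a_hat = np.maximum(x, 0.0)            # formula (8): ReLU activation result
    return quan(a_hat, bits)              # formula (7): i-th processing result

norm = np.random.randn(16, 8, 8)          # an i-th normalization result
processed = quan_a(norm)                  # fixed point data, preset bit width
```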
To sum up, the embodiment of the present disclosure obtains a neural network for image processing that includes a plurality of network layers; for any network layer, performs tensor decomposition on its first weight parameter to obtain a plurality of core tensors; quantizes the plurality of core tensors according to a preset bit width to obtain a quantized second weight parameter comprising fixed-point data with the preset bit width; and finally calls the neural network to process a target image to obtain an image processing result of the target image. In this way, the data amount of the neural network parameters can be reduced, the storage space of the neural network is reduced, the compression ratio of the neural network is improved, and the calculation speed of the neural network is increased, so that the method has wide applicability.
Fig. 3 shows a schematic diagram of an exemplary image processing method of an embodiment of the present disclosure.
As shown in fig. 3, the image processing method of the embodiment of the present disclosure may include four stages, namely, quantization of the first weight parameter, convolution, quantization of the first batch of standardized parameters, and quantization of the activation function value. The core tensors G_1, ..., G_{d+1} in fig. 3 may be obtained by performing tensor decomposition on the first weight parameter of the network parameters according to the above equation (1). It will be understood by those skilled in the art that fig. 3 exemplarily provides a processing procedure for the parameters involved in the forward propagation of the neural network, and does not limit other processing (e.g., further processing of the obtained image processing results, or processing of neural network parameters during backward propagation).
In one possible implementation, as shown in fig. 3, FP may represent the forward propagation process of the neural network, and the core tensors G_1, ..., G_{d+1} may be obtained by performing tensor decomposition on the tensor W in the above equation (1) (e.g., the first weight parameter of the neural network parameters). Since the tensor W may be floating-point data, the core tensors obtained after tensor decomposition may still be floating-point data.
In one possible implementation, as shown in fig. 3, the plurality of core tensors G_1, ..., G_{d+1} may be quantized respectively according to a preset bit width to obtain a plurality of quantized first tensors G_1^q, ..., G_{d+1}^q. The first tensors in fig. 3 are (d+1) in total, and each first tensor has a bit width of 8 bits. The quantization of the plurality of core tensors may be based on the following formula:

G_k^q = Quan(G_k, bits), k = 1, ..., d+1    (9)

wherein the Quan(·) function in equation (9) is equivalent to the Quan(·) function in equation (2); that is, equation (9) differs from equation (2) only in expression form, and the actual quantization process is the same.
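An illustrative sketch of equation (9) (hypothetical names and toy shapes; Quan(·) is again assumed to be a uniform symmetric quantizer):

```python
import numpy as np

def quan(x, bits=8):
    # Assumed uniform symmetric quantizer for Quan(.) in equations (9)/(2).
    scale = 2.0 ** (bits - 1)
    return np.clip(np.round(x * scale), -scale, scale - 1) / scale

# Hypothetical floating-point TT cores G_1, ..., G_{d+1} with toy shapes
# (r_{k-1} x n_k x r_k); here d + 1 = 3.
rng = np.random.default_rng(0)
cores = [rng.uniform(-1, 1, size=s) for s in [(1, 4, 3), (3, 4, 3), (3, 4, 1)]]

# Equation (9): quantize each core into an 8-bit first tensor G_k^q.
first_tensors = [quan(g, bits=8) for g in cores]
```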
Referring to fig. 3, the first two quantized first tensors G_1^q and G_2^q may be merged (i.e., fused) to obtain the 1st third tensor T_1 with a 15-bit width; the third tensor may be fixed-point data. The third tensor T_1 is then quantized based on equation (3) to obtain the 1st second tensor W_1^q with an 8-bit width. Since the first tensor G_3^q is also 8 bits wide, the second tensor W_1^q may be merged with the first tensor G_3^q to obtain the 2nd third tensor T_2 with a 15-bit width, and the third tensor T_2 is then quantized using equation (3) to obtain the 2nd second tensor W_2^q with an 8-bit width; and so on, until the dth third tensor T_d with a 15-bit width is obtained and quantized using equation (3) to obtain the dth second tensor W_d^q with an 8-bit width. The second tensor W_d^q is the second weight parameter. Furthermore, the second weight parameter obtained by fusing and quantizing the plurality of first tensors may also be expressed by the following set of equations (i.e., equation (10)):
W_1^q = Quan_k(G_1^q ∘ G_2^q)
W_2^q = Quan_k(W_1^q ∘ G_3^q)
……
W_d^q = Quan_k(W_{d-1}^q ∘ G_{d+1}^q)    (10)

wherein ∘ denotes the fusion (merging) of two tensors, and the dth second tensor W_d^q is the second weight parameter W^q.
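A minimal sketch of this progressive fuse-and-requantize chain, assuming the first tensors are TT cores contracted along their shared rank index and that Quan_k(·) of equations (3)/(10) rescales the wider intermediate back onto an 8-bit grid (the rescaling rule and all helper names are assumptions):

```python
import numpy as np

def quan(x, bits=8):
    # Assumed uniform symmetric quantizer (equations (2)/(9)).
    scale = 2.0 ** (bits - 1)
    return np.clip(np.round(x * scale), -scale, scale - 1) / scale

def quan_k(x, bits=8):
    # Assumed Quan_k(.) of equations (3)/(10): rescale the wider (e.g. 15-bit)
    # fusion result back into [-1, 1), then requantize to `bits` bits.
    m = max(float(np.max(np.abs(x))), 1e-12)
    return quan(x / m, bits)

def fuse_and_quantize(first_tensors, bits=8):
    # Equation (10): fold the first tensors left to right; each fusion yields
    # a wider third tensor T_k, which is requantized into a second tensor W_k^q.
    second = first_tensors[0]                    # 1st second tensor = 1st first tensor
    for g in first_tensors[1:]:
        third = np.tensordot(second, g, axes=1)  # kth third tensor (wider bit width)
        second = quan_k(third, bits)             # kth second tensor (8-bit)
    return second                                # dth second tensor = second weight parameter
```

Applied to the `first_tensors` from the previous sketch, this returns the fused, quantized second weight parameter.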
With continued reference to fig. 3, Q8 in fig. 3 may represent the Quan(·) function in equation (2) or equation (9), and 8 may represent that the preset bit width in equation (2) or equation (9) is 8 bits; Qak in fig. 3 may represent the Quan_k(·) function in equation (3) or equation (10), and 8 may represent that the preset bit width in equation (3) or equation (10) is 8 bits. In a convolutional neural network, the second weight parameter W^q may serve as a convolution kernel and be convolved with an input image that has been quantized to an 8-bit width. It is worth noting that, in the case where the tensor W is five-dimensional, the second weight parameter W^q obtained by quantization may still be five-dimensional (e.g., of dimension K_H × K_W × K_D × M × N).
As shown in fig. 3, the second weight parameter W^q may be used as a convolution kernel and convolved with an input image of 8-bit width; the output of the convolution operation may be a tensor x_conv of 15-bit width. The tensor x_conv may be used as the input of the batch normalization layer, i.e., the tensor x_conv may be the convolution result. It is noted that the neural network may include a plurality of network layers, and the convolution result in fig. 3 may be the ith convolution result output by the ith network layer.
In fig. 3, μ_x may be the mean of the input data (i.e., MEAN); σ_x may be the standard deviation of the input data (i.e., STD); γ may be the scale of the batch normalization layer (i.e., Scale); and β may be the offset of the batch normalization layer (i.e., Offset). All four parameters mentioned above may be the first batch of standardized parameters.
In one possible implementation, the mean μ_x and standard deviation σ_x may first be quantized based on equation (2) to obtain the quantized 8-bit mean μ_x^q and standard deviation σ_x^q; then, in combination with equation (6), the convolution result x_conv is normalized according to the quantized 8-bit mean μ_x^q and standard deviation σ_x^q to obtain a 15-bit normalization parameter x̂. The normalization parameter x̂ is then quantized again based on equation (2), so that the 15-bit normalization parameter x̂ becomes an 8-bit normalization parameter x̂^q. Next, the scale γ and offset β may be quantized based on equation (2) to obtain the quantized 8-bit scale γ^q and offset β^q, and, in combination with equation (5), the 8-bit normalization parameter x̂^q is adjusted to obtain the 15-bit output x_bn of the batch normalization layer. It is worth noting that the order in which the first batch of standardized parameters are quantized is not limited in the present disclosure.
With continued reference to fig. 3, the output of the batch normalization layer may be used as the input of the activation function; that is, the output x_bn of the batch normalization layer may be activated in combination with equation (8). The activation function may be a ReLu(·) function. The activation result (i.e., the activation function value) is the 15-bit-wide x_a. The activation result is then quantized based on equation (2) to obtain the 8-bit processing result x_a^q, which is output. Notably, the neural network includes a plurality of network layers, and the processing result x_a^q may correspond to the ith network layer as the ith processing result.
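Putting the four stages of fig. 3 together, a toy per-layer forward pass might read as follows (all helper names hypothetical; the quantizers are assumed uniform as above, and a matrix product stands in for the convolution):

```python
import numpy as np

def quan(x, bits=8):
    # Assumed uniform symmetric quantizer (equation (2)).
    scale = 2.0 ** (bits - 1)
    return np.clip(np.round(x * scale), -scale, scale - 1) / scale

def layer_forward(x_in_q, w_q, bn, bits=8, eps=1e-5):
    # Stage 2: convolution; a matrix product stands in for the conv here.
    x_conv = x_in_q @ w_q                                     # wide intermediate
    # Stage 3: batch normalization with quantized parameters.
    x_hat = quan((x_conv - quan(bn["mean"], bits))
                 / (quan(bn["std"], bits) + eps), bits)
    x_bn = quan(bn["scale"], bits) * x_hat + quan(bn["offset"], bits)
    # Stage 4: activation, then quantize to the preset bit width.
    return quan(np.maximum(x_bn, 0.0), bits)                  # ith processing result

# Toy usage: a quantized input row vector and weight matrix.
rng = np.random.default_rng(1)
x0 = quan(rng.uniform(-1, 1, size=(1, 6)))
w = quan(rng.uniform(-1, 1, size=(6, 4)))
bn = {"mean": 0.0, "std": 1.0, "scale": 1.0, "offset": 0.0}
print(layer_forward(x0, w, bn))
```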
The embodiment of the present disclosure combines tensor decomposition with quantization of the network parameters during the forward propagation of the neural network, including tensor decomposition and quantization of the first weight parameter, and quantization of the first batch of standardized parameters and the activation function values into low-bit-width data. Compared with the related art, in which only tensor decomposition is performed on the weight values, the model of the neural network can thereby be further reduced, further compression of the neural network is realized, and the calculation of the neural network is accelerated.
In a possible implementation manner, the preset bit width is configured according to the dimensions of the plurality of core tensors, core tensors with different dimensions are quantized into first tensors with different bit widths, and the preset bit width is greater than or equal to a bit width threshold. For example, for TT decomposition, the plurality of core tensors obtained by performing tensor decomposition on the first weight parameter include two-dimensional tensors and three-dimensional tensors. Referring to fig. 2, the edge core tensors G_1 and G_{d+1} may be two-dimensional, and the central core tensors may be three-dimensional. In quantizing the plurality of core tensors, the edge core tensors G_1 and G_{d+1} may be quantized into first tensors of 4-bit width, and the central core tensors may be quantized into first tensors of 8-bit width. The bit width threshold may be preset to 4 bits or 8 bits.
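A sketch of this dimension-dependent configuration (hypothetical names): the bit width per core is chosen from its number of dimensions, so two-dimensional edge cores get 4 bits and three-dimensional central cores get 8 bits.

```python
import numpy as np

def quan(x, bits):
    # Assumed uniform symmetric quantizer (equation (2)).
    scale = 2.0 ** (bits - 1)
    return np.clip(np.round(x * scale), -scale, scale - 1) / scale

def quantize_by_dimension(cores, edge_bits=4, center_bits=8):
    # Edge TT cores are two-dimensional and central TT cores are
    # three-dimensional, so the preset bit width is chosen from each
    # core's dimensionality.
    return [quan(g, edge_bits if g.ndim == 2 else center_bits) for g in cores]

# Toy TT cores: 2-D edge cores, 3-D central cores.
rng = np.random.default_rng(2)
cores = [rng.uniform(-1, 1, size=s)
         for s in [(4, 3), (3, 4, 3), (3, 4, 3), (3, 4)]]
quantized = quantize_by_dimension(cores)
```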
Fig. 4a and 4b show schematic diagrams of the distribution of the core tensor data of an embodiment of the present disclosure.
In fig. 4a and 4b, the horizontal axis may represent the value of the core tensor data over the interval [-1, 1], and the vertical axis may represent the density of the distribution of the core tensor data; the upper bound represents the maximum value of the core tensor data, the lower bound represents the minimum value, and both are indicated by dotted lines. Taking a core tensor in the form of a two-dimensional matrix as an example, each element in the matrix may be normalized to the interval [-1, 1], and the density of the distribution of the elements over the interval is then counted; that is, fig. 4a and 4b may show the distribution of the elements in the matrix.
As shown in fig. 4a and 4b, there are four TT cores in total. The two diagrams on the left side of fig. 4a show the data distribution of the first TT core when the preset bit width is 15 bits and 8 bits, and the two diagrams on the right side of fig. 4a show the data distribution of the second TT core under the same two bit widths; likewise, the two diagrams on the left side of fig. 4b show the data distribution of the third TT core, and the two diagrams on the right side of fig. 4b show the data distribution of the fourth TT core. The first and fourth TT cores are edge TT cores, and the second and third TT cores are central TT cores. As can be seen from fig. 4a and 4b, the edge TT cores have a flatter data distribution than the central TT cores, and whether the preset bit width is 15 bits or 8 bits has little influence on the data distribution of the TT cores; this phenomenon, referred to as the data distribution shift (CDS), was first observed by the inventors. Based on the data distribution shift phenomenon, different preset bit widths can be configured according to the dimensions of the plurality of core tensors, and core tensors with different dimensions can be quantized into first tensors with different bit widths; the bit width of the edge core tensors can thus be further reduced without greatly increasing the loss of precision, so that the neural network can be further compressed, the compression ratio of the neural network is improved, the storage space of the neural network is further reduced, and the calculation speed of the neural network is increased.
In a possible implementation manner, the preset bit width (also referred to as a feature bit width) may be greater than or equal to a bit width threshold, so as to ensure the stability of neural network training. For example, the bit width threshold may be preconfigured to 8 bits. When the preset bit width is greater than or equal to 8 bits, the stability of the neural network training process can be ensured; when the preset bit width is smaller than 8 bits (e.g., 4 bits), the stability of the training process cannot be guaranteed. In addition, a preset bit width of 8 bits can recover the network parameters (e.g., the first weight parameter) better than a preset bit width of 4 bits; configuring the preset bit width to 4 bits can cause excessive data loss during the compression of the neural network.
In one aspect, because the Tensor-Train decomposition method and the quantization method are orthogonal (that is, the result of the Tensor-Train decomposition and the result of the quantization do not affect each other), combining Tensor-Train decomposition with quantization can compress the neural network (for example, a 2D or 3D convolutional neural network) to the maximum extent and improve its compression ratio. In another aspect, compared with current neural network data processing algorithms, the embodiment of the present disclosure can be combined with current mainstream deep learning acceleration chips: the decomposed weights are embedded, and multiply-add operations on floating-point numbers are converted into multiply-add operations on low-bit-width data, thereby greatly reducing chip area and energy consumption. In another aspect, the embodiment of the present disclosure is universal and can be widely applied to models in neural network data processing algorithms such as convolutional neural networks, artificial neural networks, recurrent neural networks, long short-term memory neural networks, deep reinforcement learning networks, graph neural networks, and tensor neural networks. In another aspect, the embodiment of the present disclosure also discovers the data distribution shift phenomenon and configures different preset bit widths for core tensors with different dimensions, so that the bit width of the edge core tensors can be further reduced while the precision loss is kept small. In yet another aspect, the embodiment of the present disclosure can enhance the stability of the image processing method by configuring the preset bit width to be greater than or equal to the bit width threshold.
In terms of application, the method can be widely applied to artificial neural networks, long short-term memory neural networks, reinforcement learning, graph neural networks, tensor neural networks and the like for neural network processing of image, video and 3D point cloud data, so as to reduce the storage of the neural network, reduce the consumption of computing resources, reduce the area and energy consumption of a neural network chip, and improve the computing speed of the neural network chip. In addition, the reduction of computing resources and the improvement of computing speed brought by the method facilitate real-time operation of neural networks on mobile phones and deployment in cloud computing; applications related to neural networks, such as image classification, recognition and tracking, are expected to migrate from server CPU/GPU computing clusters with high computing cost and huge energy consumption to embedded intelligent terminals, so that artificial intelligence can better serve human society.
In one possible implementation, the image processing method may be executed on an electronic device. The electronic device may include a processor, a memory, and a communication interface. It will be appreciated by those skilled in the art that the configuration of the electronic device is not intended to be a limitation of the electronic device and may include more or fewer components, or some components in combination, or a different arrangement of components. Wherein:
the processor may be a control center of the electronic device, and connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby performing overall control of the electronic device. The processor may be implemented by a CPU or a Graphics Processing Unit (GPU). The processor may also include a neural network chip.
The memory may be used to store software programs and modules. The processor executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory mainly comprises a program storage area and a data storage area, wherein the program storage area may store an operating system, an acquisition module, a tensor decomposition module, a quantization module, a calling module, an application program required by at least one function (such as neural network training and neural network inference), and the like; the data storage area may store data created according to the use of the electronic device, and the like. The memory may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The processor executes the following functions by running the acquisition module: acquiring a neural network, wherein the neural network is used for image processing and comprises a plurality of network layers, and an initial first weight parameter of each network layer comprises a multidimensional tensor and is floating point type data. The processor executes the following functions by running the tensor decomposition module: performing tensor decomposition on the first weight parameter of any network layer to obtain a plurality of core tensors, wherein the core tensors comprise two-dimensional tensors and/or three-dimensional tensors. The processor executes the following functions by running the quantization module: quantizing the plurality of core tensors according to a preset bit width to obtain a quantized second weight parameter, wherein the second weight parameter comprises fixed point type data with the preset bit width. The processor executes the following functions by running the calling module: calling the neural network to process the target image to obtain an image processing result of the target image.
Fig. 5 shows a block diagram of an image processing apparatus of an embodiment of the present disclosure.
The image processing apparatus may be implemented as all or a part of an electronic device by software, hardware, or a combination of both. As shown in fig. 5, in one possible implementation, the image processing apparatus may include an acquisition module, a tensor decomposition module, a quantization module, and a calling module.
An obtaining module 510, configured to obtain a neural network, where the neural network is used for image processing, the neural network includes a plurality of network layers, and an initial first weight parameter of each network layer includes a multidimensional tensor and is floating point type data.
And a tensor decomposition module 520, configured to perform tensor decomposition on the first weight parameter of the network layer to obtain a plurality of core tensors, where the plurality of core tensors include a two-dimensional tensor and/or a three-dimensional tensor.
A quantizing module 530, configured to quantize the multiple kernel tensors according to a preset bit width to obtain a quantized second weight parameter, where the second weight parameter includes fixed point type data with the preset bit width.
And the invoking module 540 is configured to invoke the neural network to process the target image, so as to obtain an image processing result of the target image.
It should be noted that, when the apparatus provided in the foregoing embodiment implements its functions, only the division into the above functional modules is illustrated; in practical applications, the above functions may be distributed among different functional modules according to actual needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure further provides an electronic device, which includes: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the steps executed by the electronic equipment in the above method embodiments are realized. For example, the electronic device may be a terminal or a server.
The disclosed embodiments also provide a non-transitory computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the methods in the various method embodiments described above.
Fig. 6 illustrates a block diagram of a terminal 800 for data processing, according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
FIG. 7 illustrates a block diagram of a server 1900 for data processing according to an exemplary embodiment. For example, the apparatus 1900 may be provided as a server. Referring to fig. 7, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a neural network, wherein the neural network is used for image processing and comprises a plurality of network layers, and an initial first weight parameter of each network layer comprises a multidimensional tensor and is floating point type data;
carrying out tensor decomposition on a first weight parameter of any network layer to obtain a plurality of core tensors, wherein the core tensors comprise two-dimensional tensors and/or three-dimensional tensors;
quantizing the plurality of kernel tensors according to a preset bit width to obtain a quantized second weight parameter, wherein the second weight parameter comprises fixed point type data with a preset bit width;
and calling the neural network to process the target image to obtain an image processing result of the target image.
2. The method of claim 1, wherein the invoking the neural network to process the target image to obtain the image processing result of the target image comprises:
for the ith network layer, convolving the (i-1)th-level processing result output by the (i-1)th network layer according to the second weight parameter of the ith network layer to obtain an ith convolution result, wherein 1 ≤ i ≤ N, N is the number of network layers of the neural network, and the 0th-level processing result is the target image;
carrying out batch standardization on the ith convolution result to obtain an ith standardization result;
activating and quantizing the ith normalization result to obtain an ith processing result; and
determining the Nth processing result as the image processing result of the target image.
3. The method according to claim 1, wherein the quantizing the plurality of core tensors according to a preset bit width to obtain a quantized second weight parameter includes:
quantizing the plurality of core tensors respectively according to a preset bit width to obtain a plurality of quantized first tensors;
for the kth first tensor, fusing the kth first tensor with the (k-1)th second tensor to obtain a kth third tensor, wherein 2 ≤ k ≤ K, K is the number of the first tensors, and the 1st second tensor is the 1st first tensor;
quantizing the kth third tensor according to a preset bit width to obtain a kth second tensor;
determining a Kth second tensor as the second weight parameter.
4. The method of claim 1, wherein the tensor decomposing the first weight parameter of the network layer to obtain a plurality of core tensors comprises:
and carrying out tensor decomposition on the first weight parameter by a tensor row TT decomposition mode to obtain the plurality of nuclear tensors.
5. The method of claim 2, wherein the batch normalizing the ith convolution result to obtain an ith normalized result comprises:
quantizing the first batch of standardized parameters according to the preset bit width to obtain a second batch of quantized standardized parameters, wherein the second batch of standardized parameters comprise fixed point type data with the preset bit width;
and according to the second batch of standardization parameters, carrying out batch standardization on the ith convolution result to obtain an ith standardization result.
6. The method of claim 2, wherein said activating and quantifying said ith normalization result to obtain an ith processing result comprises:
activating the ith normalized result to obtain an ith activation result;
and quantizing the ith activation result according to the preset bit width to obtain the ith processing result, wherein the ith processing result comprises fixed point type data with the preset bit width.
7. The method of claim 3, wherein the preset bit width is configured according to a dimension of the plurality of core tensors, core tensors of different dimensions are quantized into a first tensor of different bit widths, and the preset bit width is greater than or equal to a bit width threshold.
8. An image processing apparatus, characterized in that the apparatus comprises:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a neural network, the neural network is used for image processing, the neural network comprises a plurality of network layers, and an initial first weight parameter of each network layer comprises a multidimensional tensor and is floating point type data;
the tensor decomposition module is used for carrying out tensor decomposition on the first weight parameter of any network layer to obtain a plurality of core tensors, and the core tensors comprise two-dimensional tensors and three-dimensional tensors;
the quantization module is configured to quantize the plurality of kernel tensors according to a preset bit width to obtain a quantized second weight parameter, where the second weight parameter includes fixed point type data with the preset bit width;
and the calling module is used for calling the neural network to process the target image to obtain an image processing result of the target image.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1 to 7.
CN202110436569.2A 2021-04-22 2021-04-22 Image processing method, image processing device, electronic equipment and storage medium Pending CN113095486A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110436569.2A CN113095486A (en) 2021-04-22 2021-04-22 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110436569.2A CN113095486A (en) 2021-04-22 2021-04-22 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113095486A true CN113095486A (en) 2021-07-09

Family

ID=76679805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110436569.2A Pending CN113095486A (en) 2021-04-22 2021-04-22 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113095486A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628290A (en) * 2021-07-28 2021-11-09 武汉大学 Wave band self-adaptive hyperspectral image compression method based on 3D convolution self-encoder
CN113868671A (en) * 2021-12-01 2021-12-31 支付宝(杭州)信息技术有限公司 Data processing method, and back door defense method and device of neural network model
WO2023125785A1 (en) * 2021-12-29 2023-07-06 杭州海康威视数字技术股份有限公司 Data processing method, network training method, electronic device, and storage medium
WO2023231887A1 (en) * 2022-06-01 2023-12-07 华为技术有限公司 Tensor-based continual learning method and device


Similar Documents

Publication Publication Date Title
CN113095486A (en) Image processing method, image processing device, electronic equipment and storage medium
EP3474194B1 (en) Method and apparatus with neural network parameter quantization
CN109902186B (en) Method and apparatus for generating neural network
US20230196085A1 (en) Residual quantization for neural networks
US20190340499A1 (en) Quantization for dnn accelerators
US10579334B2 (en) Block floating point computations using shared exponents
CN109800865B (en) Neural network generation and image processing method and device, platform and electronic equipment
US20210312289A1 (en) Data processing method and apparatus, and storage medium
CN110188865B (en) Information processing method and device, electronic equipment and storage medium
CN109920016B (en) Image generation method and device, electronic equipment and storage medium
CN112381707B (en) Image generation method, device, equipment and storage medium
CN111105017A (en) Neural network quantization method and device and electronic equipment
CN109903252B (en) Image processing method and device, electronic equipment and storage medium
EP4181061A1 (en) Method for reconstructing tree-shaped tissue in image, and device and storage medium
US11948090B2 (en) Method and apparatus for video coding
CN117315758A (en) Facial expression detection method and device, electronic equipment and storage medium
KR20220018633A (en) Image retrieval method and device
CN109635926B (en) Attention feature acquisition method and device for neural network and storage medium
CN111709784B (en) Method, apparatus, device and medium for generating user retention time
CN112561050B (en) Neural network model training method and device
CN110852202A (en) Video segmentation method and device, computing equipment and storage medium
CN111598037B (en) Human body posture predicted value acquisition method, device, server and storage medium
WO2024012171A1 (en) Binary quantization method, neural network training method, device and storage medium
US20220310069A1 (en) Methods and devices for irregular pruning for automatic speech recognition
US12002453B2 (en) Methods and devices for irregular pruning for automatic speech recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination