CN115331067A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN115331067A
Authority
CN
China
Prior art keywords
image processing
node
processing model
quantization
quantized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210163194.1A
Other languages
Chinese (zh)
Inventor
林家彦
韩旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Weride Technology Co Ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd filed Critical Guangzhou Weride Technology Co Ltd
Priority to CN202210163194.1A priority Critical patent/CN115331067A/en
Publication of CN115331067A publication Critical patent/CN115331067A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of artificial intelligence, and provides an image processing method, an image processing device, image processing equipment and a storage medium, which are used for reducing the amount of computation and the memory footprint of image processing and improving its computational efficiency. The image processing method comprises the following steps: performing target quantization node insertion on a pre-trained image processing model to obtain a modified image processing model; performing quantization training on the modified image processing model to obtain a quantized image processing model; deleting redundant nodes from the quantized image processing model to obtain an initial image processing model; and acquiring a target image processing model obtained by deploying the initial image processing model and an image to be processed, and analyzing the image to be processed through the target image processing model to obtain image information.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of neural networks, and in particular, to a method, an apparatus, a device, and a storage medium for processing an image.
Background
In the field of image recognition, the convolutional neural networks (a class of deep neural networks) that dominate today are computationally intensive algorithms that require a large number of matrix multiplications. A neural network is usually deployed on a Graphics Processing Unit (GPU) for computation; although this shortens inference time to some extent, the total amount of computation is not reduced, the memory consumed is very large, and the memory of a GPU device is usually limited. How to reduce the computation and memory footprint of convolutional neural networks is therefore an important topic in the field of deep learning.
Quantization is one of the common methods for reducing the amount of network computation. In general, training and inference of deep neural networks use 32-bit floating-point numbers, and quantization refers to the process of approximating continuous 32-bit floating-point numbers with a finite set of discrete values. A common quantization method is to map 32-bit floating-point numbers onto 8-bit integers, so that floating-point computation becomes integer (fixed-point) computation.
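As a concrete illustration of this mapping (a minimal sketch, not taken from the patent; the function names, the example scale and the zero point are chosen for illustration only), affine 8-bit quantization can be written as follows:

```python
import numpy as np

def quantize_to_int8(x: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map float32 values to int8: q = round(x / scale) + zero_point, clamped to [-128, 127]."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_from_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 values back to float32: x is approximately (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.5, 0.0, 0.42, 3.2], dtype=np.float32)
scale, zero_point = 3.2 / 127, 0                      # symmetric example values
q = quantize_to_int8(x, scale, zero_point)            # integer (fixed-point) representation
x_hat = dequantize_from_int8(q, scale, zero_point)    # approximate float reconstruction
```

The gap between x and x_hat is the rounding error that the remainder of this description is concerned with.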
Since the range of 32-bit floating-point numbers is much larger than that of an 8-bit integer, mapping 32 bits directly to 8 bits causes a large loss of information. A common way to reduce this loss is post-training quantization: a data set of a certain scale is run through the trained model to obtain the input (output) distribution of each layer of the neural network; that distribution is truncated by a region-partitioning algorithm and mapped to 8-bit values to obtain a second distribution, and the closeness of the two distributions is evaluated with measures such as KL divergence in order to determine an optimal truncation region (also called a saturation region). Although this method reduces the loss caused by quantization, so that the quantized distribution stays close to the model's original distribution, the accuracy still falls short of the ideal in some practical scenarios, because the model weights are not actually adjusted for quantization; that is, the model's robustness to input quantization may be poor, and the problems of heavy computation, high memory usage and low computational efficiency of image processing remain.
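The following is a simplified sketch of this post-training calibration idea (not the exact procedure of any specific tool; the bin count, candidate range and random calibration data are illustrative assumptions): several truncation thresholds are tried, the activation histogram is collapsed to 8-bit resolution, and the threshold whose quantized distribution is closest to the original under KL divergence is kept.

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-10) -> float:
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def choose_truncation_threshold(activations: np.ndarray, num_bins: int = 2048) -> float:
    """Pick the |activation| threshold whose 8-bit-quantized histogram best matches the original."""
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    best_t, best_kl = float(edges[-1]), np.inf
    for i in range(128, num_bins):                    # candidate truncation points
        ref = hist.astype(np.float64)
        ref[i - 1] += ref[i:].sum()                   # fold the clipped tail into the last kept bin
        ref = ref[:i]
        quantized = np.zeros(i)                       # re-bucket the kept bins into 128 levels
        for chunk in np.array_split(np.arange(i), 128):
            quantized[chunk] = ref[chunk].sum() / len(chunk)
        kl = kl_divergence(ref, quantized)
        if kl < best_kl:
            best_t, best_kl = float(edges[i]), kl
    return best_t

calibration_activations = np.random.randn(100_000).astype(np.float32)  # stand-in calibration data
threshold = choose_truncation_threshold(calibration_activations)
```

Calibration of this kind only adjusts the truncation range, not the weights, which is why the scheme described below turns to quantization-aware training instead.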
Disclosure of Invention
The invention provides an image processing method, an image processing device, image processing equipment and a storage medium, which are used for reducing the amount of computation and the memory footprint of image processing and improving its computational efficiency.
The first aspect of the present invention provides a method for processing an image, including:
performing target quantization node insertion on the pre-trained image processing model to obtain a modified image processing model;
performing quantization training on the modified image processing model to obtain a quantized image processing model;
deleting redundant nodes of the quantized image processing model to obtain an initial image processing model;
and acquiring a target image processing model after the initial image processing model is deployed and an image to be processed, and analyzing and processing the image to be processed through the target image processing model to obtain image information.
Optionally, in a first implementation manner of the first aspect of the present invention, the performing target quantization node insertion on the pre-trained image processing model to obtain a modified image processing model includes:
carrying out target node identification on the pre-trained image processing model to obtain a node to be quantized, and acquiring the node type of the node to be quantized;
and performing target quantization node insertion on the node to be quantized based on the node type to obtain a modified image processing model, wherein the target quantization node is a node formed by a quantization node and an inverse quantization node.
Optionally, in a second implementation manner of the first aspect of the present invention, the performing, based on the node type, target quantization node insertion on the node to be quantized to obtain a modified image processing model includes:
when the node type is used for indicating a node with weight, tensor-by-tensor target quantization node insertion and channel-by-channel target quantization node insertion are carried out on the node to be quantized to obtain a modified image processing model;
and when the node type is used for indicating an addition node, respectively performing target quantization node insertion on a plurality of inputs of the node to be quantized to obtain a modified image processing model.
Optionally, in a third implementation manner of the first aspect of the present invention, the performing quantization training on the modified image processing model to obtain a quantized image processing model includes:
obtaining a result before quantization, a quantization proportion and a zero point after quantization of the modified image processing model;
and calculating the modified image processing model based on the result before quantization, the quantized proportion, the quantized zero point and a preset quantization formula to obtain the quantized image processing model.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the performing redundant node deletion on the quantized image processing model to obtain an initial image processing model includes:
based on a preset graph optimization algorithm, obtaining a plurality of quantization training nodes in the quantized image processing model, and identifying the continuity of the quantization training nodes to obtain continuity information;
identifying redundant nodes in each quantization training node based on the continuity information to obtain nodes to be deleted;
and deleting the nodes to be deleted of each quantization training node in the quantized image processing model to obtain an initial image processing model.
Optionally, in a fifth implementation manner of the first aspect of the present invention, the identifying redundant nodes in each quantized training node based on the continuity information to obtain nodes to be deleted includes:
if the continuity information is used for indicating that the two quantization training nodes are discontinuous, obtaining an inverse quantization node in target quantization nodes of the quantization training nodes to obtain a first redundant node, and determining the first redundant node as a node to be deleted;
if the continuity information is used for indicating that the two quantization training nodes are continuous, acquiring a target quantization node between the two quantization training nodes to obtain a second redundant node;
obtaining inverse quantization nodes in target quantization nodes of the previously connected quantization training nodes to obtain third redundant nodes;
and determining the second redundant node and the third redundant node as nodes to be deleted.
Optionally, in a sixth implementation manner of the first aspect of the present invention, the obtaining a target image processing model after the initial image processing model is deployed and an image to be processed, and analyzing and processing the image to be processed through the target image processing model to obtain image information includes:
converting the initial image processing model into a corresponding network structure expression to obtain a target image processing model, and sending the target image processing model to a preset inference engine;
acquiring an image to be processed and an image processing requirement, calling the target image processing model through the inference engine, and analyzing and processing the image to be processed based on the image processing requirement to obtain image information.
A second aspect of the present invention provides an image processing apparatus, comprising:
the inserting module is used for performing target quantization node insertion on the pre-trained image processing model to obtain a modified image processing model;
the training module is used for carrying out quantization training on the modified image processing model to obtain a quantized image processing model;
the deleting module is used for deleting redundant nodes of the quantized image processing model to obtain an initial image processing model;
and the analysis processing module is used for acquiring a target image processing model after the initial image processing model is deployed and an image to be processed, and analyzing and processing the image to be processed through the target image processing model to obtain image information.
Optionally, in a first implementation manner of the second aspect of the present invention, the insertion module includes:
the identification unit is used for identifying target nodes of the pre-trained image processing model to obtain nodes to be quantized and acquiring node types of the nodes to be quantized;
and the inserting unit is used for performing target quantization node insertion on the node to be quantized based on the node type to obtain a modified image processing model, wherein the target quantization node is a node formed by a quantization node and an inverse quantization node.
Optionally, in a second implementation manner of the second aspect of the present invention, the insertion unit is specifically configured to:
when the node type is used for indicating a node with weight, tensor-by-tensor target quantization node insertion and channel-by-channel target quantization node insertion are carried out on the node to be quantized to obtain a modified image processing model;
and when the node type is used for indicating an addition node, respectively performing target quantization node insertion on a plurality of inputs of the node to be quantized to obtain a modified image processing model.
Optionally, in a third implementation manner of the second aspect of the present invention, the training module is specifically configured to:
obtaining a result before quantization, a quantization proportion and a zero point after quantization of the modified image processing model;
and calculating the modified image processing model based on the result before quantization, the quantized proportion, the quantized zero point and a preset quantization formula to obtain the quantized image processing model.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the deleting module includes:
the first identification unit is used for acquiring a plurality of quantization training nodes in the quantized image processing model based on a preset graph optimization algorithm, and identifying the continuity of the plurality of quantization training nodes to obtain continuity information;
the second identification unit is used for identifying redundant nodes in each quantization training node based on the continuity information to obtain nodes to be deleted;
and the deleting unit is used for deleting the to-be-deleted nodes of all the quantized training nodes in the quantized image processing model to obtain an initial image processing model.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the second identifying unit is specifically configured to:
if the continuity information is used for indicating that the two quantization training nodes are discontinuous, obtaining an inverse quantization node in a target quantization node of the quantization training nodes to obtain a first redundant node, and determining the first redundant node as a node to be deleted;
if the continuity information is used for indicating that the two quantization training nodes are continuous, acquiring a target quantization node between the two quantization training nodes to obtain a second redundant node;
obtaining inverse quantization nodes in target quantization nodes of the previously connected quantization training nodes to obtain third redundant nodes;
and determining the second redundant node and the third redundant node as nodes to be deleted.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the analysis processing module is specifically configured to:
converting the initial image processing model into a corresponding network structure expression to obtain a target image processing model, and sending the target image processing model to a preset inference engine;
acquiring an image to be processed and an image processing requirement, calling the target image processing model through the inference engine, and analyzing and processing the image to be processed based on the image processing requirement to obtain image information.
A third aspect of the present invention provides an apparatus for processing an image, comprising: a memory and at least one processor, the memory having stored therein a computer program; the at least one processor calls the computer program in the memory to cause the image processing apparatus to execute the image processing method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute the above-described image processing method.
In the technical scheme provided by the invention, target quantization node insertion is performed on the pre-trained image processing model to obtain a modified image processing model; quantization training is performed on the modified image processing model to obtain a quantized image processing model; redundant nodes are deleted from the quantized image processing model to obtain an initial image processing model; and a target image processing model obtained by deploying the initial image processing model and an image to be processed are acquired, and the image to be processed is analyzed through the target image processing model to obtain image information. In the embodiment of the invention, during training of the deep neural network model, a target quantization node (a combination of a quantization node and an inverse quantization node) is inserted in front of a node to be quantized, so that the node is disturbed by quantization during calculation and thus perceives the existence of quantization; after training, the paired target quantization nodes are eliminated (that is, redundant nodes are deleted); finally, the target image processing model is deployed to run inference on images to be processed. This compresses the image processing model so that it gains the benefit of quantization while avoiding the accuracy loss quantization would otherwise cause, improves the robustness of the image processing model to input quantization, reduces the amount of computation and the memory footprint of image processing, and improves the computational efficiency.
Drawings
FIG. 1 is a diagram of an embodiment of a method for processing an image according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of a method for processing an image according to an embodiment of the present invention;
FIG. 3 is a diagram of an embodiment of performing target quantization node insertion on a node to be quantized according to the embodiment of the present invention;
FIG. 4 is a diagram of an embodiment of target quantization node insertion when a node type is used to indicate a weighted node in an embodiment of the present invention;
FIG. 5 is a diagram illustrating an embodiment of target quantization node insertion when a node type is used to indicate a summing node in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of two discontinuous quantized training nodes in an embodiment of the present invention;
FIG. 7 is a schematic diagram of an embodiment of two continuous quantized training nodes according to the embodiment of the present invention;
fig. 8 is a schematic diagram of an embodiment of deleting a node to be deleted in quantized training nodes when continuity information is used to indicate that two quantized training nodes are discontinuous in an embodiment of the present invention;
fig. 9 is a schematic diagram of an embodiment of deleting a node to be deleted in quantized training nodes when continuity information is used to indicate that two quantized training nodes are continuous in the embodiment of the present invention;
FIG. 10 is a schematic diagram of an embodiment of an apparatus for processing an image according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another embodiment of an apparatus for processing an image according to an embodiment of the present invention;
fig. 12 is a schematic diagram of an embodiment of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention provide an image processing method, an image processing apparatus, an image processing device, and a storage medium, which reduce the amount of computation and the memory footprint of image processing and improve its computational efficiency.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be implemented in other sequences than those illustrated or described herein. Moreover, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For understanding, a specific flow of an embodiment of the present invention is described below, and referring to fig. 1, an embodiment of a method for processing an image according to an embodiment of the present invention includes:
101. and performing target quantization node insertion on the pre-trained image processing model to obtain a modified image processing model.
It is to be understood that the execution subject of the present invention may be a processing apparatus of an image, and may also be a terminal or a server, which is not limited herein. The embodiment of the present invention is described by taking a server as an execution subject.
The pre-trained image processing model is a deep neural network in the field of image recognition. Because quantization changes the characteristics of the original model, training with quantization from the very beginning would weaken the learning of some features; therefore the model needs to be pre-trained in a non-quantized state, that is, the pre-trained image processing model here is a model pre-trained without quantization.
The server obtains the number of training rounds configured for the created original image processing model in the non-quantized training mode, trains the original image processing model for that number of rounds in the non-quantized mode, and determines the trained original image processing model that meets the training requirement as the pre-trained image processing model.
The server acquires a node to be quantized in the pre-trained image processing model, inserts a target quantization node at a target position of the node to be quantized, and thereby modifies the model structure of the pre-trained image processing model to obtain the modified image processing model, wherein the target position may be before and/or after the node to be quantized.
It should be noted that the target quantization node is used to indicate a quantization node that causes quantization interference to a node to be quantized in the pre-trained image processing model; the target quantization node comprises a quantization node and an inverse quantization node; the number of the inserted target quantization nodes can be one or more than one, and can be set according to the model quantization requirement, which is not limited herein; the number of the quantization nodes in the target quantization node can be one or more, and can be set according to the model quantization requirement, which is not limited herein; the number of the inverse quantization nodes in the target quantization node may be one or more, and may be set according to the model quantization requirement, which is not limited herein.
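A minimal PyTorch sketch of such a target quantization node, assuming symmetric signed 8-bit quantization (the class name QuantDequant and the fixed range are illustrative, not from the patent): the quantize step (Q) is immediately followed by the de-quantize step (DQ), so downstream nodes still receive floating-point values but already experience the rounding error of quantization.

```python
import torch
import torch.nn as nn

class QuantDequant(nn.Module):
    """Target quantization node: a quantize node (Q) immediately followed by a de-quantize node (DQ)."""
    def __init__(self, scale: float, zero_point: int = 0,
                 qmin: int = -128, qmax: int = 127):
        super().__init__()
        self.scale, self.zero_point = scale, zero_point
        self.qmin, self.qmax = qmin, qmax

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = torch.clamp(torch.round(x / self.scale) + self.zero_point,
                        self.qmin, self.qmax)            # Q: high-bit value -> low-bit value
        return (q - self.zero_point) * self.scale         # DQ: low-bit value -> high-bit value
```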
102. And carrying out quantization training on the modified image processing model to obtain a quantized image processing model.
After modifying the model structure of the pre-trained image processing model to obtain the modified image processing model, the server performs quantization training on the modified image processing model: it acquires the parameters of quantization training and uses them to train the modified image processing model, so as to obtain the quantized image processing model. The parameters of quantization training may include, but are not limited to, a result before quantization, a quantization scale, and a zero point after quantization. The result before quantization indicates an output result of the pre-trained image processing model, such as a prediction result or a classification result, and is determined by the output (the type of image processing function) of the pre-trained image processing model without limitation here; the zero point after quantization indicates the offset of the 32-bit floating-point value (i.e., the float32 value).
103. And deleting redundant nodes of the quantized image processing model to obtain an initial image processing model.
And the server acquires the redundant nodes in the quantized image processing model based on a preset redundant node detection strategy. Wherein the redundant node is used to indicate an inserted inverse quantization node and/or a target quantization node that affects a desired effect of the quantized image processing model; the redundant node detection strategy is used for detecting redundant nodes, and the redundant node detection strategy may be a graph optimization algorithm or preset redundant node matching information, which is not limited herein.
The server then deletes the redundant nodes in the quantized image processing model to obtain an initial image processing model. In a feasible embodiment, after deleting the redundant nodes, the server runs redundant node detection again on the resulting image processing model to check whether any redundant nodes remain; if so, the corresponding redundant nodes are deleted, and if not, no further processing is performed. This improves the accuracy and effectiveness of redundant node deletion, and thereby the effectiveness and robustness of the initial image processing model.
104. And acquiring a target image processing model after the initial image processing model is deployed and an image to be processed, and analyzing and processing the image to be processed through the target image processing model to obtain image information.
After obtaining an initial image processing model, the server evaluates its accuracy and inference time. If the accuracy and time consumption of the initial image processing model meet preset conditions, the initial image processing model is determined as the target image processing model, the target image processing model is deployed on an inference engine, an image to be processed is acquired, and the image to be processed is analyzed through the target image processing model to obtain image information. If the accuracy and time consumption do not meet the preset conditions, an initial image processing model is obtained again (that is, target quantization node insertion, quantization training, redundant node deletion, and accuracy and time evaluation are performed again on the pre-trained image processing model) until the accuracy and time consumption of the newly obtained initial image processing model meet the preset conditions; the newly obtained initial image processing model is then determined as the target image processing model, deployed on the inference engine, and used to analyze the acquired image to be processed to obtain image information.
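An illustrative sketch of this evaluation gate; the hooks build_candidate and evaluate are hypothetical placeholders for "insert target quantization nodes, run quantization training, delete redundant nodes" and "measure accuracy and latency", respectively, and are not part of the patent.

```python
def produce_target_model(build_candidate, evaluate, acc_threshold: float, latency_budget_ms: float):
    """build_candidate(): QDQ insertion + quantization training + redundant-node deletion.
    evaluate(model): returns (accuracy, latency_ms). Both are hypothetical hooks."""
    while True:
        candidate = build_candidate()
        accuracy, latency_ms = evaluate(candidate)
        if accuracy >= acc_threshold and latency_ms <= latency_budget_ms:
            return candidate        # this model is deployed to the inference engine
```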
The target image processing model is deployed on an autonomous vehicle, and images captured during autonomous driving are acquired through at least one camera on the vehicle to obtain the images to be processed; the analysis performed on the image to be processed by the target image processing model may be image classification, image detection, image segmentation, and the like.
In the embodiment of the invention, during training of the deep neural network model, a target quantization node (a combination of a quantization node and an inverse quantization node) is inserted in front of a node to be quantized, so that the node is disturbed by quantization during calculation and thus perceives the existence of quantization; after training, the paired target quantization nodes are eliminated (that is, redundant nodes are deleted); finally, the target image processing model is deployed to run inference on images to be processed. This compresses the image processing model so that it gains the benefit of quantization while avoiding the accuracy loss quantization would otherwise cause, improves the robustness of the image processing model to input quantization, reduces the amount of computation and the memory footprint of image processing, and improves the computational efficiency.
Referring to fig. 2, another embodiment of the image processing method according to the embodiment of the present invention includes:
201. and carrying out target node identification on the pre-trained image processing model to obtain a node to be quantized, and acquiring the node type of the node to be quantized.
The target node is used to indicate a node to be quantized, and the target node may include, but is not limited to, a weighted node and an addition node, where a weighted node is, for example, a convolution, a fully-connected layer, or another common node with weights.
The server identifies and labels the target nodes and their types in the pre-trained image processing model to obtain node information, where the node information comprises the nodes to be quantized and their type labels; the nodes to be quantized and their type labels are then extracted from the node information, yielding the nodes to be quantized and the node type of each node to be quantized.
Note that more critical nodes, for example a node used for regression that must retain high accuracy, require no processing; that is, preset key nodes are not treated as nodes to be quantized, so that high accuracy is maintained.
202. And based on the node type, performing target quantization node insertion on the node to be quantized to obtain the modified image processing model, wherein the target quantization node is a node formed by a quantization node and an inverse quantization node.
The target quantization node is a node formed by a quantization node followed by an inverse quantization node: the quantization node quantizes a high-bit value to a low-bit value, and the inverse quantization node de-quantizes the low-bit value back to a high-bit value. By way of example and not limitation, as shown in fig. 3, fig. 3 illustrates target quantization node insertion for a node to be quantized, where Q is a quantization node, DQ is an inverse quantization node, QDQ is a pseudo-quantization node (a Q node immediately followed by a DQ node), and OP is the node to be quantized.
Specifically, when the node type is used for indicating a node with weight, the server performs tensor-by-tensor target quantization node insertion and channel-by-channel target quantization node insertion on the node to be quantized to obtain a modified image processing model; and when the node type is used for indicating the addition node, respectively performing target quantization node insertion on a plurality of inputs of the node to be quantized to obtain the modified image processing model.
The target node may include, but is not limited to, a weighted node and an addition node, and the present embodiment is described, by way of example and not limitation, for a node type indicating a weighted node and a node type indicating an addition node. As shown in fig. 4, which illustrates target quantization node insertion when the node type indicates a weighted node, the weighted node is a convolution node: one target quantization node is inserted between the convolution node and its input to realize tensor-wise (per-tensor) insertion, so that the whole feature map fed to the node to be quantized is quantized, and one target quantization node is inserted between the convolution node and its weight to realize channel-wise (per-channel) insertion. As shown in fig. 5, which illustrates target quantization node insertion when the node type indicates an addition node, the addition node has two inputs, input A and input B; one target quantization node is inserted between the addition node and input A, and another between the addition node and input B, so that both sides of the addition node, i.e., all incoming edges of the node to be quantized, are quantized to low bit width.
Target quantization node insertion is thus carried out according to the different characteristics of the nodes to be quantized, which improves the accuracy and effectiveness of the insertion; as a result, the image processing model is compressed, gains the benefit of quantization while avoiding the accuracy loss quantization would otherwise cause, and becomes more robust to input quantization.
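A minimal PyTorch sketch of these two insertion rules, reusing the QuantDequant module from the earlier sketch (the wrapper names and the per-channel scale choice are assumptions for illustration): per-tensor QDQ on the convolution input, per-channel QDQ on the convolution weight, and one QDQ per input of an addition node.

```python
import torch
import torch.nn as nn

class FakeQuantConv2d(nn.Module):
    """Convolution node with per-tensor QDQ on its input and per-channel QDQ on its weight."""
    def __init__(self, conv: nn.Conv2d, input_scale: float):
        super().__init__()
        self.conv = conv
        self.input_qdq = QuantDequant(input_scale)        # per-tensor, on the input feature map
        w = conv.weight.detach()
        # one scale per output channel, derived from each channel's absolute maximum
        self.weight_scales = (w.abs().amax(dim=(1, 2, 3)) / 127.0).clamp_min(1e-8)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.input_qdq(x)
        s = self.weight_scales.view(-1, 1, 1, 1)
        # per-channel QDQ on the weight (training would use a straight-through estimator
        # for the rounding, as in the later sketch)
        w_q = torch.clamp(torch.round(self.conv.weight / s), -128, 127) * s
        return nn.functional.conv2d(x, w_q, self.conv.bias,
                                    self.conv.stride, self.conv.padding)

def fake_quant_add(a: torch.Tensor, b: torch.Tensor,
                   qdq_a: "QuantDequant", qdq_b: "QuantDequant") -> torch.Tensor:
    """Addition node: one QDQ per input so both operands reach the add at low bit width."""
    return qdq_a(a) + qdq_b(b)
```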
203. And carrying out quantization training on the modified image processing model to obtain a quantized image processing model.
Specifically, the server obtains a result before quantization, a quantization proportion and a zero point after quantization of the modified image processing model; and calculating the modified image processing model based on the result before quantization, the quantized proportion, the zero point after quantization and a preset quantization formula to obtain the quantized image processing model.
The result before quantization indicates an output result of the pre-trained image processing model, such as a prediction result or a classification result, and is determined by the output (the type of image processing function) of the pre-trained image processing model without limitation here; the zero point after quantization indicates the offset of the 32-bit floating-point value (i.e., the float32 value). The server obtains the maximum and minimum values of the input of the modified image processing model, and calculates the quantization scale from these maximum and minimum values.
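A short illustrative sketch (not from the patent) of deriving the quantization scale and zero point from the observed input minimum and maximum, here for an unsigned 8-bit range [0, 255]; a symmetric signed variant would use [-128, 127] with a zero point of 0.

```python
def scale_and_zero_point(x_min: float, x_max: float, qmin: int = 0, qmax: int = 255):
    """Derive the quantization scale S and zero point Z from observed input min/max."""
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)          # keep 0.0 exactly representable
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = 0 if scale == 0 else int(round(qmin - x_min / scale))
    return scale, max(qmin, min(qmax, zero_point))

# e.g. an input observed in [-1.0, 3.0] gives scale of about 0.0157 and zero point 64
print(scale_and_zero_point(-1.0, 3.0))
```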
For example, the modified image processing model is operated on using the result before quantization, the quantization scale, the zero point after quantization and a preset quantization formula to obtain the quantized result, that is, the quantized image processing model. The preset quantization formula is: Q = round(R / S) + Z, where Q represents the result after quantization (i.e., the quantized image processing model), round represents rounding to the nearest integer, R represents the result before quantization, S represents the quantization scale, and Z represents the zero point after quantization. When the gradient for quantization training of the modified image processing model is computed from the result before quantization, the quantization scale, the zero point after quantization and the preset quantization formula, the gradient is propagated in a straight-through manner during back-propagation: the rounding in the quantization node is skipped, and the gradient arriving at the node is passed back unchanged.
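A minimal PyTorch sketch of this training-time behaviour (an assumption-laden illustration, not the patent's exact implementation): the forward pass applies Q = round(R / S) + Z followed by de-quantization, and the backward pass passes the incoming gradient straight through the rounding step unchanged.

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    """Forward: Q = round(R / S) + Z, clamped to the int8 range, then de-quantized.
    Backward: straight-through, the gradient of round() is treated as identity."""
    @staticmethod
    def forward(ctx, r: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
        q = torch.clamp(torch.round(r / scale) + zero_point, -128, 127)
        return (q - zero_point) * scale

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None, None   # pass the gradient through unchanged; no grad for S, Z

r = torch.randn(4, requires_grad=True)
out = FakeQuantSTE.apply(r, 0.05, 0)
out.sum().backward()                     # r.grad is all ones despite the rounding in forward
```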
By carrying out quantization training on the modified image processing model, the image processing model can obtain quantization benefit, precision loss caused by quantization can be avoided, and robustness of the image processing model to input quantization is improved.
204. And deleting redundant nodes of the quantized image processing model to obtain an initial image processing model.
Specifically, the server acquires a plurality of quantization training nodes in the quantized image processing model based on a preset graph optimization algorithm, and identifies the continuity of these quantization training nodes to obtain continuity information; redundant nodes in each quantization training node are identified based on the continuity information to obtain nodes to be deleted; and the nodes to be deleted of each quantization training node are deleted from the quantized image processing model to obtain the initial image processing model.
A quantization training node is the node formed after a target quantization node has been inserted for a node to be quantized. The continuity information indicates whether two quantization training nodes are continuous, that is, whether they are directly connected to each other. According to whether the two quantization training nodes are continuous (directly connected), the server identifies the redundant nodes in each quantization training node, namely the inserted inverse quantization nodes and/or target quantization nodes that would harm the expected effect of the quantized image processing model, obtains them as the nodes to be deleted, and deletes the nodes to be deleted of each quantization training node from the quantized image processing model to obtain the initial image processing model. This improves the accuracy and effectiveness of redundant node deletion, and thereby the effectiveness and robustness of the initial image processing model.
Specifically, the server identifies redundant nodes in each quantized training node based on the continuity information to obtain nodes to be deleted, and the method includes: if the continuity information is used for indicating that the two quantization training nodes are discontinuous, obtaining an inverse quantization node in a target quantization node of the quantization training nodes to obtain a first redundant node, and determining the first redundant node as a node to be deleted; if the continuity information is used for indicating that the two quantitative training nodes are continuous, acquiring a target quantitative node between the two quantitative training nodes to obtain a second redundant node; obtaining an inverse quantization node in a target quantization node of a previously connected quantization training node to obtain a third redundant node; and determining the second redundant node and the third redundant node as nodes to be deleted.
By way of example and not limitation, as shown in fig. 6, two quantization training nodes are discontinuous: Q1, DQ1 and OP1 constitute a first quantization training node, Q2, DQ2 and OP2 constitute a second quantization training node, and the first and second quantization training nodes are not directly connected to each other; each is placed separately at its own position in the structure. By way of example and not limitation, as shown in fig. 7, two quantization training nodes are continuous: Q3, DQ3 and OP3 constitute a third quantization training node, Q4, DQ4 and OP4 constitute a fourth quantization training node, and the third and fourth quantization training nodes are connected back to back.
For example, if the continuity information is used to indicate that two quantization training nodes are discontinuous (for example, the first quantization training node formed by Q1, DQ1 and OP1 and the second quantization training node formed by Q2, DQ2 and OP2 shown in fig. 6), then, because a DQ (inverse quantization node) would de-quantize a low-bit value back to a high-bit value, which is not the desired quantization effect, the inverse quantization node DQ1 in the target quantization node of the first quantization training node is obtained as first redundant node 1 and the inverse quantization node DQ2 in the target quantization node of the second quantization training node is obtained as first redundant node 2; first redundant node 1 and first redundant node 2 are determined as node 1 to be deleted and node 2 to be deleted respectively, and the result after deleting node 1 and node 2 is shown in fig. 8. If the continuity information is used to indicate that two quantization training nodes are continuous (for example, the third quantization training node composed of Q3, DQ3 and OP3 and the fourth quantization training node composed of Q4, DQ4 and OP4 shown in fig. 7), then, since both the third and the fourth quantization training node are quantized, the second node (OP4) is expected to compute at low bit width during deployment, so the pseudo-quantization node in front of it is redundant: the target quantization node between the two quantization training nodes is obtained as the second redundant node, namely Q4 followed by DQ4, and the inverse quantization node in the target quantization node (Q3 followed by DQ3) of the previously connected quantization training node (the third quantization training node) is obtained as the third redundant node, namely DQ3. The second redundant node (Q4, DQ4) and the third redundant node DQ3 are determined as node 3 to be deleted and node 4 to be deleted respectively, and the result after deleting node 3 and node 4 is shown in fig. 9.
By identifying the redundant nodes in each quantization training node based on the continuity information, the accuracy and effectiveness of redundant node identification are improved.
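An illustrative graph-pass sketch of the two deletion rules above; the graph and node helper methods are hypothetical, not an existing library API. For two discontinuous quantization training nodes each DQ is dropped, and for two continuous ones the Q/DQ pair in front of the second node plus the DQ of the first node are dropped, so that the hand-off between them stays at low bit width.

```python
def prune_redundant_nodes(graph):
    """graph, its node-pair iterator and remove_node() are hypothetical helpers for illustration."""
    to_delete = []
    for first, second in graph.quantization_training_node_pairs():
        if graph.are_continuous(first, second):
            # continuous case: the Q/DQ pair before the second node and the first node's DQ are redundant
            to_delete += [second.q_node, second.dq_node, first.dq_node]
        else:
            # discontinuous case: each node's DQ is redundant
            to_delete += [first.dq_node, second.dq_node]
    for node in to_delete:
        graph.remove_node(node)   # assumed to reconnect the removed node's input to its consumers
    return graph
```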
205. And acquiring a target image processing model after the initial image processing model is deployed and an image to be processed, and analyzing and processing the image to be processed through the target image processing model to obtain image information.
Specifically, the server converts the initial image processing model into a corresponding network structure expression to obtain the target image processing model, and sends the target image processing model to a preset inference engine; an image to be processed and an image processing requirement are acquired, the target image processing model is called through the inference engine, and the image to be processed is analyzed based on the image processing requirement to obtain image information.
The server converts the initial image processing model into a generic network structure expression such as ONNX to obtain the target image processing model, and provides it to a preset inference engine, for example TensorRT, for corresponding deployment with low-bit computation. In an autonomous driving scenario, there is considerable room for quantization in convolutional neural networks for tasks such as image classification, image detection and image segmentation. When the target image processing model is called through the inference engine and the image to be processed is analyzed according to the image processing requirement, the inference process (i.e., the analysis processing) of the quantized model (the target image processing model) is the same as that of the unquantized network (the pre-trained image processing model), except that the intermediate features of the network are kept at low precision and the operators or hardware involved in the computation are optimized for low precision.
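A deployment sketch under common assumptions (PyTorch's ONNX exporter and NVIDIA's trtexec command line; the placeholder model, file names and input shape are illustrative, not from the patent): the pruned model is exported to a generic network structure expression (ONNX) and then handed to the inference engine for low-bit execution.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())   # placeholder for the initial model
model.eval()
dummy_input = torch.randn(1, 3, 640, 640)               # illustrative input shape
torch.onnx.export(model, dummy_input, "target_model.onnx", opset_version=13)

# Building a low-bit engine from the exported file could then look like (shell command):
#   trtexec --onnx=target_model.onnx --int8 --saveEngine=target_model.plan
```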
After obtaining the initial image processing model, the server evaluates its accuracy and inference time. If the accuracy and time consumption of the initial image processing model meet preset conditions, the initial image processing model is determined as the target image processing model, the target image processing model is deployed on the inference engine, an image to be processed is acquired, and the image to be processed is analyzed through the target image processing model to obtain image information. If the accuracy and time consumption do not meet the preset conditions, an initial image processing model is obtained again (that is, target quantization node insertion, quantization training, redundant node deletion, and accuracy and time evaluation are performed again on the pre-trained image processing model) until the accuracy and time consumption of the newly obtained initial image processing model meet the preset conditions; the newly obtained initial image processing model is then determined as the target image processing model, deployed on the inference engine, and used to analyze the acquired image to be processed to obtain image information.
Deploying the target image processing model on the inference engine to analyze the image to be processed reduces the amount of computation and the memory footprint of image processing and improves the computational efficiency.
In the embodiment of the invention, during training of the deep neural network model, a target quantization node (a combination of a quantization node and an inverse quantization node) is inserted in front of a node to be quantized, so that the node is disturbed by quantization during calculation and thus perceives the existence of quantization; after training, the paired target quantization nodes are eliminated by a graph optimization algorithm (that is, redundant nodes are deleted); finally, the target image processing model is deployed to run inference on images to be processed. This compresses the image processing model so that it gains the benefit of quantization while avoiding the accuracy loss quantization would otherwise cause, improves the robustness of the image processing model to input quantization, reduces the amount of computation and the memory footprint of image processing, and improves the computational efficiency of the image processing model.
The image processing method according to the embodiment of the present invention is described above. Referring to fig. 10, an embodiment of the image processing apparatus according to an embodiment of the present invention includes:
the inserting module 1010 is used for performing target quantization node insertion on the pre-trained image processing model to obtain a modified image processing model;
a training module 1020, configured to perform quantization training on the modified image processing model to obtain a quantized image processing model;
a deleting module 1030, configured to delete redundant nodes of the quantized image processing model to obtain an initial image processing model;
the analysis processing module 1040 is configured to obtain a target image processing model after the initial image processing model is deployed and an image to be processed, and analyze and process the image to be processed through the target image processing model to obtain image information.
The function implementation of each module in the image processing apparatus corresponds to each step in the embodiment of the image processing method, and the function and implementation process thereof are not described in detail herein.
In the embodiment of the invention, during training of the deep neural network model, a target quantization node (a combination of a quantization node and an inverse quantization node) is inserted in front of a node to be quantized, so that the node is disturbed by quantization during calculation and thus perceives the existence of quantization; after training, the paired target quantization nodes are eliminated (that is, redundant nodes are deleted); finally, the target image processing model is deployed to run inference on images to be processed. This compresses the image processing model so that it gains the benefit of quantization while avoiding the accuracy loss quantization would otherwise cause, improves the robustness of the image processing model to input quantization, reduces the amount of computation and the memory footprint of image processing, and improves the computational efficiency.
Referring to fig. 11, another embodiment of an image processing apparatus according to an embodiment of the present invention includes:
the inserting module 1010 is used for performing target quantization node insertion on the pre-trained image processing model to obtain a modified image processing model;
the insertion module 1010 specifically includes:
the identification unit 1011 is configured to perform target node identification on the pre-trained image processing model to obtain a node to be quantized, and obtain a node type of the node to be quantized;
an inserting unit 1012, configured to perform target quantization node insertion on a node to be quantized based on a node type to obtain a modified image processing model, where a target quantization node is a node formed by a quantization node and an inverse quantization node that follows the quantization node;
a training module 1020, configured to perform quantization training on the modified image processing model to obtain a quantized image processing model;
a deleting module 1030, configured to delete redundant nodes of the quantized image processing model to obtain an initial image processing model;
the analysis processing module 1040 is configured to obtain a target image processing model after the initial image processing model is deployed and an image to be processed, and analyze the image to be processed through the target image processing model to obtain image information.
Optionally, the inserting unit 1012 may be further specifically configured to:
when the node type is used for indicating the node with the weight, tensor-by-tensor target quantization node insertion and channel-by-channel target quantization node insertion are carried out on the node to be quantized to obtain a modified image processing model;
and when the node type is used for indicating the addition node, respectively performing target quantization node insertion on a plurality of inputs of the node to be quantized to obtain the modified image processing model.
Optionally, the training module 1020 may be further specifically configured to:
obtaining a result before quantization, a quantization proportion and a zero point after quantization of the modified image processing model;
and calculating the modified image processing model based on the result before quantization, the quantized proportion, the zero point after quantization and a preset quantization formula to obtain the quantized image processing model.
Optionally, the deleting module 1030 includes:
the first identification unit 1031 is configured to obtain a plurality of quantized training nodes in the quantized image processing model based on a preset graph optimization algorithm, and identify continuity of the plurality of quantized training nodes to obtain continuity information;
the second identification unit 1032 is configured to identify redundant nodes in each quantized training node based on the continuity information, so as to obtain nodes to be deleted;
and the deleting unit 1033 is configured to delete a node to be deleted of each quantized training node in the quantized image processing model, so as to obtain an initial image processing model.
Optionally, the second identifying unit 1032 may be further specifically configured to:
if the continuity information is used for indicating that the two quantization training nodes are discontinuous, obtaining an inverse quantization node in a target quantization node of the quantization training nodes to obtain a first redundant node, and determining the first redundant node as a node to be deleted;
if the continuity information is used for indicating that the two quantization training nodes are continuous, acquiring a target quantization node between the two quantization training nodes to obtain a second redundant node;
obtaining an inverse quantization node in a target quantization node of a previously connected quantization training node to obtain a third redundant node;
and determining the second redundant node and the third redundant node as nodes to be deleted.
Optionally, the analysis processing module 1040 may be further specifically configured to:
converting the initial image processing model into a corresponding network structure expression to obtain a target image processing model, and sending the target image processing model to a preset inference engine;
the method comprises the steps of obtaining an image to be processed and an image processing requirement, calling a target image processing model through a reasoning engine, and analyzing and processing the image to be processed based on the image processing requirement to obtain image information.
The function implementation of each module and each unit in the image processing apparatus corresponds to the steps in the embodiments of the image processing method, and their functions and implementation processes are not described in detail again here.
In the embodiment of the invention, a target quantization node (a combination of a quantization node and an inverse quantization node) is inserted in front of each node to be quantized during training of the deep neural network model, so that the node is perturbed by quantization during calculation and thereby perceives the presence of quantization. After training is finished, the paired target quantization nodes are eliminated through a graph optimization algorithm (that is, the redundant nodes are deleted), and the target image processing model is finally deployed to perform inference on images to be processed. In this way, the image processing model is compressed and obtains the benefit of quantization while the precision loss caused by quantization is avoided, the robustness of the image processing model to input quantization is improved, the computation amount and memory occupation of image processing are reduced, and the calculation efficiency of the image processing model is improved.
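The embodiment does not state how gradients propagate through the inserted quantization nodes during training; a commonly used (and here assumed) choice is the straight-through estimator, sketched below, which lets the forward pass perceive the quantization error while the backward pass treats the rounding as identity. The class name and parameter values are illustrative.

import torch

class FakeQuantSTE(torch.autograd.Function):
    # Forward: fake quantization as performed by an inserted target quantization node.
    # Backward: straight-through estimator - the rounding is treated as identity
    # so gradients keep flowing to the weights being trained.

    @staticmethod
    def forward(ctx, x, scale, zero_point, qmin, qmax):
        q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
        return (q - zero_point) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient passes straight through to x; the other inputs get none.
        return grad_output, None, None, None, None

x = torch.randn(4, requires_grad=True)
y = FakeQuantSTE.apply(x, 0.05, 0, -128, 127)
y.sum().backward()
print(x.grad)   # all ones: the rounding did not block the gradient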
Figs. 10 and 11 describe the image processing apparatus in the embodiment of the present invention in detail from the perspective of modular functional entities; the following describes the image processing apparatus in the embodiment of the present invention in detail from the perspective of hardware processing.
Fig. 12 is a schematic structural diagram of an image processing apparatus 1200 according to an embodiment of the present invention. The image processing apparatus 1200 may vary considerably in configuration or performance and may include one or more processors (CPUs) 1210 (e.g., one or more processors), a memory 1220, and one or more storage media 1230 (e.g., one or more mass storage devices) storing applications 1233 or data 1232. The memory 1220 and the storage medium 1230 may be transient or persistent storage. The program stored in the storage medium 1230 may include one or more modules (not shown), each of which may include a series of computer program operations for the image processing apparatus 1200. Further, the processor 1210 may be configured to communicate with the storage medium 1230 and to execute, on the image processing apparatus 1200, the series of computer program operations in the storage medium 1230.
The image processing apparatus 1200 may also include one or more power supplies 1240, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1260, and/or one or more operating systems 1231, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc. Those skilled in the art will appreciate that the image processing apparatus configuration shown in Fig. 12 does not constitute a limitation on the image processing apparatus, which may include more or fewer components than those shown, combine some components, or arrange the components differently.
The present invention also provides an image processing device, comprising: a memory having a computer program stored therein and at least one processor, the memory and the at least one processor being interconnected by a line; the at least one processor calls the computer program in the memory to cause the image processing device to perform the steps of the image processing method described above.
The present invention also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium, having a computer program stored thereon which, when run on a computer, causes the computer to perform the steps of the image processing method described above.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several computer programs to enable a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a portable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An image processing method, characterized in that the image processing method comprises:
performing target quantization node insertion on a pre-trained image processing model to obtain a modified image processing model;
performing quantization training on the modified image processing model to obtain a quantized image processing model;
deleting redundant nodes of the quantized image processing model to obtain an initial image processing model;
and acquiring a target image processing model obtained by deploying the initial image processing model and an image to be processed, and analyzing the image to be processed through the target image processing model to obtain image information.
2. The image processing method according to claim 1, wherein the performing target quantization node insertion on the pre-trained image processing model to obtain the modified image processing model comprises:
performing target node identification on the pre-trained image processing model to obtain a node to be quantized, and acquiring the node type of the node to be quantized;
and performing target quantization node insertion on the node to be quantized based on the node type to obtain a modified image processing model, wherein the target quantization node is a node formed by a quantization node and an inverse quantization node.
3. The image processing method according to claim 2, wherein the performing target quantization node insertion on the node to be quantized based on the node type to obtain a modified image processing model comprises:
when the node type indicates a node with weights, performing tensor-by-tensor target quantization node insertion and channel-by-channel target quantization node insertion on the node to be quantized to obtain a modified image processing model;
and when the node type indicates an addition node, performing target quantization node insertion on each of a plurality of inputs of the node to be quantized to obtain a modified image processing model.
4. The method according to claim 1, wherein the performing quantization training on the modified image processing model to obtain a quantized image processing model comprises:
acquiring a pre-quantization result, a quantization scale and a quantization zero point of the modified image processing model;
and calculating the modified image processing model based on the pre-quantization result, the quantization scale, the quantization zero point and a preset quantization formula to obtain the quantized image processing model.
5. The method according to claim 1, wherein the performing redundant node deletion on the quantized image processing model to obtain an initial image processing model comprises:
acquiring a plurality of quantization training nodes in the quantized image processing model based on a preset graph optimization algorithm, and identifying the continuity of the plurality of quantization training nodes to obtain continuity information;
identifying redundant nodes in each quantization training node based on the continuity information to obtain nodes to be deleted;
and deleting the nodes to be deleted of each quantization training node in the quantized image processing model to obtain an initial image processing model.
6. The method according to claim 5, wherein the identifying redundant nodes in each quantization training node based on the continuity information to obtain nodes to be deleted comprises:
if the continuity information indicates that two quantization training nodes are discontinuous, acquiring the inverse quantization node in the target quantization node of the quantization training node to obtain a first redundant node, and determining the first redundant node as a node to be deleted;
if the continuity information indicates that the two quantization training nodes are continuous, acquiring the target quantization node between the two quantization training nodes to obtain a second redundant node;
acquiring the inverse quantization node in the target quantization node of the preceding quantization training node to obtain a third redundant node;
and determining the second redundant node and the third redundant node as nodes to be deleted.
7. The image processing method according to any one of claims 1 to 6, wherein the acquiring a target image processing model obtained by deploying the initial image processing model and an image to be processed, and analyzing the image to be processed through the target image processing model to obtain image information comprises:
converting the initial image processing model into a corresponding network structure expression to obtain a target image processing model, and sending the target image processing model to a preset inference engine;
acquiring an image to be processed and an image processing requirement, calling the target image processing model through the inference engine, and analyzing and processing the image to be processed based on the image processing requirement to obtain image information.
8. An apparatus for processing an image, comprising:
an inserting module, configured to perform target quantization node insertion on a pre-trained image processing model to obtain a modified image processing model;
a training module, configured to perform quantization training on the modified image processing model to obtain a quantized image processing model;
a deleting module, configured to delete redundant nodes of the quantized image processing model to obtain an initial image processing model;
and an analysis processing module, configured to acquire a target image processing model obtained by deploying the initial image processing model and an image to be processed, and to analyze the image to be processed through the target image processing model to obtain image information.
9. An apparatus for processing an image, characterized in that the apparatus comprises: a memory and at least one processor, the memory having stored therein a computer program;
the at least one processor invokes the computer program in the memory to cause the image processing device to perform the image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the image processing method according to any one of claims 1 to 7.
CN202210163194.1A 2022-02-22 2022-02-22 Image processing method, device, equipment and storage medium Pending CN115331067A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210163194.1A CN115331067A (en) 2022-02-22 2022-02-22 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115331067A true CN115331067A (en) 2022-11-11

Family

ID=83915551

Country Status (1)

Country Link
CN (1) CN115331067A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination