CN113762403A - Image processing model quantization method and device, electronic equipment and storage medium - Google Patents
Image processing model quantization method and device, electronic equipment and storage medium
- Publication number
- CN113762403A CN113762403A CN202111076409.8A CN202111076409A CN113762403A CN 113762403 A CN113762403 A CN 113762403A CN 202111076409 A CN202111076409 A CN 202111076409A CN 113762403 A CN113762403 A CN 113762403A
- Authority
- CN
- China
- Prior art keywords
- model
- quantization
- detection rate
- false detection
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application provide an image processing model quantization method and device, an electronic device, and a storage medium. Image processing models of a plurality of storage points are acquired as a plurality of models to be quantized; model quantization is performed on each model to be quantized to obtain each quantization model; a quantization test picture is acquired and analyzed with each quantization model to obtain the test result of each quantization model; and a target quantization model is determined based on the test result of each quantization model. By quantizing the image processing models of multiple storage points from a single training run, the problem that a single, arbitrarily selected model yields a low-accuracy quantization model because of the random distribution of model parameters can be alleviated, and the success rate of model quantization can be increased; no parameter tuning of the image processing model is required, manual workload is reduced, and a solid foundation is laid for batch model quantization.
Description
Technical Field
The present application relates to the field of deep learning technologies, and in particular, to a method and an apparatus for quantizing an image processing model, an electronic device, and a storage medium.
Background
With the development of artificial intelligence technology, deep learning models are increasingly applied to image processing scenarios. Deep learning model quantization is the process of approximating the continuous-valued floating-point weights (tensor data) of a model with a finite set of discrete fixed-point values at low inference-accuracy loss, i.e., approximately representing 32-bit floating-point data with a data type of fewer bits while the model input and output remain floating-point. This reduces the size of the deep learning model, lowers its hardware consumption, and accelerates its inference speed.
In the related art, a deep learning model is first trained with sample pictures to obtain a trained deep learning model; a certain number of quantization reference pictures are then selected manually, and the trained deep learning model is quantized with those reference pictures to obtain a quantization model. With this approach, however, the accuracy of the resulting quantization model is random, and a quantization model with high accuracy cannot be guaranteed.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing model quantization method, an image processing model quantization device, an electronic device, and a storage medium, so as to obtain a quantization model with high accuracy. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an image processing model quantization method, including:
acquiring image processing models of a plurality of storage points to obtain a plurality of models to be quantized;
respectively carrying out model quantization on each model to be quantized to obtain each quantization model;
obtaining a quantitative test picture, analyzing the quantitative test picture by using each quantitative model, and respectively obtaining a test result of each quantitative model;
and determining a target quantization model based on the test result of each quantization model.
In a possible implementation, before the step of obtaining the image processing model of the plurality of storage points and obtaining the plurality of models to be quantized, the method further includes:
and training the image processing model by using the sample picture, and saving the current image processing model as the image processing model of the current storage point when the training times meet the preset storage condition.
In a possible implementation manner, the performing model quantization on each to-be-quantized model to obtain each quantized model includes:
acquiring a quantization reference picture and a quantization configuration file, wherein the quantization reference picture is a randomly selected positive sample picture;
and model quantization is respectively carried out on each model to be quantized based on the quantization reference picture and the quantization configuration file to obtain each quantization model.
In a possible implementation manner, the test result includes a detection rate and a false detection rate;
the obtaining of the quantization test picture, analyzing the quantization test picture by using each quantization model, and obtaining the test result of each quantization model respectively includes:
obtaining a quantitative test picture, wherein the quantitative test picture comprises a positive sample picture and a negative sample picture;
analyzing a positive sample picture in the quantitative test picture by using each quantitative model to respectively obtain the detection rate and a first false detection rate of each quantitative model;
analyzing the negative sample picture in the quantitative test picture by using each quantitative model to respectively obtain a second false detection rate of each quantitative model;
and respectively obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model.
In a possible implementation manner, the obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model respectively includes:
for each quantization model, determining the number of false detection pictures of the quantization model according to the number of positive sample pictures, the number of negative sample pictures, the first false detection rate and the second false detection rate of the quantization model in the quantization test pictures;
and determining the false detection rate of the quantization model according to the number of the false detection pictures of the quantization model, the number of the positive sample pictures and the number of the negative sample pictures in the quantization test picture.
In a possible implementation, the determining a target quantization model based on the test result of each quantization model includes:
acquiring a preset detection rate threshold value and a preset false detection rate threshold value;
filtering out quantization models with a detection rate smaller than a preset detection rate threshold value and filtering out quantization models with a false detection rate larger than a preset false detection rate threshold value in each quantization model;
and selecting a target quantization model from the filtered quantization models.
In a possible implementation, the determining a target quantization model based on the test result of each quantization model includes:
weighting the detection rate and the false detection rate of each quantization model to obtain the weighted value of the quantization model;
sequencing the weighted values of the quantization models according to the time sequence of the storage points corresponding to the quantization models to obtain a weighted value sequence;
acquiring a preset range threshold, and determining each weighted value of the selected numerical value within the preset range threshold in the weighted value sequence to obtain each target weighted value; dividing each target weighted value sequenced continuously in the weighted value sequence into the same weighted value set to obtain each weighted value set;
and selecting a specified target weighted value from the weighted value set with the maximum number of the target weighted values, and taking the quantization model corresponding to the specified target weighted value as a target quantization model.
In a second aspect, an embodiment of the present application provides an image processing model quantizing device, including:
the model to be quantized acquiring module is used for acquiring image processing models of a plurality of storage points to obtain a plurality of models to be quantized;
the model quantization module is used for respectively carrying out model quantization on each model to be quantized to obtain each quantization model;
the model testing module is used for acquiring quantitative test pictures, analyzing the quantitative test pictures by utilizing the quantitative models and respectively obtaining the testing results of the quantitative models;
and the target quantization model determining module is used for determining a target quantization model based on the test result of each quantization model.
In a possible embodiment, the apparatus further comprises:
and the model training module is used for training the image processing model by using the sample picture, and saving the current image processing model as the image processing model of the current storage point when the training times meet the preset storage condition.
In a possible implementation, the model quantization module is specifically configured to: acquiring a quantization reference picture and a quantization configuration file, wherein the quantization reference picture is a randomly selected positive sample picture; and model quantization is respectively carried out on each model to be quantized based on the quantization reference picture and the quantization configuration file to obtain each quantization model.
In a possible implementation manner, the test result includes a detection rate and a false detection rate; the model test module comprises:
the quantitative test picture acquisition sub-module is used for acquiring a quantitative test picture, wherein the quantitative test picture comprises a positive sample picture and a negative sample picture;
the positive sample picture analysis submodule is used for analyzing the positive sample picture in the quantitative test picture by utilizing each quantitative model to respectively obtain the detection rate and the first false detection rate of each quantitative model;
the negative sample picture analysis submodule is used for analyzing the negative sample picture in the quantitative test picture by utilizing each quantitative model to respectively obtain a second false detection rate of each quantitative model;
and the false detection rate determining submodule is used for respectively obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model.
In a possible implementation manner, the false detection rate determining sub-module is specifically configured to determine, for each quantization model, the number of false detection pictures of the quantization model according to the number of positive sample pictures, the number of negative sample pictures, and the first false detection rate and the second false detection rate of the quantization model; determining the false detection rate of the quantization model according to the number of false detection pictures of the quantization model, the number of positive sample pictures and the number of negative sample pictures in the quantization test picture;
in a possible implementation manner, the target quantization model determination module is specifically configured to: acquiring a preset detection rate threshold value and a preset false detection rate threshold value; filtering out quantization models with a detection rate smaller than a preset detection rate threshold value and filtering out quantization models with a false detection rate larger than a preset false detection rate threshold value in each quantization model; selecting a target quantization model from the filtered quantization models;
in a possible implementation manner, the target quantization model determination module is specifically configured to: weighting the detection rate and the false detection rate of each quantization model to obtain the weighted value of the quantization model; sequencing the weighted values of the quantization models according to the time sequence of the storage points corresponding to the quantization models to obtain a weighted value sequence; acquiring a preset range threshold, and determining each weighted value of the selected numerical value within the preset range threshold in the weighted value sequence to obtain each target weighted value; dividing each target weighted value sequenced continuously in the weighted value sequence into the same weighted value set to obtain each weighted value set; and selecting a specified target weighted value from the weighted value set with the maximum number of the target weighted values, and taking the quantization model corresponding to the specified target weighted value as a target quantization model.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the image processing model quantization method described in any part of the present application when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the image processing model quantization method described in any part of the present application.
In a fifth aspect, an embodiment of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the image processing model quantization method described in any part of the present application.
The embodiment of the application has the following beneficial effects:
the image processing model quantization method, the image processing model quantization device, the electronic equipment and the storage medium, provided by the embodiment of the application, are used for acquiring image processing models of a plurality of storage points to obtain a plurality of models to be quantized; respectively carrying out model quantization on each model to be quantized to obtain each quantization model; obtaining a quantitative test picture, analyzing the quantitative test picture by using each quantitative model, and respectively obtaining the test result of each quantitative model; and determining a target quantization model based on the test result of each quantization model. The image processing models of a plurality of storage points in one training are subjected to model quantization, so that the problem that the accuracy of a quantization model obtained by randomly distributing and selecting a single model for quantization is low due to model parameters can be solved, and the success rate of model quantization can be increased; and the parameter adjustment of the image processing model is not needed, the manual workload is reduced, and a good foundation is laid for batch model quantification. Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be derived from these drawings by those skilled in the art.
FIG. 1 is a first schematic diagram of an image processing model quantization method according to an embodiment of the present application;
FIG. 2 is a second schematic diagram of an image processing model quantization method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of one possible implementation of step S102 in the embodiment of the present application;
FIG. 4 is a schematic diagram of one possible implementation manner of step S103 in the embodiment of the present application;
FIG. 5 is a schematic diagram of one possible implementation of step S104 in the example of the present application;
FIG. 6 is a first schematic diagram of an apparatus for quantizing an image processing model according to an embodiment of the present application;
fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments that can be derived by one of ordinary skill in the art from the description herein are intended to be within the scope of the present disclosure.
First, terms of the present application are explained:
labeling: and identifying the target object in the picture in a picture frame mode.
Quantizing the reference picture: including the picture of the object, provides reference and correction data for the quantification tool.
Positive sample picture: the method comprises the steps of containing a picture of a target object and labeling the target object.
Negative sample pictures: the picture of the target object is not included, and the picture is used for preventing false recognition in a specific scene.
And (3) quantizing the test picture: and the picture set comprises a positive sample picture and a negative sample picture and is used for testing the performance of the quantization model.
An image processing model: and outputting files which can be used for recognition after deep learning training.
A storage point: in the training process of the image processing model, the image processing model is stored once every certain training times, the image processing model stored every time is called a storage point, and the training times of the image processing models of different storage points are different.
In the related art, a deep learning model to be quantized is first selected manually; a certain number of quantization reference pictures are then selected manually, and the deep learning model to be quantized is quantized with those reference pictures to obtain a quantization model. The selection of the deep learning model to be quantized is subjective: the model is chosen when its effect appears good, or the deep learning model with the best test result is selected based on the test results, or the deep learning model with the largest number of training iterations is selected as the model to be quantized.
However, the deep learning model with the best test result may perform poorly after quantization. The inventors found that, during training of a deep learning model, the distribution of its parameters has a certain randomness, so the accuracy of the quantization model is not positively correlated with the number of training iterations; more training does not necessarily yield a more accurate quantization model. As a result, the accuracy of a quantization model obtained in the related art by quantizing the trained deep learning model is random and not necessarily optimal, and a quantization model with high accuracy cannot be guaranteed.
In view of this, an embodiment of the present application provides an image processing model quantization method, referring to fig. 1, including:
s101, obtaining image processing models of a plurality of storage points to obtain a plurality of models to be quantized.
The image processing model quantization method according to the embodiment of the present application may be implemented by an electronic device, and in an example, the electronic device may be a smart phone, a personal computer, a server, or the like.
The image processing model in the present application is a model for image processing or target recognition obtained by deep learning training; for example, it may be used to recognize targets such as vehicles, buildings, animals, and plants in an image, or to perform scene classification or style transfer on an image. A storage point is a save point during training of the image processing model; the image processing models of different storage points have different numbers of training iterations, and the acquired image processing models of the plurality of storage points are referred to as the plurality of models to be quantized.
And S102, respectively carrying out model quantization on each model to be quantized to obtain each quantization model.
A model to be quantized that has undergone model quantization is called a quantization model. The quantization procedure itself may follow any model quantization method in the related art and is not specifically limited in the embodiments of the present application. In practice, a quantization tool can only effectively quantize parameters within a certain range; if the parameter distribution of a model to be quantized falls far outside the range the tool can handle, quantization fails. It can therefore be understood that each obtained quantization model is one for which quantization succeeded.
S103, obtaining a quantitative test picture, analyzing the quantitative test picture by using each quantitative model, and respectively obtaining the test result of each quantitative model.
For each quantization model, the quantization test picture is analyzed with that quantization model to obtain its test result. The test result may be a metric of the quantization model such as accuracy, error rate, detection rate, or false detection rate, and can be customized according to the actual situation.
And S104, determining a target quantization model based on the test result of each quantization model.
A target quantization model is determined based on the test result of each quantization model and is output as the quantization model of the image processing model. For example, if the test result includes accuracy, the quantization model with the highest accuracy may be selected as the target quantization model; if it includes detection rate, the quantization model with the highest detection rate may be selected; if it includes false detection rate, the quantization model with the lowest false detection rate may be selected.
The target quantization model inherits the functions of the image processing model and is used for image processing or target recognition. For example, if the image processing model is used for vehicle recognition, the target quantization model is also used for vehicle recognition; if the image processing model is used for image scene classification, so is the target quantization model. By quantizing the image processing models of multiple storage points from a single training run and determining the optimal target quantization model from the test results of the quantization models, the accuracy of image processing or target recognition by the target quantization model is higher; the problem that a single selected model yields a low-accuracy quantization model because of the random distribution of model parameters can be reduced, and the success rate of model quantization can be increased.
In one example, a preset algorithm may be used to select the best quantization model as the target quantization model. For example, the comprehensively optimal quantization model may be computed from the detection rate and false detection rate in the quantization model test results by an algorithm such as difference calculation, curvature analysis, variance analysis, or gradient analysis.
In a possible implementation, the determining a target quantization model based on the test result of each quantization model includes:
step one, aiming at each quantization model, weighting the detection rate and the false detection rate of the quantization model to obtain the weighted value of the quantization model.
The specific weighting calculation can be customized according to the actual situation. In one example, weighted value = detection rate - N × false detection rate, where N is a preset coefficient.
And step two, sequencing the weighted values of the quantization models according to the time sequence of the storage points corresponding to the quantization models to obtain a weighted value sequence.
The ordering may be in ascending or descending order; both fall within the protection scope of the present application.
Step three, acquiring a preset range threshold, and determining each weighted value of the selected numerical value within the preset range threshold in the weighted value sequence to obtain each target weighted value; and dividing each target weighted value sequenced continuously in the weighted value sequence into the same weighted value set to obtain each weighted value set.
The preset range threshold may be customized according to the actual situation and depends on how the weighted value is calculated; for example, it may be set to 85 to 90, 90 to 94, or 92 to 95.
And step four, selecting the appointed target weighted value from the weighted value set with the maximum number of the target weighted values, and taking the quantization model corresponding to the appointed target weighted value as the target quantization model.
The designated target weighted value is the extreme value in the weighted value set containing the largest number of target weighted values.
In one example, the preset range threshold is 90 to 94 and the weighted value sequence is 70, 93, 75, 82, 93, 92, 95, 89, 93, 92, 94, 99, 87. The target weighted values are then 93 at rank 2, 93 at rank 5, 92 at rank 6, 93 at rank 9, 92 at rank 10, and 94 at rank 11, giving two weighted value sets: set A, containing 93 at rank 5 and 92 at rank 6, and set B, containing 93 at rank 9, 92 at rank 10, and 94 at rank 11. The value 94 at rank 11, the largest value in weighted value set B, is then selected as the designated target weighted value, and the quantization model corresponding to it is the target quantization model.
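As a non-authoritative illustration of steps one to four and the example above, the following Python sketch weights each model, keeps the weights that fall inside a preset range, groups consecutive in-range weights into sets, and returns the extreme value of the largest set; the weighting coefficient and range bounds are assumptions chosen to match the example.

```python
# A minimal sketch of the weighted-value selection described above; n_coeff and
# the range bounds are illustrative assumptions, not values prescribed here.
def select_by_weighted_runs(results, n_coeff=1.0, low=90.0, high=94.0):
    # results: list of (storage_point, detection_rate, false_detection_rate),
    # already ordered by the time sequence of the storage points.
    weights = [(sp, dr - n_coeff * fdr) for sp, dr, fdr in results]

    runs, current = [], []
    for sp, w in weights:
        if low <= w <= high:
            current.append((sp, w))      # target weighted value: extend the run
        else:
            if current:
                runs.append(current)     # a run ends at an out-of-range value
                current = []
    if current:
        runs.append(current)
    if not runs:
        return None

    largest_run = max(runs, key=len)             # set with the most target values
    return max(largest_run, key=lambda t: t[1])  # its extreme (largest) value
```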
In the embodiments of the present application, quantizing the image processing models of multiple storage points from a single training run reduces the problem that a single selected model yields a low-accuracy quantization model because of the random distribution of model parameters, and increases the success rate of model quantization; moreover, no parameter tuning of the image processing model is required, manual workload is reduced, and a solid foundation is laid for batch model quantization.
In a possible implementation, referring to fig. 2, before the step of obtaining the image processing model of the plurality of storage points and obtaining the plurality of models to be quantized, the method further includes:
s201, training the image processing model by using the sample picture, and saving the current image processing model as the image processing model of the current storage point when the training times meet the preset storage condition.
The sample pictures include a positive sample picture and a negative sample picture, and the training process of the image processing model by using the sample pictures can refer to the training process of the image processing model in the related art, which is not specifically limited in the embodiment of the present application.
The preset storage condition may be customized according to the actual situation. In one example, the image processing model is saved as a storage point every preset number of training iterations; for example, models are saved at training iteration counts N, 2N, 3N, 4N, and so on, where N is the preset number.
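A minimal sketch of this storage-point saving rule, assuming a PyTorch-style training loop; the model, optimizer, loss function, data loader, and the value of save_every_n are placeholders for illustration, not part of the application.

```python
# Sketch of saving the image processing model at storage points every
# save_every_n training iterations; the surrounding training loop is an
# assumed PyTorch-style setup.
import torch

def train_with_storage_points(model, optimizer, loss_fn, data_loader,
                              total_iters, save_every_n, out_dir="checkpoints"):
    iteration = 0
    while iteration < total_iters:
        for images, labels in data_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
            iteration += 1
            # Preset storage condition: iteration count reaches N, 2N, 3N, ...
            if iteration % save_every_n == 0:
                torch.save(model.state_dict(),
                           f"{out_dir}/storage_point_{iteration}.pth")
            if iteration >= total_iters:
                break
```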
In a possible implementation manner, referring to fig. 3, the performing model quantization on each model to be quantized to obtain each quantization model includes:
s1021, a quantization reference picture and a quantization profile are obtained.
In one example, the quantization reference pictures are randomly selected positive sample pictures. Compared with manual selection of quantization reference pictures, random selection eliminates the inconsistency of subjective manual judgment. In one example, the quantization configuration file may include a model network file, a tag list file, a quantized picture list file, a quantization parameter configuration file, and the like.
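A hypothetical layout of such a quantization configuration, shown only to make the pieces concrete; the keys and file names are assumptions, not the schema of any particular quantization tool.

```python
# Hypothetical quantization configuration; every key and file name below is
# illustrative and not taken from a real tool's schema.
quantization_config = {
    "model_network_file": "model_network.prototxt",           # network definition
    "tag_list_file": "tags.txt",                               # label/tag list
    "quantized_picture_list_file": "reference_pictures.txt",   # reference picture list
    "quantization_parameter_file": "quant_params.cfg",         # tool parameters
}
```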
And S1022, model quantization is respectively carried out on each model to be quantized based on the quantization reference picture and the quantization configuration file, so that each quantization model is obtained.
Model quantization is performed on each model to be quantized based on the quantization configuration file, with the quantization reference pictures as reference, to obtain each quantization model.
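A sketch of this step, quantizing each model to be quantized with the same randomly selected reference pictures and configuration; quantize_model() and QuantizationError are hypothetical stand-ins for the quantization tool interface, which is not specified here.

```python
# Sketch of quantizing every model to be quantized with one shared set of
# randomly selected reference pictures; quantize_model() and QuantizationError
# are hypothetical placeholders for the actual quantization tool.
import random

def quantize_all(models_to_quantize, positive_sample_paths, quantization_config,
                 num_reference_pictures=100):
    reference_pictures = random.sample(positive_sample_paths,
                                       num_reference_pictures)
    quantized_models = {}
    for model_path in models_to_quantize:
        try:
            quantized_models[model_path] = quantize_model(
                model_path, reference_pictures, quantization_config)
        except QuantizationError:
            # Quantization fails when the parameter distribution of this
            # storage point falls outside the range the tool can handle.
            continue
    return quantized_models
```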
In a possible implementation manner, the test result includes a detection rate and a false detection rate; referring to fig. 4, the obtaining of the quantization test picture, analyzing the quantization test picture by using each quantization model, and obtaining the test result of each quantization model respectively includes:
and S1031, obtaining a quantitative test picture, wherein the quantitative test picture comprises a positive sample picture and a negative sample picture.
S1032, analyzing the positive sample picture in the quantitative test picture by using each quantitative model to respectively obtain the detection rate and the first false detection rate of each quantitative model.
And S1033, analyzing the negative sample picture in the quantization test picture by using each quantization model to respectively obtain a second false detection rate of each quantization model.
S1034, respectively obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model.
For any quantization model, the false detection rate of the quantization model is determined according to the first false detection rate and the second false detection rate of that quantization model; for example, the two rates may be combined by a weighted average. In one example, obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model includes:
step 1, aiming at each quantization model, determining the number of the false detection pictures of the quantization model according to the number of the positive sample pictures, the number of the negative sample pictures, the first false detection rate and the second false detection rate of the quantization model in the quantization test pictures.
And 2, determining the false detection rate of the quantization model according to the number of the false detection pictures of the quantization model, the number of the positive sample pictures and the number of the negative sample pictures in the quantization test picture.
For example, taking quantization model 1: if the number of positive sample pictures in the quantization test pictures is a, the number of negative sample pictures is b, the first false detection rate of quantization model 1 is x, and its second false detection rate is y, then the false detection rate z of quantization model 1 can be expressed as z = (a·x + b·y) / (a + b).
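The same computation written out as a short sketch; the argument names mirror a, b, x, and y in the example above.

```python
# Sketch of the overall false detection rate z = (a*x + b*y) / (a + b).
def overall_false_detection_rate(a, x, b, y):
    # a: number of positive sample pictures, x: first false detection rate
    # b: number of negative sample pictures, y: second false detection rate
    false_detection_pictures = a * x + b * y   # number of falsely detected pictures
    return false_detection_pictures / (a + b)
```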
In a possible embodiment, referring to fig. 5, the determining a target quantization model based on the test result of each quantization model includes:
and S1041, acquiring a preset detection rate threshold value and a preset false detection rate threshold value.
The preset detection rate threshold may be customized according to the actual situation, for example 85%, 90%, or 95%; the preset false detection rate threshold may likewise be customized, for example 10%, 5%, or 3%.
S1042, filtering out the quantization models with the detection rate smaller than a preset detection rate threshold value and filtering out the quantization models with the false detection rate larger than a preset false detection rate threshold value from the quantization models.
And S1043, selecting a target quantization model from the filtered quantization models.
In one example, the preset false detection rate threshold includes a positive sample false detection rate threshold, a negative sample false detection rate threshold, and a total false detection rate threshold. In this case, filtering out quantization models whose detection rate is below the preset detection rate threshold and quantization models whose false detection rate is above the preset false detection rate threshold includes: filtering out, among the quantization models, those whose detection rate is below the preset detection rate threshold, those whose false detection rate is above the total false detection rate threshold, those whose first false detection rate is above the positive sample false detection rate threshold, and those whose second false detection rate is above the negative sample false detection rate threshold.
A model may be selected at random from the filtered quantization models as the target quantization model, or the model with the highest detection rate or the lowest false detection rate among the filtered quantization models may be selected. In one example, a preset algorithm may be used to select the best quantization model from the filtered quantization models as the target quantization model; for example, the comprehensively optimal quantization model may be computed from the detection rate and false detection rate in the test results by an algorithm such as difference calculation, curvature analysis, variance analysis, or gradient analysis.
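A sketch of the filtering and selection just described, under the assumption that the target model is taken as the survivor with the highest detection rate, which is one of the options mentioned above; the threshold values are illustrative.

```python
# Sketch of filtering by preset thresholds and selecting a target quantization
# model; the threshold values and the "highest detection rate" tie-break are
# illustrative assumptions.
def select_target_model(test_results, min_detection_rate=0.90,
                        max_false_detection_rate=0.05):
    # test_results: {model_id: {"detection_rate": d, "false_detection_rate": f}}
    survivors = {
        model_id: r for model_id, r in test_results.items()
        if r["detection_rate"] >= min_detection_rate
        and r["false_detection_rate"] <= max_false_detection_rate
    }
    if not survivors:
        return None
    return max(survivors, key=lambda m: survivors[m]["detection_rate"])
```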
The embodiment of the present application further provides an image processing model quantization apparatus, including:
the device comprises a quantitative reference picture acquisition module, a quantitative reference picture acquisition module and a quantitative comparison module, wherein the quantitative reference picture acquisition module is used for pulling a specified number of quantitative test pictures from a first storage position, and the quantitative test pictures are positive sample pictures containing a target object; in one example, a specified number of quantized test pictures may be obtained by random acquisition without human intervention. Eliminating artificial subjective influence factors.
And the quantitative test picture acquisition module is used for pulling a specified number of quantitative test pictures from the second storage position, wherein the quantitative test pictures comprise positive sample pictures and negative sample pictures, so that the detection rate of the quantitative model can be evaluated and the false detection rate of the quantitative model can be evaluated during testing.
And the model to be quantized obtaining module is used for pulling the image processing models of all the storage points from the third storage position to be used as the model to be quantized.
And the quantization configuration file generation module is used for generating a quantization configuration file necessary for quantization, wherein the quantization configuration file comprises a model network file, a tag list file, a quantization picture list file, a quantization parameter configuration file and the like.
And the quantization processing module is used for calling a quantization server, respectively carrying out model quantization on the models to be quantized one by one on the basis of the quantization configuration file and the quantization reference picture, and obtaining and outputting the quantization models.
And the quantitative test module is used for calling the quantitative models one by one, analyzing and testing the quantitative test pictures on the quantitative test server, and respectively storing the test result of each quantitative model, wherein the test result comprises a detection rate and a false detection rate.
And the quantization completion detection module is used for judging whether all the models to be quantized are completely quantized, continuing to operate the quantization processing module and the quantization testing module if not, and starting the quantization result counting module if all the models are completely quantized.
And the quantitative result statistical module is used for screening the quantitative models by utilizing a preset algorithm based on the test results of the quantitative models, selecting the optimal target quantitative model and outputting the optimal target quantitative model. In one example, the predetermined algorithm includes, but is not limited to, difference calculation, curvature analysis, variance analysis, gradient analysis, and the like.
The image processing model quantization device provided by the embodiments of the present application can be applied to quantization for an NNIE (Neural Network Inference Engine), and to engineering scenarios where large numbers of model quantization tasks must produce quantization models in batches; it can produce quantization models efficiently. Compared with manual selection of quantization reference pictures, programmatic random selection removes manual intervention and the inconsistency of subjective human judgment. In practice, because the parameter distribution of the model at each storage point during training is random and model quantization can only effectively quantize parameters within a certain range, quantization fails if the parameter distribution of a model to be quantized falls far outside the range the tool can handle, and succeeds if the distribution lies within a suitable range. The present application quantizes the models to be quantized of all storage points, so that a plurality of up-to-standard quantization models can be obtained from the image processing models generated by a single training run, without parameter tuning.
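A sketch tying the modules above together into one batch pipeline; every pull_*, generate_*, quantize_*, test_*, and select_* function is a hypothetical placeholder for the corresponding module or server interface, not a real API.

```python
# Hypothetical end-to-end pipeline over the modules described above; all of
# the called functions are placeholders.
def run_batch_quantization(num_reference_pictures=100):
    reference_pictures = pull_random_positive_samples(num_reference_pictures)
    test_pictures = pull_quantization_test_pictures()      # positive + negative
    models_to_quantize = pull_all_storage_point_models()
    config = generate_quantization_config()

    test_results = {}
    for model_path in models_to_quantize:
        quantized = quantize_model(model_path, reference_pictures, config)
        if quantized is None:
            continue                                        # quantization failed
        test_results[quantized] = test_quantized_model(quantized, test_pictures)

    # Screen with the preset algorithm and output the optimal target model.
    return select_best_quantized_model(test_results)
```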
An embodiment of the present application further provides an image processing model quantization apparatus, see fig. 6, including:
a model to be quantized obtaining module 61, configured to obtain image processing models of multiple storage points, and obtain multiple models to be quantized;
a model quantization module 62, configured to perform model quantization on each to-be-quantized model respectively to obtain each quantized model;
the model testing module 63 is configured to obtain a quantitative test picture, analyze the quantitative test picture by using each quantitative model, and obtain a test result of each quantitative model;
and a target quantization model determining module 64, configured to determine a target quantization model based on the test result of each of the quantization models.
In a possible embodiment, the apparatus further comprises:
and the model training module is used for training the image processing model by using the sample picture, and saving the current image processing model as the image processing model of the current storage point when the training times meet the preset storage condition.
In a possible implementation, the model quantization module is specifically configured to:
acquiring a quantization reference picture and a quantization configuration file, wherein the quantization reference picture is a randomly selected positive sample picture;
and model quantization is respectively carried out on each model to be quantized based on the quantization reference picture and the quantization configuration file to obtain each quantization model.
In a possible implementation manner, the test result includes a detection rate and a false detection rate; the model test module comprises:
the quantitative test picture acquisition sub-module is used for acquiring a quantitative test picture, wherein the quantitative test picture comprises a positive sample picture and a negative sample picture;
the positive sample picture analysis submodule is used for analyzing the positive sample picture in the quantitative test picture by utilizing each quantitative model to respectively obtain the detection rate and the first false detection rate of each quantitative model;
the negative sample picture analysis submodule is used for analyzing the negative sample picture in the quantitative test picture by utilizing each quantitative model to respectively obtain a second false detection rate of each quantitative model;
the false detection rate determining submodule is used for respectively obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model;
in a possible implementation manner, the false detection rate determining sub-module is specifically configured to determine, for each quantization model, the number of false detection pictures of the quantization model according to the number of positive sample pictures, the number of negative sample pictures, and the first false detection rate and the second false detection rate of the quantization model; and determining the false detection rate of the quantization model according to the number of the false detection pictures of the quantization model, the number of the positive sample pictures and the number of the negative sample pictures in the quantization test picture.
In a possible implementation manner, the target quantization model determination module is specifically configured to:
acquiring a preset detection rate threshold value and a preset false detection rate threshold value;
filtering out quantization models with a detection rate smaller than a preset detection rate threshold value and filtering out quantization models with a false detection rate larger than a preset false detection rate threshold value in each quantization model;
and selecting a target quantization model from the filtered quantization models.
In a possible implementation manner, the target quantization model determination module is specifically configured to:
weighting the detection rate and the false detection rate of each quantization model to obtain the weighted value of the quantization model;
sequencing the weighted values of the quantization models according to the time sequence of the storage points corresponding to the quantization models to obtain a weighted value sequence;
acquiring a preset range threshold, and determining each weighted value of the selected numerical value within the preset range threshold in the weighted value sequence to obtain each target weighted value; dividing each target weighted value sequenced continuously in the weighted value sequence into the same weighted value set to obtain each weighted value set;
and selecting a specified target weighted value from the weighted value set with the maximum number of the target weighted values, and taking the quantization model corresponding to the specified target weighted value as a target quantization model.
An embodiment of the present application further provides an electronic device, including: a processor and a memory;
the memory is used for storing computer programs;
the processor is configured to implement the image processing model quantization method described in any part of the present application when executing the computer program stored in the memory.
Optionally, referring to fig. 7, the electronic device according to the embodiment of the present application further includes a communication interface 72 and a communication bus 74, where the processor 71, the communication interface 72, and the memory 73 complete communication with each other through the communication bus 74.
The communication bus mentioned in the electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present application further provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the image processing model quantization method described in any part of the present application.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the image processing model quantization method described in any part of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It should be noted that, in this document, the technical features of the various alternatives can be combined into further solutions as long as they are not contradictory, and such solutions fall within the scope of the disclosure of the present application. Relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, and the storage medium, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.
Claims (11)
1. An image processing model quantization method, comprising:
acquiring image processing models of a plurality of storage points to obtain a plurality of models to be quantized;
respectively carrying out model quantization on each model to be quantized to obtain each quantization model;
obtaining a quantitative test picture, analyzing the quantitative test picture by using each quantitative model, and respectively obtaining a test result of each quantitative model;
and determining a target quantization model based on the test result of each quantization model.
2. The method of claim 1, wherein prior to the step of obtaining the image processing model for the plurality of storage points to obtain the plurality of models to be quantized, the method further comprises:
and training the image processing model by using the sample picture, and saving the current image processing model as the image processing model of the current storage point when the training times meet the preset storage condition.
3. The method according to claim 1, wherein the performing model quantization on each model to be quantized to obtain each quantization model comprises:
acquiring a quantization reference picture and a quantization configuration file, wherein the quantization reference picture is a randomly selected positive sample picture;
and model quantization is respectively carried out on each model to be quantized based on the quantization reference picture and the quantization configuration file to obtain each quantization model.
4. The method of claim 1, wherein the test results comprise a detection rate and a false detection rate;
the obtaining of the quantization test picture, analyzing the quantization test picture by using each quantization model, and obtaining the test result of each quantization model respectively includes:
obtaining a quantization test picture, wherein the quantization test picture comprises a positive sample picture and a negative sample picture;
analyzing the positive sample picture in the quantization test picture by using each quantization model to respectively obtain the detection rate and a first false detection rate of each quantization model;
analyzing the negative sample picture in the quantization test picture by using each quantization model to respectively obtain a second false detection rate of each quantization model;
and respectively obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model.
5. The method of claim 4, wherein the obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model respectively comprises:
for each quantization model, determining the number of false detection pictures of the quantization model according to the number of positive sample pictures, the number of negative sample pictures, the first false detection rate and the second false detection rate of the quantization model in the quantization test pictures;
and determining the false detection rate of the quantization model according to the number of the false detection pictures of the quantization model, the number of the positive sample pictures and the number of the negative sample pictures in the quantization test picture.
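One plausible reading of the metric computation in claims 4 and 5 is sketched below; the claims do not fix the exact combination formula, so estimating the number of false detection pictures from the two partial rates is an assumption made for illustration.

```python
def combined_false_detection_rate(num_positive: int, num_negative: int,
                                  first_false_rate: float,
                                  second_false_rate: float) -> float:
    """Estimate the number of false detection pictures from the first false detection
    rate (measured on positive sample pictures) and the second false detection rate
    (measured on negative sample pictures), then divide by the total number of
    quantization test pictures."""
    false_on_positive = first_false_rate * num_positive    # false detections among positives
    false_on_negative = second_false_rate * num_negative   # false detections among negatives
    num_false_pictures = false_on_positive + false_on_negative
    return num_false_pictures / (num_positive + num_negative)

# Illustrative numbers only: 800 positive and 200 negative test pictures,
# 1% false detections on positives and 5% on negatives.
print(combined_false_detection_rate(800, 200, 0.01, 0.05))  # -> 0.018
```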
6. The method of claim 4, wherein determining a target quantization model based on the test results of each of the quantization models comprises:
acquiring a preset detection rate threshold value and a preset false detection rate threshold value;
filtering out, from the quantization models, the quantization models whose detection rate is smaller than the preset detection rate threshold value and the quantization models whose false detection rate is larger than the preset false detection rate threshold value;
and selecting a target quantization model from the filtered quantization models.
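The following sketch illustrates the threshold filtering of claim 6. The final choice of the highest-detection-rate model among the remaining candidates is an added assumption, since claim 6 leaves the selection rule among the filtered models open.

```python
def filter_by_thresholds(test_results, detection_rate_threshold: float,
                         false_detection_rate_threshold: float):
    """Drop quantization models whose detection rate is below the preset detection rate
    threshold or whose false detection rate is above the preset false detection rate
    threshold, then select a target model among the rest (assumed rule: highest
    detection rate)."""
    kept = [(name, det, fdr) for name, det, fdr in test_results
            if det >= detection_rate_threshold and fdr <= false_detection_rate_threshold]
    if not kept:
        return None
    return max(kept, key=lambda r: r[1])

# Example entries: (model name, detection rate, false detection rate), figures invented.
results = [("ckpt_1000", 0.90, 0.030), ("ckpt_2000", 0.94, 0.015), ("ckpt_3000", 0.95, 0.060)]
print(filter_by_thresholds(results, 0.92, 0.02))  # -> ('ckpt_2000', 0.94, 0.015)
```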
7. The method of claim 4, wherein determining a target quantization model based on the test results of each of the quantization models comprises:
weighting the detection rate and the false detection rate of each quantization model to obtain the weighted value of the quantization model;
sorting the weighted values of the quantization models according to the time order of the storage points corresponding to the quantization models to obtain a weighted value sequence;
acquiring a preset range threshold, and determining, in the weighted value sequence, the weighted values whose numerical values fall within the preset range threshold to obtain target weighted values; dividing target weighted values that are consecutive in the weighted value sequence into a same weighted value set to obtain weighted value sets;
and selecting a specified target weighted value from the weighted value set with the maximum number of the target weighted values, and taking the quantization model corresponding to the specified target weighted value as a target quantization model.
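A sketch of the weighted-value selection of claim 7 follows. The weighting function, the reading of the preset range threshold as "within range_threshold of the best weighted value", and the choice of the largest value inside the largest set are all assumptions made for illustration; the claim fixes none of them.

```python
from itertools import groupby

def select_by_weighted_values(results, w_det: float, w_fdr: float, range_threshold: float):
    """`results` holds (storage_point_step, detection_rate, false_detection_rate)
    per quantization model."""
    # Weight detection rate against false detection rate to get one value per model,
    # ordered by the time order of the storage points.
    ordered = sorted(results, key=lambda r: r[0])
    sequence = [(step, w_det * det - w_fdr * fdr) for step, det, fdr in ordered]

    # Mark, position by position, the weighted values that fall within the range threshold.
    best = max(v for _, v in sequence)
    is_target = [v >= best - range_threshold for _, v in sequence]

    # Group consecutive target positions into weighted value sets.
    sets, index = [], 0
    for flag, run in groupby(is_target):
        run = list(run)
        if flag:
            sets.append(sequence[index:index + len(run)])
        index += len(run)

    # Pick a specified value (here: the largest) from the set with the most target values.
    largest_set = max(sets, key=len)
    return max(largest_set, key=lambda sv: sv[1])

# Toy figures only; selects the storage-point-3000 model in this example.
results = [(1000, 0.90, 0.03), (2000, 0.94, 0.02), (3000, 0.95, 0.02), (4000, 0.93, 0.05)]
print(select_by_weighted_values(results, w_det=1.0, w_fdr=1.0, range_threshold=0.03))
```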
8. An image processing model quantization apparatus, comprising:
the model to be quantized acquiring module is used for acquiring image processing models of a plurality of storage points to obtain a plurality of models to be quantized;
the model quantization module is used for respectively carrying out model quantization on each model to be quantized to obtain each quantization model;
the model testing module is used for obtaining a quantization test picture, analyzing the quantization test picture by using each quantization model, and respectively obtaining the test result of each quantization model;
and the target quantization model determining module is used for determining a target quantization model based on the test result of each quantization model.
9. The apparatus of claim 8, further comprising:
the model training module is used for training the image processing model by using the sample picture, and storing the current image processing model as the image processing model of the current storage point when the training times meet the preset storage condition;
the model quantization module is specifically configured to: acquiring a quantization reference picture and a quantization configuration file, wherein the quantization reference picture is a randomly selected positive sample picture; model quantization is respectively carried out on each model to be quantized based on the quantization reference picture and the quantization configuration file to obtain each quantization model;
the test result comprises a detection rate and a false detection rate; the model test module comprises:
the quantization test picture obtaining sub-module is used for obtaining a quantization test picture, wherein the quantization test picture comprises a positive sample picture and a negative sample picture;
the positive sample picture analysis sub-module is used for analyzing the positive sample picture in the quantization test picture by using each quantization model to respectively obtain the detection rate and a first false detection rate of each quantization model;
the negative sample picture analysis sub-module is used for analyzing the negative sample picture in the quantization test picture by using each quantization model to respectively obtain a second false detection rate of each quantization model;
the false detection rate determining submodule is used for respectively obtaining the false detection rate of each quantization model according to the first false detection rate and the second false detection rate of each quantization model;
the false detection rate determining sub-module is specifically configured to determine, for each quantization model, the number of false detection pictures of the quantization model according to the number of positive sample pictures, the number of negative sample pictures, a first false detection rate and a second false detection rate of the quantization model in the quantization test pictures; determining the false detection rate of the quantization model according to the number of false detection pictures of the quantization model, the number of positive sample pictures and the number of negative sample pictures in the quantization test picture;
the target quantization model determination module is specifically configured to: acquiring a preset detection rate threshold value and a preset false detection rate threshold value; filtering out, from the quantization models, the quantization models whose detection rate is smaller than the preset detection rate threshold value and the quantization models whose false detection rate is larger than the preset false detection rate threshold value; and selecting a target quantization model from the filtered quantization models;
the target quantization model determination module is specifically configured to: weighting the detection rate and the false detection rate of each quantization model to obtain the weighted value of the quantization model; sorting the weighted values of the quantization models according to the time order of the storage points corresponding to the quantization models to obtain a weighted value sequence; acquiring a preset range threshold, and determining, in the weighted value sequence, the weighted values whose numerical values fall within the preset range threshold to obtain target weighted values; dividing target weighted values that are consecutive in the weighted value sequence into a same weighted value set to obtain weighted value sets; and selecting a specified target weighted value from the weighted value set with the maximum number of target weighted values, and taking the quantization model corresponding to the specified target weighted value as the target quantization model.
10. An electronic device comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the method of any one of claims 1 to 7 when executing the program stored in the memory.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111076409.8A CN113762403B (en) | 2021-09-14 | 2021-09-14 | Image processing model quantization method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113762403A true CN113762403A (en) | 2021-12-07 |
CN113762403B CN113762403B (en) | 2023-09-05 |
Family
ID=78795681
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111076409.8A Active CN113762403B (en) | 2021-09-14 | 2021-09-14 | Image processing model quantization method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113762403B (en) |
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102147866A (en) * | 2011-04-20 | 2011-08-10 | 上海交通大学 | Target identification method based on training Adaboost and support vector machine |
US20190251444A1 (en) * | 2018-02-14 | 2019-08-15 | Google Llc | Systems and Methods for Modification of Neural Networks Based on Estimated Edge Utility |
US20210117869A1 (en) * | 2018-03-29 | 2021-04-22 | Benevolentai Technology Limited | Ensemble model creation and selection |
US20190340492A1 (en) * | 2018-05-04 | 2019-11-07 | Microsoft Technology Licensing, Llc | Design flow for quantized neural networks |
CN110929837A (en) * | 2018-09-19 | 2020-03-27 | 北京搜狗科技发展有限公司 | Neural network model compression method and device |
US20210192349A1 (en) * | 2018-09-21 | 2021-06-24 | Huawei Technologies Co., Ltd. | Method and apparatus for quantizing neural network model in device |
EP3857453A1 (en) * | 2019-02-08 | 2021-08-04 | Huawei Technologies Co., Ltd. | Neural network quantization method using multiple refined quantized kernels for constrained hardware deployment |
WO2020165629A1 (en) * | 2019-02-13 | 2020-08-20 | Mipsology SAS | Quality monitoring and hidden quantization in artificial neural network computations |
CN110348562A (en) * | 2019-06-19 | 2019-10-18 | 北京迈格威科技有限公司 | The quantization strategy of neural network determines method, image-recognizing method and device |
WO2021077744A1 (en) * | 2019-10-25 | 2021-04-29 | 浪潮电子信息产业股份有限公司 | Image classification method, apparatus and device, and computer readable storage medium |
US20210125066A1 (en) * | 2019-10-28 | 2021-04-29 | Lightmatter, Inc. | Quantized architecture search for machine learning models |
CN111860095A (en) * | 2020-03-23 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | State detection model training method and device and state detection method and device |
CN111639745A (en) * | 2020-05-13 | 2020-09-08 | 北京三快在线科技有限公司 | Data processing method and device |
CN111598237A (en) * | 2020-05-21 | 2020-08-28 | 上海商汤智能科技有限公司 | Quantization training method, image processing device, and storage medium |
CN111797984A (en) * | 2020-06-17 | 2020-10-20 | 宁波物栖科技有限公司 | Quantification and hardware acceleration method and device for multitask neural network |
CN111767833A (en) * | 2020-06-28 | 2020-10-13 | 北京百度网讯科技有限公司 | Model generation method and device, electronic equipment and storage medium |
CN111967491A (en) * | 2020-06-29 | 2020-11-20 | 北京百度网讯科技有限公司 | Model offline quantization method and device, electronic equipment and storage medium |
CN111860405A (en) * | 2020-07-28 | 2020-10-30 | Oppo广东移动通信有限公司 | Quantification method and device of image recognition model, computer equipment and storage medium |
CN112183742A (en) * | 2020-09-03 | 2021-01-05 | 南强智视(厦门)科技有限公司 | Neural network hybrid quantization method based on progressive quantization and Hessian information |
Non-Patent Citations (3)
Title |
---|
YIN Wenfeng et al.: "Research Progress on Compression and Acceleration Techniques for Convolutional Neural Networks", 计算机系统应用 (Computer Systems & Applications), vol. 29, no. 09, pages 16-25 * |
ZHU Jianying; XIA Zhelei; YIN Haibing; HUA Qiang: "Neural-Network-Based Quantization Parameter Selection Algorithm for Video Coding", 电视技术 (Video Engineering), vol. 36, no. 19, pages 40-43 * |
XIAO Guolin et al.: "Convolutional Neural Network Quantization Algorithm Based on Weight Interaction", 电子技术应用 (Application of Electronic Technique), vol. 46, no. 10, pages 39-41 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114463447A (en) * | 2021-12-28 | 2022-05-10 | 浙江大华技术股份有限公司 | Image processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113762403B (en) | 2023-09-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110856037B (en) | Video cover determination method and device, electronic equipment and readable storage medium | |
EP3620982B1 (en) | Sample processing method and device | |
CN110874604A (en) | Model training method and terminal equipment | |
CN109816043B (en) | Method and device for determining user identification model, electronic equipment and storage medium | |
CN110929785A (en) | Data classification method and device, terminal equipment and readable storage medium | |
CN110647974A (en) | Network layer operation method and device in deep neural network | |
CN113902944A (en) | Model training and scene recognition method, device, equipment and medium | |
CN111783812A (en) | Method and device for identifying forbidden images and computer readable storage medium | |
CN114091594A (en) | Model training method and device, equipment and storage medium | |
CN113762403B (en) | Image processing model quantization method, device, electronic equipment and storage medium | |
EP4343616A1 (en) | Image classification method, model training method, device, storage medium, and computer program | |
CN111404835A (en) | Flow control method, device, equipment and storage medium | |
CN113762382B (en) | Model training and scene recognition method, device, equipment and medium | |
CN113283388A (en) | Training method, device and equipment of living human face detection model and storage medium | |
CN112800813B (en) | Target identification method and device | |
CN111582446B (en) | System for neural network pruning and neural network pruning processing method | |
CN115546554A (en) | Sensitive image identification method, device, equipment and computer readable storage medium | |
CN112598020A (en) | Target identification method and system | |
CN114139678A (en) | Convolutional neural network quantization method and device, electronic equipment and storage medium | |
CN113066486A (en) | Data identification method and device, electronic equipment and computer readable storage medium | |
CN112329715A (en) | Face recognition method, device, equipment and storage medium | |
CN112463964A (en) | Text classification and model training method, device, equipment and storage medium | |
CN116610806B (en) | AI-based RPA digital service processing method and computer equipment | |
CN114626524B (en) | Target service network determining method, service processing method and device | |
CN117892301B (en) | Classification method, device, equipment and medium for few-sample malicious software |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||