CN108416187A - Method and device for determining a pruning threshold, and model pruning method and device - Google Patents

Method and device for determining a pruning threshold, and model pruning method and device

Info

Publication number
CN108416187A
CN108416187A (application CN201810488059.8A)
Authority
CN
China
Prior art keywords
convolution kernel
convolutional layer
current
index position
pruning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810488059.8A
Other languages
Chinese (zh)
Inventor
高岩
于治楼
姜凯
段成德
李朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Inspur Hi Tech Investment and Development Co Ltd
Original Assignee
Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Hi Tech Investment and Development Co Ltd filed Critical Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority to CN201810488059.8A priority Critical patent/CN108416187A/en
Publication of CN108416187A publication Critical patent/CN108416187A/en
Priority to PCT/CN2018/113895 priority patent/WO2019223250A1/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Z INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
    • G16Z99/00 Subject matter not provided for in other main groups of this subclass

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides a method and device for determining a pruning threshold, and a model pruning method and device. The method for determining a pruning threshold includes: for a current convolutional layer of a preset model, determining at least one convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer, wherein the current convolutional layer is any convolutional layer of the preset model; obtaining a cumulative distribution function according to the weight value at each convolution kernel index position; substituting the preset model compression rate into the cumulative distribution function as the dependent variable, and determining the resulting value as the pruning threshold of the current convolutional layer. The scheme determines a threshold suited to each layer based on that layer's weight distribution, and is therefore beneficial for optimizing the pruning effect.

Description

Method and device for determining a pruning threshold, and model pruning method and device
Technical field
The present invention relates to the field of computer technology, and more particularly to a method and device for determining a pruning threshold, and a model pruning method and device.
Background technology
With the development and maturation of artificial intelligence technology, more and more deep neural network models are designed, trained, and deployed in different application scenarios. To reduce the computational complexity of a model, the trained neural network can be compressed by pruning. Thresholding is the most common pruning strategy: it cuts off unimportant connections between neurons in the model so as to reduce model complexity.
Currently, a fixed threshold is typically predetermined and applied to all layers of the model. In general, when the threshold is too large, the compression ratio improves but the accuracy loss also increases; when the threshold is too small, the compression effect is not obvious.
Since the weight distribution differs from layer to layer, the existing fixed-threshold implementation cannot guarantee the pruning effect.
Summary of the invention
The present invention provides a method and device for determining a pruning threshold, and a model pruning method and device, which are beneficial for optimizing the pruning effect.
In order to achieve the above object, the present invention is achieved through the following technical solutions:
In a first aspect, the present invention provides a method for determining a pruning threshold, including:
for the current convolutional layer of a preset model, determining at least one convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer, wherein the current convolutional layer is any convolutional layer of the preset model;
obtaining a cumulative distribution function according to the weight value at each of the convolution kernel index positions;
substituting the preset model compression rate into the cumulative distribution function as the dependent variable, and determining the resulting value as the pruning threshold of the current convolutional layer.
Further, the convolution kernel combination satisfies formula (1):

Formula (1): F_i ∈ ℝ^(K×C×R×S)

where F_i is the convolution kernel combination of the i-th convolutional layer, 1 ≤ i ≤ L, i is an integer, L is the number of convolutional layers of the preset model, ℝ is the real number field, K is the number of convolution kernels in the convolution kernel combination, C is the number of channels of the convolution kernels of the current convolutional layer, R is the kernel height of the current convolutional layer, and S is the kernel width of the current convolutional layer;
The method further comprises: calculating, using formula (2), the weight value at each of the convolution kernel index positions;

Formula (2): x = F_i(k, c, r, s)

where (k, c, r, s) is any one of the at least one convolution kernel index position, 1 ≤ k ≤ K, 1 ≤ c ≤ C, 1 ≤ r ≤ R, 1 ≤ s ≤ S, F_i(k, c, r, s) is the connection weight function taking the convolution kernel index position (k, c, r, s) as its independent variable, and x is the weight value at the convolution kernel index position (k, c, r, s).
In a second aspect, the present invention provides a model pruning method, including:
S1: for the first convolutional layer among at least two convolutional layers of a preset model, determining that the first convolutional layer is the current convolutional layer, wherein the convolution kernel combination of any convolutional layer corresponds to at least one convolution kernel index position;
S2: for the current convolutional layer of the preset model, determining at least one target convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer; obtaining a cumulative distribution function according to the weight value at each target convolution kernel index position; substituting the preset model compression rate into the cumulative distribution function as the dependent variable, and determining the resulting value as the pruning threshold of the current convolutional layer;
S3: for each target convolution kernel index position, judging whether the weight value at the current target convolution kernel index position exceeds the pruning threshold of the current convolutional layer; if it does not, setting the weight value at the current target convolution kernel index position to 0 and fixing it; otherwise, fixing the weight value at the current target convolution kernel index position;
S4: judging whether the current convolutional layer is the last of the at least two convolutional layers; if so, ending the current process; otherwise, executing S5;
S5: inputting a preset validation data set into the preset model to fine-tune the preset model, determining the next convolutional layer after the current convolutional layer as the new current convolutional layer, and executing S2.
Further, the convolution kernel combination of any convolutional layer includes at least one convolution kernel, and each convolution kernel corresponds to at least one convolution kernel index position;
in S4, upon judging that the current convolutional layer is the last of the at least two convolutional layers and before ending the current process, the method further comprises:
A1: for each convolutional layer of the preset model, and for each convolution kernel included in the convolution kernel combination of the current convolutional layer, judging whether the weight value at every convolution kernel index position corresponding to the current convolution kernel is 0, and if so, deleting the current convolution kernel;
A2: updating the structure description file and the weight file of the preset model according to the current preset model.
Further, before S1, the method further comprises: dividing a preset data set into a training data set, a test data set, and the validation data set, and training a CNN (Convolutional Neural Network) model as the preset model using the training data set and the test data set.
In a third aspect, the present invention provides a device for determining a pruning threshold, including:
a function generation unit, configured to determine, for the current convolutional layer of a preset model, at least one convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer, wherein the current convolutional layer is any convolutional layer of the preset model;
a processing unit, configured to obtain a cumulative distribution function according to the weight value at each convolution kernel index position;
a threshold generation unit, configured to substitute the preset model compression rate into the cumulative distribution function as the dependent variable, and determine the resulting value as the pruning threshold of the current convolutional layer.
Further, the convolution kernel combination satisfies formula (1):

Formula (1): F_i ∈ ℝ^(K×C×R×S)

where F_i is the convolution kernel combination of the i-th convolutional layer, 1 ≤ i ≤ L, i is an integer, L is the number of convolutional layers of the preset model, ℝ is the real number field, K is the number of convolution kernels in the convolution kernel combination, C is the number of channels of the convolution kernels of the current convolutional layer, R is the kernel height of the current convolutional layer, and S is the kernel width of the current convolutional layer;
The processing unit is further configured to calculate, using formula (2), the weight value at each convolution kernel index position;

Formula (2): x = F_i(k, c, r, s)

where (k, c, r, s) is any one of the at least one convolution kernel index position, 1 ≤ k ≤ K, 1 ≤ c ≤ C, 1 ≤ r ≤ R, 1 ≤ s ≤ S, F_i(k, c, r, s) is the connection weight function taking the convolution kernel index position (k, c, r, s) as its independent variable, and x is the weight value at the convolution kernel index position (k, c, r, s).
In a fourth aspect, the present invention provides a model pruning device, including:
a determination unit, configured to determine, for the first convolutional layer among at least two convolutional layers of a preset model, that the first convolutional layer is the current convolutional layer, wherein the convolution kernel combination of any convolutional layer corresponds to at least one convolution kernel index position;
a pruning threshold determination unit, configured to determine, for the current convolutional layer of the preset model, at least one target convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer; obtain a cumulative distribution function according to the weight value at each target convolution kernel index position; substitute the preset model compression rate into the cumulative distribution function as the dependent variable; and determine the resulting value as the pruning threshold of the current convolutional layer;
a first pruning unit, configured to judge, for each target convolution kernel index position, whether the weight value at the current target convolution kernel index position exceeds the pruning threshold of the current convolutional layer; if it does not, set the weight value at the current target convolution kernel index position to 0 and fix it; otherwise, fix the weight value at the current target convolution kernel index position;
a judging unit, configured to judge, when the first pruning unit has finished executing, whether the current convolutional layer is the last of the at least two convolutional layers; if so, end the current process; otherwise, trigger a training unit;
the training unit, configured to input a preset validation data set into the preset model to fine-tune the preset model, determine the next convolutional layer after the current convolutional layer as the new current convolutional layer, and trigger the pruning threshold determination unit.
Further, the convolution kernel combination of any convolutional layer includes at least one convolution kernel, and each convolution kernel corresponds to at least one convolution kernel index position;
the model pruning device further includes a second pruning unit and an updating unit;
the judging unit is further configured to, upon judging that the current convolutional layer is the last of the at least two convolutional layers, trigger the second pruning unit before ending the current process;
the second pruning unit is configured to, for each convolutional layer of the preset model and for each convolution kernel included in the convolution kernel combination of the current convolutional layer, judge whether the weight value at every convolution kernel index position corresponding to the current convolution kernel is 0, and if so, delete the current convolution kernel;
the updating unit is configured to update the structure description file and the weight file of the preset model according to the current preset model.
Further, the training unit is additionally configured to divide a preset data set into a training data set, a test data set, and the validation data set, train a CNN model as the preset model using the training data set and the test data set, and trigger the determination unit.
The present invention provides a method and device for determining a pruning threshold, and a model pruning method and device. The method for determining a pruning threshold includes: for the current convolutional layer of a preset model, determining at least one convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer, wherein the current convolutional layer is any convolutional layer of the preset model; obtaining a cumulative distribution function according to the weight value at each convolution kernel index position; substituting the preset model compression rate into the cumulative distribution function as the dependent variable, and determining the resulting value as the pruning threshold of the current convolutional layer. The present invention determines a threshold suited to each layer based on that layer's weight distribution, and is therefore beneficial for optimizing the pruning effect.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a method for determining a pruning threshold provided by an embodiment of the invention;
Fig. 2 is a schematic diagram of a cumulative distribution function provided by an embodiment of the invention;
Fig. 3 is a flow chart of another method for determining a pruning threshold provided by an embodiment of the invention;
Fig. 4 is a flow chart of a model pruning method provided by an embodiment of the invention;
Fig. 5 is a flow chart of another model pruning method provided by an embodiment of the invention;
Fig. 6 is a schematic diagram of a device for determining a pruning threshold provided by an embodiment of the invention;
Fig. 7 is a schematic diagram of a model pruning device provided by an embodiment of the invention;
Fig. 8 is a schematic diagram of another model pruning device provided by an embodiment of the invention.
Detailed description of the embodiments
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention provides a method for determining a pruning threshold, which may include the following steps:
Step 101: for the current convolutional layer of a preset model, determine at least one convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer, wherein the current convolutional layer is any convolutional layer of the preset model.
Step 102: obtain a cumulative distribution function according to the weight value at each convolution kernel index position.
Step 103: substitute the preset model compression rate into the cumulative distribution function as the dependent variable, and determine the resulting value as the pruning threshold of the current convolutional layer.
An embodiment of the present invention thus provides a method for determining a pruning threshold: for the current convolutional layer of a preset model, at least one convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer is determined, wherein the current convolutional layer is any convolutional layer of the preset model; a cumulative distribution function is obtained according to the weight value at each convolution kernel index position; the preset model compression rate is substituted into the cumulative distribution function as the dependent variable, and the resulting value is determined as the pruning threshold of the current convolutional layer. Because the embodiment determines a threshold suited to each layer based on that layer's weight distribution, it is beneficial for optimizing the pruning effect.
In detail, the preset model may have several convolutional layers, for example at least two. The current convolutional layer may be any convolutional layer of the preset model. Since the weight distributions of different convolutional layers are typically different, the pruning threshold determined for each convolutional layer differs accordingly.
In detail, any convolutional layer may have several convolution kernels, which constitute the convolution kernel combination of that layer. Since each of these convolution kernels corresponds to at least one convolution kernel index position, the convolution kernel combination likewise corresponds to at least one convolution kernel index position.
In an embodiment of the invention, referring to Fig. 2, the cumulative distribution function obtained in step 102 may be as shown in Fig. 2. In Fig. 2, the independent variable is the weight value x and the dependent variable is the value P(x) of the cumulative distribution function. After the preset model compression rate r is substituted into the cumulative distribution function as the dependent variable, P(x) = r is solved to obtain x = P⁻¹(r). The resulting value is the determined pruning threshold.
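The threshold solving described above can be sketched as follows, a minimal illustration rather than the patent's implementation. It treats the per-layer threshold as the r-quantile of the layer's weights, which is exactly the x satisfying P(x) = r for the empirical CDF; taking absolute values of the weights is an assumption, since the text only says the CDF is built from the weight values.

```python
import numpy as np

def pruning_threshold(weights, compression_rate):
    """Empirical-CDF threshold for one convolutional layer.

    Solves P(x) = r for x, where P is the cumulative distribution
    function of the layer's weight magnitudes and r is the preset
    model compression rate.  Using |w| is an assumption.
    """
    magnitudes = np.abs(np.asarray(weights, dtype=float)).ravel()
    # The r-quantile of the magnitudes is exactly the x with P(x) = r.
    return float(np.quantile(magnitudes, compression_rate))

# Example: with r = 0.5, half of the weights fall at or below the threshold.
w = [0.01, -0.2, 0.05, 0.4, -0.03, 0.15, 0.3, -0.5]
t = pruning_threshold(w, 0.5)
```

A larger r yields a larger threshold and thus a higher compression ratio, matching the trade-off described in the background section.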
In an embodiment of the invention, the preset model may be a convolutional neural network model. As an important class of neural network models, convolutional neural networks are mainly used for problems related to computer vision, such as the image classification model VGG, the object detection model SSD, and the instance segmentation model Mask R-CNN.
In an embodiment of the invention, the convolution kernel combination satisfies formula (1):

F_i ∈ ℝ^(K×C×R×S) (1)

where F_i is the convolution kernel combination of the i-th convolutional layer, 1 ≤ i ≤ L, i is an integer, L is the number of convolutional layers of the preset model, ℝ is the real number field, K is the number of convolution kernels in the convolution kernel combination, C is the number of channels of the convolution kernels of the current convolutional layer, R is the kernel height of the current convolutional layer, and S is the kernel width of the current convolutional layer;
The method further comprises: calculating, using formula (2), the weight value at each convolution kernel index position;

x = F_i(k, c, r, s) (2)

where (k, c, r, s) is any one of the at least one convolution kernel index position, 1 ≤ k ≤ K, 1 ≤ c ≤ C, 1 ≤ r ≤ R, 1 ≤ s ≤ S, F_i(k, c, r, s) is the connection weight function taking the convolution kernel index position (k, c, r, s) as its independent variable, and x is the weight value at the convolution kernel index position (k, c, r, s).
In the embodiment of the present invention, there are the four dimensions K, C, R, and S. Any convolution kernel index position corresponds to one index value in each dimension, namely k, c, r, and s. The weight value x at the convolution kernel index position (k, c, r, s) is F_i(k, c, r, s), i.e., x = F_i(k, c, r, s).
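The 4-D indexing of formulas (1) and (2) maps directly onto a K×C×R×S array. The sketch below uses illustrative dimensions of my own choosing; note that the patent indexes from 1 while NumPy indexes from 0.

```python
import numpy as np

# A hypothetical layer with K=2 kernels, C=3 channels, and 4x5 kernels (R=4, S=5).
K, C, R, S = 2, 3, 4, 5
rng = np.random.default_rng(0)
F_i = rng.normal(size=(K, C, R, S))   # the kernel combination F_i in R^(KxCxRxS)

# Formula (2): the weight at index position (k, c, r, s) is x = F_i(k, c, r, s).
# The patent's indices run from 1, hence the -1 shift into NumPy's 0-based scheme.
def weight_at(F, k, c, r, s):
    return float(F[k - 1, c - 1, r - 1, s - 1])

x = weight_at(F_i, 1, 2, 3, 4)
```

Every one of the K·C·R·S index positions holds exactly one connection weight, which is why the CDF of step 102 can be built by flattening this array.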
In the embodiment of the present invention, the pruning threshold can be calculated adaptively; since the calculation depends on each layer's weight distribution, the problem of setting the threshold in threshold-based pruning is solved, which helps to ensure the accuracy of the compressed model.
Based on the above, as shown in Fig. 3, an embodiment of the invention provides another method for determining a pruning threshold, which determines the pruning threshold of the first convolutional layer of a CNN model and specifically includes the following steps:
Step 301: for the first convolutional layer of the CNN model, determine each convolution kernel index position corresponding to the convolution kernel combination of the first convolutional layer, wherein the convolution kernel combination satisfies formula (1) above.
Step 302: using formula (2) above, calculate the weight value at each convolution kernel index position.
Step 303: obtain a cumulative distribution function according to the calculated weight values.
Step 304: substitute the preset model compression rate into the cumulative distribution function as the dependent variable, and determine the resulting value as the pruning threshold of the first convolutional layer.
After the pruning threshold of the first convolutional layer is determined, the first convolutional layer can be pruned based on that threshold. Pruning the first convolutional layer changes the CNN model, so to preserve model accuracy the parameters of the pruned first convolutional layer are fixed and the CNN model is fine-tuned on the training data set. For the fine-tuned CNN model, the second convolutional layer then needs to be pruned in turn: as above, the pruning threshold of the second convolutional layer is determined, and the second convolutional layer is pruned based on the currently determined threshold. Of course, the pruned CNN model again needs to be fine-tuned to preserve accuracy. The cycle repeats until every convolutional layer of the CNN model has been pruned, completing the model pruning flow based on the optimized pruning thresholds.
Based on the above, referring to Fig. 4, an embodiment of the present invention provides a model pruning method, including:
Step 401: for the first convolutional layer among at least two convolutional layers of a preset model, determine that the first convolutional layer is the current convolutional layer, wherein the convolution kernel combination of any convolutional layer corresponds to at least one convolution kernel index position.
Step 402: for the current convolutional layer of the preset model, determine at least one target convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer; obtain a cumulative distribution function according to the weight value at each target convolution kernel index position; substitute the preset model compression rate into the cumulative distribution function as the dependent variable, and determine the resulting value as the pruning threshold of the current convolutional layer.
Step 403: for each target convolution kernel index position, judge whether the weight value at the current target convolution kernel index position exceeds the pruning threshold of the current convolutional layer; if it does not, set the weight value at the current target convolution kernel index position to 0 and fix it; otherwise, fix the weight value at the current target convolution kernel index position.
Step 404: judge whether the current convolutional layer is the last of the at least two convolutional layers; if so, end the current process; otherwise, execute step 405.
Step 405: input a preset validation data set into the preset model to fine-tune the preset model, determine the next convolutional layer after the current convolutional layer as the new current convolutional layer, and execute step 402.
This embodiment of the present invention provides a model pruning method in which the pruning threshold is calculated adaptively; the threshold calculation depends on each layer's weight distribution, solving the problem of setting the threshold in threshold-based pruning. At the same time, a layer-by-layer prune-and-fine-tune method is proposed, which preserves the accuracy of the compressed model. On this basis, the compressed model can reduce runtime memory and computing-power demands by roughly a factor of 10.
In an embodiment of the invention, in step 402 the convolution kernel combination of the current convolutional layer satisfies formula (1) above, and the weight value at each target convolution kernel index position is calculated using formula (2) above.
In the embodiment of the present invention, during the pruning of the preset model, the pruning threshold of each convolutional layer may be determined based on any of the above methods for determining a pruning threshold; for details, see the description in the related method embodiments, which is not repeated here.
In the embodiment of the present invention, for each convolutional layer that has been pruned, when the preset model is fine-tuned on the validation data set, the parameters of each pruned convolutional layer may be fixed, i.e., the learning rate of each pruned convolutional layer is set to 0. In this way, when the validation data set is input to fine-tune the model, only the convolutional layers that have not yet been pruned are trained.
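The freezing-by-zero-learning-rate idea above can be illustrated with a single gradient step; the `sgd_step` helper and the toy values are mine, used only to show that a layer with learning rate 0 is left untouched while the active layer is updated.

```python
import numpy as np

def sgd_step(weights, grads, learning_rates):
    """One gradient step with a per-layer learning rate.

    Setting a pruned layer's learning rate to 0, as the embodiment
    describes, freezes it: only layers not yet pruned are trained.
    """
    return [w - lr * g for w, g, lr in zip(weights, grads, learning_rates)]

pruned = np.array([0.0, 0.5])      # already-pruned layer, lr = 0
active = np.array([0.3, -0.2])     # current layer, still trainable
grads = [np.array([1.0, 1.0]), np.array([1.0, 1.0])]
new_w = sgd_step([pruned, active], grads, learning_rates=[0.0, 0.1])
```

After the step, the pruned layer's weights (including its fixed zeros) are unchanged, while the active layer has moved against its gradient.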
In an embodiment of the invention, the convolution kernel combination of any convolutional layer includes at least one convolution kernel, and each convolution kernel corresponds to at least one convolution kernel index position;
in step 404, upon judging that the current convolutional layer is the last of the at least two convolutional layers and before ending the current process, the method further comprises:
A1: for each convolutional layer of the preset model, and for each convolution kernel included in the convolution kernel combination of the current convolutional layer, judging whether the weight value at every convolution kernel index position corresponding to the current convolution kernel is 0, and if so, deleting the current convolution kernel;
A2: updating the structure description file and the weight file of the preset model according to the current preset model.
For example, suppose a convolution kernel of a convolutional layer has 5 convolution kernel index positions, and the weight values at these 5 positions are computed as x1, x2, x3, x4 and x5 respectively. If the pruning threshold of the convolutional layer is x0, and x1 < x2 < x3 < x0 < x4 < x5, then after step 403 the weight values at these 5 index positions are fixed as 0, 0, 0, x4 and x5.
In this way, in step A1, when this convolution kernel is the current convolution kernel, since the weight values at its corresponding convolution kernel index positions are not all 0, this convolution kernel is not deleted.
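The example above can be worked through numerically; the concrete values chosen for x0 to x5 below are hypothetical:

```python
# Hypothetical weights at the 5 index positions, chosen so x1 < x2 < x3 < x0 < x4 < x5.
x = [0.01, 0.02, 0.03, 0.40, 0.50]   # x1..x5
x0 = 0.10                            # pruning threshold of the layer

# Step 403: weights not exceeding the threshold are reset to 0 and fixed.
pruned = [0.0 if w <= x0 else w for w in x]
print(pruned)            # [0.0, 0.0, 0.0, 0.4, 0.5]

# Step A1: a kernel is deleted only if ALL its index-position weights are 0.
delete_kernel = all(w == 0.0 for w in pruned)
print(delete_kernel)     # False -> this kernel is kept
```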
In an embodiment of the present invention, before step 401, the method further comprises: dividing a preset data set into a training data set, a test data set and the verification data set, and using the training data set and the test data set to train a CNN model as the preset model.
In the embodiment of the present invention, the pruning threshold is set adaptively and the CNN model is compressed by pruning, which can accelerate the inference speed of the neural network model and reduce its high demand for computing power. The compressed model can achieve roughly a 10-fold reduction in running memory and computing power requirements, which facilitates the application of neural network models on small devices.
As can be seen from the above, pruning a model cuts off unimportant connections between neurons, so as to reduce model complexity and the number of parameters. Through pruning, the number of neuron connections in a model can generally be reduced by an order of magnitude. In the threshold-based pruning strategy described in the embodiment of the present invention, a threshold is set at run time, and the weights of connections between neurons that fall below the threshold are reset to 0. When all the connection weights between a neuron and its associated layers are 0, the neuron is removed. Therefore, pruning can on the one hand reduce the number of connections between neurons, and on the other hand reduce the number of neurons in the model. After pruning, the model parameters need to be fine-tuned on the verification data set to minimize the accuracy loss caused by pruning. In general, pruning and fine-tuning may be repeated several times to achieve nearly lossless compression of the neural network model.
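A minimal sketch of this strategy on a toy weight matrix (rows as neurons, columns as their connection weights; the values and threshold are invented for illustration):

```python
import numpy as np

# Each row holds one neuron's connection weights to an associated layer.
W = np.array([
    [0.30, -0.01, 0.02, 0.00],   # neuron 0: one strong connection survives
    [0.01,  0.02, -0.03, 0.04],  # neuron 1: every connection is below threshold
    [-0.50, 0.26, 0.00, 0.10],   # neuron 2: two strong connections survive
])
threshold = 0.25

W[np.abs(W) < threshold] = 0.0       # reset sub-threshold connection weights to 0
alive = ~np.all(W == 0.0, axis=1)    # a neuron with all-zero connections is removed
W_pruned = W[alive]

print(alive.tolist())    # [True, False, True] -> neuron 1 is removed
print(W_pruned.shape)    # (2, 4): fewer neurons, and fewer nonzero connections
```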
In this way, referring to FIG. 5, an embodiment of the present invention provides a model pruning method. Taking the convolution kernel in the model as the neuron unit, the method may specifically include the following steps:
Step 501: dividing a preset data set into a training data set, a test data set and a verification data set, and using the training data set and the test data set to train a CNN model.
It should be noted that, in order to obtain better compression performance later, a weight penalty term, such as L1 regularization, may be added to the objective function of the model, so that the convolution kernel parameters tend toward 0.
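For illustration, the penalty can be sketched as below; the regularization strength `lam` and the kernel values are hypothetical, and the data loss is a placeholder scalar:

```python
import numpy as np

def objective(data_loss, kernels, lam=1e-3):
    """Original objective plus an L1 penalty on all convolution kernel weights."""
    l1 = sum(np.abs(k).sum() for k in kernels)
    return data_loss + lam * l1

# One toy 2x2 kernel; its absolute weights sum to 1.0.
kernels = [np.array([[0.5, -0.25], [0.0, 0.25]])]
print(objective(1.0, kernels))  # 1.001
```

Minimizing this objective drives small kernel weights toward 0, so more weights later fall below the pruning threshold.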
Step 502: for the first convolutional layer among the at least two convolutional layers of the CNN model, determining the first convolutional layer as the current convolutional layer.
In detail, the convolution kernel combination of any convolutional layer includes at least one convolution kernel, and each convolution kernel corresponds to at least one convolution kernel index position; therefore, the convolution kernel combination of any convolutional layer corresponds to at least one convolution kernel index position.
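In other words, under formula (1) the kernel combination of one layer is a 4-dimensional tensor and every entry is one index position; a small sketch with made-up sizes:

```python
import numpy as np

# Formula (1): F_i lives in R^(K x C x R x S) -- kernel count, channels, height, width.
K, C, R, S = 4, 3, 5, 5
F_i = np.zeros((K, C, R, S))

# Every (k, c, r, s) entry is one convolution kernel index position; formula (2)
# simply reads out the weight value x = F_i[k, c, r, s].
print(F_i.size)          # 300 index positions = K * C * R * S
print(F_i[0, 0, 0, 0])   # 0.0, the weight value at the first index position
```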
Step 503: for the current convolutional layer of the CNN model, determining at least one target convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer, wherein the convolution kernel combination of the current convolutional layer satisfies the above formula (1); calculating, using the above formula (2), the weight value at each target convolution kernel index position; obtaining a cumulative distribution function according to the weight values at the target convolution kernel index positions; substituting the preset model compression rate as the dependent variable into the cumulative distribution function, and determining the resulting value as the pruning threshold of the current convolutional layer.
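A sketch of this threshold computation: the empirical cumulative distribution function of the layer's weight values is built and then read off at the preset compression rate, so that the chosen fraction of the smallest weights falls at or below the threshold. The weight values and compression rate here are invented for the example:

```python
import numpy as np

def pruning_threshold(weights, compression_rate):
    """Invert the empirical CDF of |weight| at the given compression rate."""
    w = np.sort(np.abs(np.asarray(weights)).ravel())
    idx = int(np.ceil(compression_rate * w.size)) - 1  # first index where CDF >= rate
    return w[max(idx, 0)]

layer_weights = [0.05, 0.8, 0.01, 0.3, 0.02, 0.6, 0.04, 0.9, 0.07, 0.5]
t = pruning_threshold(layer_weights, compression_rate=0.5)
print(t)  # 0.07 -- exactly half of the weight values are <= this threshold
```

Because the threshold is read from each layer's own weight distribution, the same compression rate yields a different, layer-appropriate threshold per layer.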
Step 504: performing, for each target convolution kernel index position: judging whether the weight value at the current target convolution kernel index position exceeds the pruning threshold of the current convolutional layer; if it does not, resetting the weight value at the current target convolution kernel index position to 0 and fixing it; otherwise, fixing the weight value at the current target convolution kernel index position.
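Step 504 amounts to masking; a sketch with invented values (the mask also lets later fine-tuning keep the pruned positions fixed at 0):

```python
import numpy as np

w = np.array([0.5, 2.0, 0.25, 1.0])   # weight values at the target index positions
threshold = 0.6                        # pruning threshold of the current layer

mask = np.abs(w) > threshold           # True where the weight exceeds the threshold
w = np.where(mask, w, 0.0)             # reset-and-fix sub-threshold weights to 0
print(w.tolist())                      # [0.0, 2.0, 0.0, 1.0]

# During later fine-tuning, gradients are masked so the zeros stay fixed at 0.
grad = np.ones_like(w)
w = w - 0.5 * grad * mask
print(w.tolist())                      # [0.0, 1.5, 0.0, 0.5]
```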
Step 505: judging whether the current convolutional layer is the last convolutional layer among the at least two convolutional layers; if so, executing step 507; otherwise, executing step 506.
Step 506: inputting the preset verification data set to the CNN model to fine-tune the CNN model, determining the next convolutional layer after the current convolutional layer as the new current convolutional layer, and executing step 503.
Step 507: performing, for each convolutional layer of the CNN model:
performing, for each convolution kernel included in the convolution kernel combination of the current convolutional layer: judging whether the weight values at all convolution kernel index positions corresponding to the current convolution kernel are 0; if so, deleting the current convolution kernel; otherwise, keeping the current convolution kernel.
Step 508: updating, according to the current CNN model, the structure description file and the weight value file of the CNN model.
In detail, step 508 is executed after step 507 has been completed for every convolutional layer.
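Steps 502-508 can be strung together in a compact sketch; the layer shapes, random weights and compression rate are all invented, and the per-layer fine-tuning of step 506 is elided:

```python
import numpy as np

def layer_threshold(F, rate):
    """Step 503: invert the layer's empirical weight CDF at the compression rate."""
    w = np.sort(np.abs(F).ravel())
    return w[max(int(np.ceil(rate * w.size)) - 1, 0)]

rng = np.random.default_rng(1)
layers = [rng.normal(0, 1, (4, 3, 3, 3)),    # two toy kernel tensors (K, C, R, S)
          rng.normal(0, 1, (8, 4, 3, 3))]
rate = 0.5

for F in layers:                      # steps 502/505/506: walk the layers in order
    t = layer_threshold(F, rate)      # step 503
    F[np.abs(F) <= t] = 0.0           # step 504: reset-and-fix sub-threshold weights
    # step 506 (fine-tune on the verification data set) would run here

# Steps 507/508: delete kernels whose weights are all 0, after which the model's
# structure description file and weight value file would be rewritten.
layers = [F[~np.all(F.reshape(F.shape[0], -1) == 0.0, axis=1)] for F in layers]
print([F.shape[0] for F in layers])   # remaining kernel count per layer
```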
In conclusion the embodiment of the present invention automatically analyzes the convolution kernel distribution after model pre-training, it is suitable to set Threshold value, and the connection no more than threshold value is cut off based on this, the accuracy for ensureing model is then finely adjusted to model.It is compressed Model is all greatly reduced to calculating power and running memory, and model more small-sized efficient is particularly suitable in meters such as embedded devices Calculate deployment model in resource constrained environment.For example, compressed CNN models, as more small-sized, efficient neural network mould Type can be competent at real-time reasoning task, such as the Text region on smart mobile phone, the pedestrian detection on intelligent driving vehicle Deng.
As shown in FIG. 6, an embodiment of the present invention provides a device for determining a pruning threshold, including:
a function generation unit 601, configured to determine, for a current convolutional layer of a preset model, at least one convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer, wherein the current convolutional layer is any convolutional layer of the preset model;
a processing unit 602, configured to obtain a cumulative distribution function according to the weight value at each convolution kernel index position;
a threshold value generation unit 603, configured to substitute a preset model compression rate as the dependent variable into the cumulative distribution function, and determine the resulting value as the pruning threshold of the current convolutional layer.
In an embodiment of the present invention, the convolution kernel combination satisfies the above formula (1);
the processing unit 602 is further configured to calculate, using the above formula (2), the weight value at each convolution kernel index position.
As shown in FIG. 7, an embodiment of the present invention provides a model pruning device, including:
a determination unit 701, configured to determine, for the first convolutional layer among at least two convolutional layers of a preset model, the first convolutional layer as the current convolutional layer, wherein the convolution kernel combination of any convolutional layer corresponds to at least one convolution kernel index position;
a pruning threshold determination unit 702, configured to determine, for the current convolutional layer of the preset model, at least one target convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer; obtain a cumulative distribution function according to the weight value at each target convolution kernel index position; substitute the preset model compression rate as the dependent variable into the cumulative distribution function, and determine the resulting value as the pruning threshold of the current convolutional layer;
a first pruning unit 703, configured to perform, for each target convolution kernel index position: judging whether the weight value at the current target convolution kernel index position exceeds the pruning threshold of the current convolutional layer; if it does not, resetting the weight value at the current target convolution kernel index position to 0 and fixing it; otherwise, fixing the weight value at the current target convolution kernel index position;
a judging unit 704, configured to judge, when it is determined that the first pruning unit 703 has completed execution, whether the current convolutional layer is the last convolutional layer among the at least two convolutional layers; if so, end; otherwise, trigger a training unit 705;
the training unit 705, configured to input a preset verification data set to the preset model so as to fine-tune the preset model, determine the next convolutional layer after the current convolutional layer as the new current convolutional layer, and trigger the pruning threshold determination unit 702.
In an embodiment of the present invention, referring to FIG. 8, the convolution kernel combination of any convolutional layer includes at least one convolution kernel, and each convolution kernel corresponds to at least one convolution kernel index position;
the model pruning device further includes: a second pruning unit 801 and an updating unit 802;
the judging unit 704 is further configured to, when it is judged that the current convolutional layer is the last convolutional layer among the at least two convolutional layers, trigger the second pruning unit 801 before performing the ending;
the second pruning unit 801 is configured to perform, for each convolutional layer of the preset model:
performing, for each convolution kernel included in the convolution kernel combination of the current convolutional layer: judging whether the weight values at all convolution kernel index positions corresponding to the current convolution kernel are 0, and if so, deleting the current convolution kernel;
the updating unit 802 is configured to update, when it is determined that the second pruning unit 801 has completed execution, the structure description file and the weight value file of the preset model according to the current preset model.
In an embodiment of the present invention, referring to FIG. 8, the training unit 705 is further configured to divide a preset data set into a training data set, a test data set and the verification data set, use the training data set and the test data set to train a CNN model as the preset model, and trigger the determination unit 701.
For contents such as the information exchange between the units in the above devices and their execution processes, since they are based on the same conception as the method embodiments of the present invention, refer to the description in the method embodiments for details, which are not repeated here.
In summary, the embodiments of the present invention have at least the following beneficial effects:
1. In the embodiment of the present invention, for the current convolutional layer of the preset model, at least one convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer is determined, wherein the current convolutional layer is any convolutional layer of the preset model; a cumulative distribution function is obtained according to the weight value at each convolution kernel index position; the preset model compression rate is substituted as the dependent variable into the cumulative distribution function, and the resulting value is determined as the pruning threshold of the current convolutional layer. The embodiment of the present invention thus determines a threshold suited to each layer based on the weight distribution of that layer of the model, which helps optimize the pruning effect.
2. In the embodiment of the present invention, the pruning threshold can be calculated by an adaptive threshold method. Since the calculation of the threshold depends on the weight distribution of each layer, the threshold-setting problem in threshold-based pruning methods is solved, which helps ensure the accuracy of the compressed model.
3. In the embodiment of the present invention, the pruning threshold is calculated by an adaptive threshold method, and the calculation depends on the weight distribution of each layer, which solves the threshold-setting problem in threshold-based pruning methods. At the same time, a layer-by-layer pruning and fine-tuning method is proposed, which ensures the accuracy of the compressed model. On this basis, the compressed model can achieve roughly a 10-fold reduction in running memory and computing power requirements.
It should be noted that, herein, relational terms such as first and second are merely used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disk.
Finally, it should be noted that the foregoing is merely a preferred embodiment of the present invention, intended only to illustrate the technical solution of the present invention and not to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A method for determining a pruning threshold, characterized by comprising:
for a current convolutional layer of a preset model, determining at least one convolution kernel index position corresponding to a convolution kernel combination of the current convolutional layer, wherein the current convolutional layer is any convolutional layer of the preset model;
obtaining a cumulative distribution function according to a weight value at each convolution kernel index position;
substituting a preset model compression rate as the dependent variable into the cumulative distribution function, and determining the resulting value as the pruning threshold of the current convolutional layer.
2. The method for determining a pruning threshold according to claim 1, characterized in that
the convolution kernel combination satisfies formula one;
the formula one comprises: Fi ∈ ℝ^(K×C×R×S)
wherein Fi is the convolution kernel combination, 1 ≤ i ≤ L, i is an integer, L is the number of convolutional layers of the preset model, ℝ is the real number field, K is the number of convolution kernels in the convolution kernel combination, C is the number of convolution kernel channels corresponding to the current convolutional layer, R is the convolution kernel height corresponding to the current convolutional layer, and S is the convolution kernel width corresponding to the current convolutional layer;
the method further comprises: calculating, using formula two, the weight value at each convolution kernel index position;
the formula two comprises: x = Fi(k, c, r, s)
wherein (k, c, r, s) is any convolution kernel index position among the at least one convolution kernel index position, 1 ≤ k ≤ K, 1 ≤ c ≤ C, 1 ≤ r ≤ R, 1 ≤ s ≤ S, Fi(k, c, r, s) is the connection weight value function with the convolution kernel index position (k, c, r, s) as the independent variable, and x is the weight value at the convolution kernel index position (k, c, r, s).
3. A model pruning method, characterized by comprising:
S1: for a first convolutional layer among at least two convolutional layers of a preset model, determining the first convolutional layer as a current convolutional layer, wherein a convolution kernel combination of any convolutional layer corresponds to at least one convolution kernel index position;
S2: for the current convolutional layer of the preset model, determining at least one target convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer; obtaining a cumulative distribution function according to a weight value at each target convolution kernel index position; substituting a preset model compression rate as the dependent variable into the cumulative distribution function, and determining the resulting value as the pruning threshold of the current convolutional layer;
S3: performing, for each target convolution kernel index position: judging whether the weight value at the current target convolution kernel index position exceeds the pruning threshold of the current convolutional layer; if it does not, resetting the weight value at the current target convolution kernel index position to 0 and fixing it; otherwise, fixing the weight value at the current target convolution kernel index position;
S4: judging whether the current convolutional layer is the last convolutional layer among the at least two convolutional layers; if so, ending the current process; otherwise, executing S5;
S5: inputting a preset verification data set to the preset model to fine-tune the preset model, determining the next convolutional layer after the current convolutional layer as the new current convolutional layer, and executing S2.
4. The model pruning method according to claim 3, characterized in that
the convolution kernel combination of any convolutional layer includes at least one convolution kernel, and each convolution kernel corresponds to at least one convolution kernel index position;
in S4, when it is judged that the current convolutional layer is the last convolutional layer among the at least two convolutional layers, before the ending of the current process, the method further comprises:
A1: performing, for each convolutional layer of the preset model:
performing, for each convolution kernel included in the convolution kernel combination of the current convolutional layer: judging whether the weight values at all convolution kernel index positions corresponding to the current convolution kernel are 0, and if so, deleting the current convolution kernel;
A2: updating, according to the current preset model, the structure description file and the weight value file of the preset model.
5. The model pruning method according to claim 3 or 4, characterized in that
before S1, the method further comprises: dividing a preset data set into a training data set, a test data set and the verification data set, and using the training data set and the test data set to train a convolutional neural network (CNN) model as the preset model.
6. A device for determining a pruning threshold, characterized by comprising:
a function generation unit, configured to determine, for a current convolutional layer of a preset model, at least one convolution kernel index position corresponding to a convolution kernel combination of the current convolutional layer, wherein the current convolutional layer is any convolutional layer of the preset model;
a processing unit, configured to obtain a cumulative distribution function according to a weight value at each convolution kernel index position;
a threshold value generation unit, configured to substitute a preset model compression rate as the dependent variable into the cumulative distribution function, and determine the resulting value as the pruning threshold of the current convolutional layer.
7. The device for determining a pruning threshold according to claim 6, characterized in that
the convolution kernel combination satisfies formula one;
the formula one comprises: Fi ∈ ℝ^(K×C×R×S)
wherein Fi is the convolution kernel combination, 1 ≤ i ≤ L, i is an integer, L is the number of convolutional layers of the preset model, ℝ is the real number field, K is the number of convolution kernels in the convolution kernel combination, C is the number of convolution kernel channels corresponding to the current convolutional layer, R is the convolution kernel height corresponding to the current convolutional layer, and S is the convolution kernel width corresponding to the current convolutional layer;
the processing unit is further configured to calculate, using formula two, the weight value at each convolution kernel index position;
the formula two comprises: x = Fi(k, c, r, s)
wherein (k, c, r, s) is any convolution kernel index position among the at least one convolution kernel index position, 1 ≤ k ≤ K, 1 ≤ c ≤ C, 1 ≤ r ≤ R, 1 ≤ s ≤ S, Fi(k, c, r, s) is the connection weight value function with the convolution kernel index position (k, c, r, s) as the independent variable, and x is the weight value at the convolution kernel index position (k, c, r, s).
8. A model pruning device, characterized by comprising:
a determination unit, configured to determine, for a first convolutional layer among at least two convolutional layers of a preset model, the first convolutional layer as a current convolutional layer, wherein a convolution kernel combination of any convolutional layer corresponds to at least one convolution kernel index position;
a pruning threshold determination unit, configured to determine, for the current convolutional layer of the preset model, at least one target convolution kernel index position corresponding to the convolution kernel combination of the current convolutional layer; obtain a cumulative distribution function according to a weight value at each target convolution kernel index position; substitute a preset model compression rate as the dependent variable into the cumulative distribution function, and determine the resulting value as the pruning threshold of the current convolutional layer;
a first pruning unit, configured to perform, for each target convolution kernel index position: judging whether the weight value at the current target convolution kernel index position exceeds the pruning threshold of the current convolutional layer; if it does not, resetting the weight value at the current target convolution kernel index position to 0 and fixing it; otherwise, fixing the weight value at the current target convolution kernel index position;
a judging unit, configured to judge, when it is determined that the first pruning unit has completed execution, whether the current convolutional layer is the last convolutional layer among the at least two convolutional layers; if so, end; otherwise, trigger a training unit;
the training unit, configured to input a preset verification data set to the preset model so as to fine-tune the preset model, determine the next convolutional layer after the current convolutional layer as the new current convolutional layer, and trigger the pruning threshold determination unit.
9. The model pruning device according to claim 8, characterized in that
the convolution kernel combination of any convolutional layer includes at least one convolution kernel, and each convolution kernel corresponds to at least one convolution kernel index position;
the device further comprises: a second pruning unit and an updating unit;
the judging unit is further configured to, when it is judged that the current convolutional layer is the last convolutional layer among the at least two convolutional layers, trigger the second pruning unit before performing the ending;
the second pruning unit is configured to perform, for each convolutional layer of the preset model:
performing, for each convolution kernel included in the convolution kernel combination of the current convolutional layer: judging whether the weight values at all convolution kernel index positions corresponding to the current convolution kernel are 0, and if so, deleting the current convolution kernel;
the updating unit is configured to update, when it is determined that the second pruning unit has completed execution, the structure description file and the weight value file of the preset model according to the current preset model.
10. The model pruning device according to claim 8 or 9, characterized in that
the training unit is further configured to divide a preset data set into a training data set, a test data set and the verification data set, use the training data set and the test data set to train a convolutional neural network (CNN) model as the preset model, and trigger the determination unit.
CN201810488059.8A 2018-05-21 2018-05-21 A kind of method and device of determining pruning threshold, model pruning method and device Pending CN108416187A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810488059.8A CN108416187A (en) 2018-05-21 2018-05-21 A kind of method and device of determining pruning threshold, model pruning method and device
PCT/CN2018/113895 WO2019223250A1 (en) 2018-05-21 2018-11-05 Pruning threshold determination method and device, as well as model pruning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810488059.8A CN108416187A (en) 2018-05-21 2018-05-21 A kind of method and device of determining pruning threshold, model pruning method and device

Publications (1)

Publication Number Publication Date
CN108416187A true CN108416187A (en) 2018-08-17

Family

ID=63140220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810488059.8A Pending CN108416187A (en) 2018-05-21 2018-05-21 A kind of method and device of determining pruning threshold, model pruning method and device

Country Status (2)

Country Link
CN (1) CN108416187A (en)
WO (1) WO2019223250A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034119A (en) * 2018-08-27 2018-12-18 苏州广目信息技术有限公司 A kind of method for detecting human face of the full convolutional neural networks based on optimization
CN109978137A (en) * 2019-03-20 2019-07-05 厦门美图之家科技有限公司 A kind of processing method of convolutional neural networks
CN109978142A (en) * 2019-03-29 2019-07-05 腾讯科技(深圳)有限公司 The compression method and device of neural network model
CN110119811A (en) * 2019-05-15 2019-08-13 电科瑞达(成都)科技有限公司 A kind of convolution kernel method of cutting out based on entropy significance criteria model
WO2019223250A1 (en) * 2018-05-21 2019-11-28 济南浪潮高新科技投资发展有限公司 Pruning threshold determination method and device, as well as model pruning method and device
CN111429415A (en) * 2020-03-18 2020-07-17 东华大学 Efficient model construction method for product surface defects based on network collaborative pruning
CN112132219A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 General deployment scheme of deep learning detection model based on mobile terminal
CN112508182A (en) * 2020-12-22 2021-03-16 北京百度网讯科技有限公司 Pruning method, device, equipment, program product and medium for machine learning model
CN113392953A (en) * 2020-03-12 2021-09-14 澜起科技股份有限公司 Method and apparatus for pruning convolutional layers in a neural network
CN113408724A (en) * 2021-06-17 2021-09-17 博众精工科技股份有限公司 Model compression method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794527B (en) * 2014-01-20 2018-03-27 富士通株式会社 Disaggregated model construction method and equipment based on convolutional neural networks
CN106326985A (en) * 2016-08-18 2017-01-11 北京旷视科技有限公司 Neural network training method, neural network training device, data processing method and data processing device
CN108416187A (en) * 2018-05-21 2018-08-17 济南浪潮高新科技投资发展有限公司 A kind of method and device of determining pruning threshold, model pruning method and device

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019223250A1 (en) * 2018-05-21 2019-11-28 济南浪潮高新科技投资发展有限公司 Pruning threshold determination method and device, as well as model pruning method and device
CN109034119A (en) * 2018-08-27 2018-12-18 苏州广目信息技术有限公司 A kind of method for detecting human face of the full convolutional neural networks based on optimization
CN109978137A (en) * 2019-03-20 2019-07-05 厦门美图之家科技有限公司 A kind of processing method of convolutional neural networks
CN109978137B (en) * 2019-03-20 2021-03-16 厦门美图之家科技有限公司 Processing method of convolutional neural network
CN109978142A (en) * 2019-03-29 2019-07-05 腾讯科技(深圳)有限公司 The compression method and device of neural network model
CN109978142B (en) * 2019-03-29 2022-11-29 腾讯科技(深圳)有限公司 Neural network model compression method and device
CN110119811B (en) * 2019-05-15 2021-07-27 电科瑞达(成都)科技有限公司 Convolution kernel cutting method based on entropy importance criterion model
CN110119811A (en) * 2019-05-15 2019-08-13 电科瑞达(成都)科技有限公司 A kind of convolution kernel method of cutting out based on entropy significance criteria model
CN113392953A (en) * 2020-03-12 2021-09-14 澜起科技股份有限公司 Method and apparatus for pruning convolutional layers in a neural network
CN111429415B (en) * 2020-03-18 2020-12-08 东华大学 Method for constructing efficient detection model of product surface defects based on network collaborative pruning
CN111429415A (en) * 2020-03-18 2020-07-17 东华大学 Efficient model construction method for product surface defects based on network collaborative pruning
CN112132219A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 General deployment scheme of deep learning detection model based on mobile terminal
CN112508182A (en) * 2020-12-22 2021-03-16 北京百度网讯科技有限公司 Pruning method, device, equipment, program product and medium for machine learning model
CN113408724A (en) * 2021-06-17 2021-09-17 博众精工科技股份有限公司 Model compression method and device

Also Published As

Publication number Publication date
WO2019223250A1 (en) 2019-11-28

Similar Documents

Publication Publication Date Title
CN108416187A (en) Method and device for determining a pruning threshold, and model pruning method and device
CN109543502A (en) Semantic segmentation method based on a deep multi-scale neural network
CN107688855A (en) Layered quantization method and apparatus for complex neural networks
WO2020224297A1 (en) Method and device for determining computer-executable integrated model
CN112000772B (en) Sentence-to-semantic matching method based on semantic feature cube and oriented to intelligent question and answer
CN113627389A (en) Target detection optimization method and device
US20200074296A1 (en) Learning to search deep network architectures
CN103853786A (en) Method and system for optimizing database parameters
CN110275928B (en) Iterative entity relation extraction method
CN108537328A (en) Method for visually constructing a neural network
CN106897744A (en) Method and system for adaptively setting deep belief network parameters
CN107395211A (en) Data processing method and device based on a convolutional neural network model
CN110168572A (en) Information processing method, information processing device, and computer-readable storage medium
CN113947320A (en) Power grid regulation and control method based on multi-mode reinforcement learning
CN113360670A (en) Knowledge graph completion method and system based on fact context
CN109344968A (en) Method and device for hyperparameter processing of a neural network
CN117829149B (en) Language model hybrid training method and device, electronic equipment and storage medium
CN114137967B (en) Driving behavior decision method based on multi-network joint learning
CN111832787B (en) Teacher style prediction model training method and computer storage medium
CN115170902B (en) Training method of image processing model
CN113807541B (en) Fairness repair method, system, equipment and storage medium for decision system
CN108921213A (en) Entity classification model training method and device
CN111602145A (en) Optimization method of convolutional neural network and related product
CN112633516B (en) Performance prediction and machine learning compiling optimization method and device
CN114969148A (en) System access amount prediction method, medium and equipment based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180817