CN110210622A - Model building method and device, electronic equipment and storage medium based on pruning - Google Patents
Model building method and device, electronic equipment and storage medium based on pruning
- Publication number
- CN110210622A CN110210622A CN201910498309.0A CN201910498309A CN110210622A CN 110210622 A CN110210622 A CN 110210622A CN 201910498309 A CN201910498309 A CN 201910498309A CN 110210622 A CN110210622 A CN 110210622A
- Authority
- CN
- China
- Prior art keywords
- layer
- learning model
- deep learning
- pruning
- feature matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
The disclosure relates to the field of computer technology and provides a pruning-based model building method and device, electronic equipment, and a storage medium. According to the number of channels in each layer of the pruned deep learning model, the model structure of a portable deep learning model is determined; according to the number of channels and the feature matrices in each layer of the pruned deep learning model, cross-layer addition is performed on the feature matrices of corresponding channels in each layer, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model; the portable deep learning model is then constructed according to its model structure and the feature matrices corresponding to the channels in each layer. The disclosure provides a method for constructing a portable deep learning model, which improves the practicability of the model. Because the model is constructed according to the per-layer channel counts after pruning, the amount of computation, the running time, and the resources occupied by the model are reduced, thereby improving the efficiency of model running.
Description
Technical field
This disclosure relates to the field of computer technology, and in particular to a pruning-based model building method and device, electronic equipment, and a storage medium.
Background
As a research hotspot in the field of computer technology, deep learning models are widely used in image recognition, object detection, image segmentation, natural language processing, and so on. Deep learning models are generally large, and the amount of computation during operation is often also large. To improve the running performance of a deep learning model, a pruning operation can be performed on it.
At present, when pruning a deep learning model, the important features in each layer can be retained and the insignificant features in each layer can be cut, according to the features in each layer. To ensure the output accuracy of the model, a zero-padding operation can be performed on the cut features, and the pruned deep learning model can then be run on the original platform to theoretically estimate its running speed.
However, because the pruned deep learning model involves parallel (skip) connections, it can only run on the original platform and cannot be rebuilt elsewhere, so the practicability of the model is poor and the use demands of users cannot be satisfied. Therefore, a method for constructing a deep learning model based on pruning is urgently needed.
Summary of the invention
To solve the problems in the prior art, embodiments of the present disclosure provide a pruning-based model building method and device, and a storage medium. The technical solution is as follows:
In one aspect, a pruning-based model building method is provided, the method comprising:
obtaining a pruned deep learning model, the pruned deep learning model being obtained by performing a pruning operation on a deep learning model;
determining the model structure of a portable deep learning model according to the number of channels in each layer of the pruned deep learning model;
performing cross-layer addition on the feature matrices of corresponding channels in each layer of the pruned deep learning model, according to the number of channels and the feature matrices in each layer of the pruned deep learning model, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model;
constructing the portable deep learning model according to the model structure of the portable deep learning model and the feature matrices corresponding to the channels in each layer of the portable deep learning model.
In another embodiment of the disclosure, determining the model structure of the portable deep learning model according to the number of channels in each layer of the pruned deep learning model comprises:
obtaining the number of output channels in each layer of the pruned deep learning model;
taking the number of output channels of one layer of the pruned deep learning model as the number of input channels of the next layer, to obtain the model structure of the portable deep learning model.
In another embodiment of the disclosure, performing cross-layer addition on the feature matrices of corresponding channels in each layer of the pruned deep learning model, according to the number of channels and the feature matrices in each layer of the pruned deep learning model, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model, comprises:
determining the layers of the pruned deep learning model that can be added across layers, according to the number of channels and the feature matrices in each layer of the pruned deep learning model;
performing cross-layer addition on the feature matrices of corresponding channels in the layers of the pruned deep learning model that can be added across layers, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model.
In another embodiment of the disclosure, determining the layers of the pruned deep learning model that can be added across layers, according to the number of channels and the feature matrices in each layer of the pruned deep learning model, comprises:
for any two layers of the pruned deep learning model, if the two layers have the same number of channels and the feature matrices corresponding to their channels have the same size, determining that the two layers can be added across layers.
In another embodiment of the disclosure, performing cross-layer addition on the feature matrices of corresponding channels in the layers of the pruned deep learning model that can be added across layers, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model, comprises:
for any two layers of the pruned deep learning model that can be added across layers, performing a zero-padding operation on the two layers to be added, according to the channel numbers and the feature matrix sizes after pruning;
based on the number of channels and the feature matrix corresponding to each channel after zero padding, adding the feature matrix corresponding to each channel of the front layer to the feature matrix of the corresponding channel of the rear layer, to obtain the feature matrix corresponding to each channel of the rear layer in the portable deep learning model.
In another aspect, a pruning-based model construction device is provided, the device comprising:
an obtaining module, configured to obtain a pruned deep learning model, the pruned deep learning model being obtained by performing a pruning operation on a deep learning model;
a determining module, configured to determine the model structure of a portable deep learning model according to the number of channels in each layer of the pruned deep learning model;
an addition module, configured to perform cross-layer addition on the feature matrices of corresponding channels in each layer of the pruned deep learning model, according to the number of channels and the feature matrices in each layer of the pruned deep learning model, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model;
a construction module, configured to construct the portable deep learning model according to the model structure of the portable deep learning model and the feature matrices corresponding to the channels in each layer of the portable deep learning model.
In another embodiment of the disclosure, the determining module is configured to obtain the number of output channels in each layer of the pruned deep learning model, and to take the number of output channels of one layer of the pruned deep learning model as the number of input channels of the next layer, to obtain the model structure of the portable deep learning model.
In another embodiment of the disclosure, the addition module is configured to determine the layers of the pruned deep learning model that can be added across layers, according to the number of channels and the feature matrices in each layer of the pruned deep learning model, and to perform cross-layer addition on the feature matrices of corresponding channels in those layers, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model.
In another embodiment of the disclosure, the addition module is configured to, for any two layers of the pruned deep learning model, determine that the two layers can be added across layers if they have the same number of channels and the feature matrices corresponding to their channels have the same size.
In another embodiment of the disclosure, the addition module is configured to, for any two layers of the pruned deep learning model that can be added across layers, perform a zero-padding operation on the two layers to be added according to the channel numbers and the feature matrix sizes after pruning, and, based on the number of channels and the feature matrix corresponding to each channel after zero padding, add the feature matrix corresponding to each channel of the front layer to the feature matrix of the corresponding channel of the rear layer, to obtain the feature matrix corresponding to each channel of the rear layer in the portable deep learning model.
In another aspect, electronic equipment is provided. The electronic equipment comprises a processor and a memory, the memory storing at least one instruction, at least one program segment, a code set, or an instruction set, which is loaded and executed by the processor to implement the pruning-based model building method.
In another aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, at least one program segment, a code set, or an instruction set, which is loaded and executed by a processor to implement the pruning-based model building method.
The technical solution provided by the embodiments of the present disclosure has the following beneficial effects:
According to the number of channels in each layer of the pruned deep learning model, the model structure of a portable deep learning model is determined; according to the number of channels and the feature matrices in each layer of the pruned deep learning model, cross-layer addition is performed on the feature matrices of corresponding channels in each layer, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model; the portable deep learning model is then constructed according to its model structure and the feature matrices corresponding to the channels in each layer. The present disclosure provides a method for constructing a portable deep learning model, which improves the practicability of the model. Because the model is constructed according to the per-layer channel counts after pruning, the amount of computation, the running time, and the resources occupied by the model are reduced, thereby improving the efficiency of model running.
Brief description of the drawings
To explain the technical solution in the embodiments of the present disclosure more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1(A) is an implementation environment involved in the pruning-based model building method provided by an embodiment of the present disclosure;
Fig. 1(B) is another implementation environment involved in the pruning-based model building method provided by an embodiment of the present disclosure;
Fig. 1(C) is another implementation environment involved in the pruning-based model building method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of the pruning-based model building method provided by an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of the model structure of the portable model provided by an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of the pruning-based model construction device provided by an embodiment of the present disclosure;
Fig. 5 is a structural block diagram of electronic equipment for pruning-based model construction provided by an exemplary embodiment of the present disclosure;
Fig. 6 shows electronic equipment for pruning-based model construction according to an exemplary embodiment.
Detailed description of embodiments
To make the purposes, technical solutions, and advantages of the present disclosure clearer, the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
Please refer to Fig. 1(A), which illustrates an implementation environment involved in the pruning-based model building method of an embodiment of the present disclosure. Referring to Fig. 1(A), the implementation environment includes a first platform 101.
The first platform 101 can be the operation platform of a single server, or the operation platform of a computer cluster composed of multiple servers. The equipment on the first platform 101 has computing capability: it can run a deep learning model, perform a pruning operation on the deep learning model to obtain a pruned deep learning model, then rebuild a portable deep learning model based on the pruned deep learning model, and run the portable deep learning model.
Please refer to Fig. 1(B), which illustrates another implementation environment involved in the pruning-based model building method of an embodiment of the present disclosure. Referring to Fig. 1(B), the implementation environment includes a first platform 101 and a second platform 102.
The first platform 101 can be the operation platform of a single server, or the operation platform of a computer cluster composed of multiple servers. The equipment on the first platform 101 has computing capability: it can run a deep learning model and perform a pruning operation on it to obtain a pruned deep learning model.
The second platform 102 can be the operation platform of a terminal device, for example a smart phone, a tablet computer, or a laptop; it can also be the operation platform of a single server, or of a computer cluster composed of multiple servers. The equipment on the second platform 102 likewise has computing capability: it can rebuild a portable deep learning model according to the pruned deep learning model on the first platform 101, and run the portable deep learning model.
Please refer to Fig. 1(C), which illustrates another implementation environment involved in the pruning-based model building method of an embodiment of the present disclosure. Referring to Fig. 1(C), the implementation environment includes a first platform 101, a second platform 102, and a third platform 103.
The first platform 101 can be the operation platform of a single server, or the operation platform of a computer cluster composed of multiple servers. The equipment on the first platform 101 has computing capability: it can run a deep learning model and perform a pruning operation on it to obtain a pruned deep learning model.
The third platform 103 can be the operation platform of a single server, or the operation platform of a computer cluster composed of multiple servers. The equipment on the third platform 103 likewise has computing capability: it can rebuild a portable deep learning model according to the pruned deep learning model on the first platform 101, and transplant the portable deep learning model onto the second platform 102.
The second platform 102 can be the operation platform of a terminal device, for example a smart phone, a tablet computer, or a laptop; it can also be the operation platform of a single server, or of a computer cluster composed of multiple servers. The second platform 102 can run the portable deep learning model transplanted by the third platform 103.
Based on the implementation environment shown in Fig. 1(A), Fig. 1(B), or Fig. 1(C), an embodiment of the present disclosure provides a pruning-based model building method. Referring to Fig. 2, the method flow provided by the embodiment of the present disclosure includes:
201. Obtain the pruned deep learning model.
The pruned deep learning model is obtained by performing a pruning operation on a deep learning model on the first platform.
Based on the pruned deep learning model on the first platform, the relevant parameters of the pruned deep learning model are obtained, including the number of layers of the model, the convolution kernels, the number of channels in each layer, the feature matrix corresponding to each channel, and so on.
202. Determine the model structure of the portable deep learning model according to the number of channels in each layer of the pruned deep learning model.
The portable deep learning model can have the same computational accuracy as the deep learning model. The portable deep learning model is a model reconstructed on another platform according to the pruned deep learning model; unlike the pruned deep learning model, the portable deep learning model can be transplanted onto, and run on, any platform.
When constructing a portable deep learning model based on the pruned deep learning model, the model structure of the portable deep learning model and the feature matrix corresponding to each channel in each layer of the portable deep learning model need to be determined. Therefore, before constructing the portable deep learning model, this step is used to determine the model structure of the portable deep learning model, and step 203 is used to determine the feature matrix corresponding to each channel in each layer of the portable deep learning model.
Specifically, when determining the model structure of the portable deep learning model according to the number of channels in each layer of the pruned deep learning model, the following method can be used:
2021. Obtain the number of output channels in each layer of the pruned deep learning model.
Each layer of the deep learning model and of the pruned deep learning model has input channels and output channels, and the numbers of input channels and output channels of each layer differ. For example, the first layer of the pruned deep learning model may have 16 output channels, the second layer 8 output channels, the third layer 4 output channels, the fourth layer 2 output channels, and so on.
2022. Take the number of output channels of one layer of the pruned deep learning model as the number of input channels of the next layer, to obtain the model structure of the portable deep learning model.
By taking the number of output channels of the preceding layer of the pruned deep learning model as the number of input channels of the following layer, the numbers of input channels and output channels of the model can be determined, and thus the model structure of the portable deep learning model is determined.
For example, suppose the deep learning model includes 4 layers: the first layer has 16 output channels, the second layer 8, the third layer 4, and the fourth layer 2. A pruning operation is performed on the deep learning model: the 16 output channels of the first layer are cut to 8, the 8 output channels of the second layer to 4, and the 4 output channels of the third layer to 2, while the 2 output channels of the fourth layer remain unchanged. Then, according to the 8 output channels of the first layer, 4 of the second layer, 2 of the third layer, and 2 of the fourth layer after pruning, it can be determined that in the portable deep learning model the number of input channels of the second layer is 8, of the third layer 4, and of the fourth layer 2. In this way, the model structure of the portable deep learning model is determined according to the numbers of input channels and output channels of each layer of the pruned deep learning model.
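For illustration only, steps 2021-2022 can be sketched in Python as follows; the function and variable names are illustrative assumptions, not part of the disclosure. Each layer's input-channel count is taken from the previous layer's output-channel count after pruning:

```python
def portable_structure(pruned_out_channels, model_in_channels):
    """Derive the portable model's per-layer (input, output) channel counts.

    pruned_out_channels: output-channel count of each pruned layer, in order.
    model_in_channels: channel count of the data fed to the first layer.
    """
    structure = []
    in_ch = model_in_channels
    for out_ch in pruned_out_channels:
        structure.append({"in_channels": in_ch, "out_channels": out_ch})
        in_ch = out_ch  # the next layer consumes this layer's output
    return structure

# The example above: per-layer outputs 16/8/4/2 are pruned to 8/4/2/2,
# so the second layer's input count becomes 8, the third's 4, the fourth's 2.
layers = portable_structure([8, 4, 2, 2], model_in_channels=3)
```

The sketch shows only the bookkeeping: the portable structure is fully determined by the pruned output-channel counts, without any per-weight information.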
203. Perform cross-layer addition on the feature matrices of corresponding channels in each layer of the pruned deep learning model, according to the number of channels and the feature matrices in each layer of the pruned deep learning model, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model.
Each channel in each layer of the deep learning model and of the pruned deep learning model corresponds to a feature matrix. The feature matrix is a matrix composed of multiple features and is used to reflect the relevant features of the input data. By performing cross-layer addition on the feature matrices of corresponding channels in each layer of the pruned deep learning model, according to the number of channels and the feature matrices in each layer, the feature matrix corresponding to each channel in each layer of the portable deep learning model can be obtained.
Specifically, when performing cross-layer addition on the feature matrices of corresponding channels in each layer of the pruned deep learning model, according to the number of channels and the feature matrices in each layer, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model, the following method can be used:
2031. Determine the layers of the pruned deep learning model that can be added across layers, according to the number of channels and the feature matrices in each layer of the pruned deep learning model.
For a deep learning model, parallel (skip) connections can be made: the feature matrices of corresponding channels in two layers that satisfy certain conditions are added, transferring the features of the front layer onto the rear layer, thereby improving the precision of the model output.
For any two layers of the pruned deep learning model, when determining whether the two layers can be added across layers, it can be judged whether the two layers have the same number of channels and whether the feature matrices corresponding to their channels have the same size. If the two layers have the same number of channels and the feature matrices corresponding to their channels have the same size, it is determined that the two layers can be added across layers.
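For illustration, the condition above can be sketched as follows (a hypothetical check, assuming each layer's output is stored as an array of shape channels x height x width; the names are illustrative, not part of the disclosure):

```python
import numpy as np

def can_add_across_layers(feat_a, feat_b):
    """Two layers qualify for cross-layer addition when they have the same
    channel count and each channel's feature matrix has the same size."""
    return feat_a.shape == feat_b.shape

# E.g. fifth and tenth layers: 8 remaining channels each, 8*8 feature matrices.
layer5 = np.zeros((8, 8, 8))
layer10 = np.zeros((8, 8, 8))
layer7 = np.zeros((4, 8, 8))  # different channel count: not addable
```

With this representation the whole condition collapses to a shape comparison, since the shape encodes both the channel count and the per-channel feature-matrix size.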
2032. Perform cross-layer addition on the feature matrices of corresponding channels in the layers of the pruned deep learning model that can be added across layers, to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model.
When performing this cross-layer addition to obtain the feature matrix corresponding to each channel in each layer of the portable deep learning model, the following method can be used:
20321. For any two layers of the pruned deep learning model that can be added across layers, perform a zero-padding operation on the two layers to be added, according to the channel numbers and the feature matrix sizes after pruning.
When the pruning operation is performed on the deep learning model, different layers have different channels cut. For example, of the 16 channels of the fifth layer, the 8 channels numbered 0-7 are cut, while of the 16 channels of the tenth layer, the 8 channels numbered 8-15 are cut. After the pruning operation, the fifth layer has 8 remaining channels and the tenth layer also has 8 remaining channels; if the feature matrix corresponding to each channel of the fifth layer is of size 8*8 and the feature matrix corresponding to each channel of the tenth layer is also of size 8*8, then the fifth layer and the tenth layer satisfy the cross-layer addition condition. However, the remaining channels of the fifth layer after pruning are the channels numbered 8-15, while the remaining channels of the tenth layer are the channels numbered 0-7; the remaining channels of the two layers differ and cannot be added directly. To make the pruned fifth layer and tenth layer addable, a zero-padding operation needs to be performed on the fifth layer and the tenth layer.
The detailed process of performing the zero-padding operation on the two layers to be added is as follows. Let the index of the remaining channels of the nth layer after pruning be index, used to characterize the channel numbers after pruning, and let the index of the remaining channels of the mth layer after pruning be index1. Before the pruning operation, the original number of channels of the nth layer is q and the feature matrix of each channel is c*c; before the pruning operation, the original number of channels of the mth layer is also q and the feature matrix of each channel is c*c. According to the original channel numbers and feature matrix sizes of the mth layer and the nth layer before pruning, two initial matrices n_i and m_i with q channels are constructed, all of whose elements are 0, expressed as n_i = q*c*c and m_i = q*c*c. Next, according to the index of the nth layer after pruning, the initial matrix n_i = q*c*c is assigned values, i.e., n_i[index] = the values of the nth layer after pruning; and according to the index of the mth layer after pruning, the initial matrix m_i = q*c*c is assigned values, i.e., m_i[index1] = the values of the mth layer after pruning.
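The zero-padding process above can be sketched as follows. This is an illustrative NumPy reading of the n_i[index] assignment; the function name and concrete shapes are assumptions, not part of the disclosure:

```python
import numpy as np

def zero_pad(pruned_feats, kept_index, q):
    """Scatter a pruned layer's surviving feature matrices back into a
    zero tensor with the original channel count q.

    pruned_feats: array of shape (len(kept_index), c, c).
    kept_index: original channel numbers of the surviving channels.
    """
    c = pruned_feats.shape[-1]
    padded = np.zeros((q, c, c), dtype=pruned_feats.dtype)  # n_i = q*c*c, all zeros
    padded[kept_index] = pruned_feats  # n_i[index] = values of the pruned layer
    return padded

# The example above: the fifth layer keeps channels 8-15 of q = 16 original
# channels, with 8*8 feature matrices.
feats5 = np.ones((8, 8, 8))
padded5 = zero_pad(feats5, list(range(8, 16)), q=16)
```

After padding, the cut channels hold all-zero feature matrices, so they contribute nothing when two layers are later added channel by channel.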
20322. Based on the number of channels and the feature matrix corresponding to each channel after zero padding, add the feature matrix corresponding to each channel of the front layer to the feature matrix of the corresponding channel of the rear layer, to obtain the feature matrix corresponding to each channel of the rear layer in the portable deep learning model.
Since after the zero-padding operation the two layers have identical channel counts and channels, and the feature matrices corresponding to the channels have identical size, the feature matrix corresponding to each channel of the front layer of the two layers to be added can be added to the feature matrix of the corresponding channel of the rear layer. By adding the feature matrix corresponding to each channel of the front layer to the feature matrix of the corresponding channel of the rear layer, the feature matrix corresponding to each channel of the rear layer in the portable deep learning model is obtained. For example, let the mth layer be the front layer and the nth layer the rear layer of the two layers to be added; by performing the zero-padding operation on the mth layer and the nth layer, the feature matrix corresponding to each channel of the mth layer can be added to the feature matrix of the corresponding channel of the nth layer, to obtain the feature matrix corresponding to each channel of the nth layer in the portable deep learning model.
Referring to Fig. 3, a pruning operation is performed on the deep learning model: the input layer has 8 channels cut, and the convolutional layer (1*1) has 8 channels cut. When performing cross-layer addition on the two pruned layers, the zero-padding operation can be performed on these two layers; then, based on the number of channels and the feature matrix corresponding to each channel after zero padding, the feature matrix corresponding to each channel of the input layer is added to the feature matrix of the corresponding channel of the convolutional layer, to obtain the feature matrix corresponding to each channel of the convolutional layer in the portable deep learning model.
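For illustration, the zero-pad-then-add procedure of steps 20321-20322, applied as in the Fig. 3 example, might look like this in NumPy. All names, shapes, and the choice of which channels survive are illustrative assumptions; the padding helper is inlined so the sketch stands alone:

```python
import numpy as np

def zero_pad(feats, kept_index, q):
    # Scatter surviving channels into a zero tensor of the original count q.
    c = feats.shape[-1]
    out = np.zeros((q, c, c), dtype=feats.dtype)
    out[kept_index] = feats
    return out

def cross_layer_add(front, front_idx, rear, rear_idx, q):
    # After zero padding, both layers have q channels of equal size, so the
    # feature matrix of each front-layer channel can be added to the
    # feature matrix of the corresponding rear-layer channel directly.
    return zero_pad(front, front_idx, q) + zero_pad(rear, rear_idx, q)

# One reading of Fig. 3: the input layer keeps channels 0-7 and the 1*1
# convolutional layer keeps channels 8-15, out of q = 16 original channels.
front = np.full((8, 4, 4), 2.0)
rear = np.full((8, 4, 4), 3.0)
merged = cross_layer_add(front, list(range(8)), rear, list(range(8, 16)), q=16)
```

Because the surviving channel sets are disjoint here, each channel of the merged result carries either the front layer's or the rear layer's feature matrix; where the sets overlap, the matrices would sum element-wise.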
204. Construct the portable deep learning model according to the model structure of the portable deep learning model and the feature matrices corresponding to the channels in each layer of the portable deep learning model.
For the implementation environment shown in Fig. 1(A), after the portable deep learning model has been constructed, it can be run directly on the first platform. For the implementation environment shown in Fig. 1(B), after the portable deep learning model has been constructed, it can be run directly on the first platform, or transplanted to the second platform to be run. For the implementation environment shown in Fig. 1(C), after the portable deep learning model has been constructed, it can be run directly on the second platform, or transplanted to the first platform or the third platform to be run.
By porting and running the constructed portable deep learning model in this way, the practicability of the model is improved and the usage needs of users on different platforms are met.
In addition, to satisfy parallel connections between different layers, the related art merely sets the features on pruned channels to 0 after the pruning operation; the channel counts and the per-channel feature matrices are not actually removed, so at runtime all channels and their feature matrices still participate in computation and the computational load is large. In contrast, the portable deep learning model constructed by the present disclosure has fewer channels and fewer feature matrices than a model pruned by the related art, and the pruned channels and their feature matrices do not participate in computation at runtime, which greatly reduces the computational load, shortens the running time of the model, reduces the resources occupied, and improves the efficiency of model running.
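To make the saving concrete, one can compare the weight count of a convolutional layer whose pruned channels are merely zeroed (related art, full size retained) with one whose pruned channels are actually removed (this disclosure). The 3×3 kernel and the 64→56 channel counts are assumed example figures, not values from the disclosure:

```python
def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution (bias ignored)."""
    return in_ch * out_ch * k * k

# Related art: pruned channels are only zeroed, so the layer keeps 64 in/out channels.
masked = conv_params(64, 64, 3)
# This disclosure: 8 input and 8 output channels are actually removed.
pruned = conv_params(56, 56, 3)
print(masked, pruned)  # 36864 28224
```

Every multiply-accumulate over a removed channel disappears from the runtime cost, rather than being computed against a zeroed weight.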
In the method provided by the embodiments of the present disclosure, the model structure of a portable deep learning model is determined according to the channel count on each layer of the pruned deep learning model; cross-layer addition is performed on the feature matrices of corresponding channels on each layer according to the channel counts and feature matrices on each layer of the pruned model, yielding the feature matrix of each channel on every layer of the portable model; and the portable deep learning model is then constructed from its model structure and those feature matrices. The present disclosure thus provides a method for constructing a portable deep learning model that improves the practicability of the model and, because the model is constructed according to the post-pruning channel count of each layer, reduces the computational load, the running time, and the resources occupied, thereby improving the efficiency of model running.
Referring to Fig. 4, an embodiment of the present disclosure provides a pruning-based model construction apparatus, which includes:
an obtaining module 401, configured to obtain a pruned deep learning model, the pruned deep learning model being obtained by performing a pruning operation on a deep learning model;
a determining module 402, configured to determine the model structure of a portable deep learning model according to the channel count on each layer of the pruned deep learning model;
a summation module 403, configured to perform cross-layer addition on the feature matrices of corresponding channels on each layer of the pruned deep learning model according to the channel counts and feature matrices on each layer of the pruned model, to obtain the feature matrix of each channel on every layer of the portable deep learning model;
a construction module 404, configured to construct the portable deep learning model according to the model structure of the portable deep learning model and the feature matrix of each channel on every layer of the portable deep learning model.
In another embodiment of the present disclosure, the determining module 402 is configured to obtain the output channel count of each layer of the pruned deep learning model, and to use the output channel count of one layer of the pruned model as the input channel count of the next layer, thereby obtaining the model structure of the portable deep learning model.
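The chaining of each layer's output channel count into the next layer's input channel count can be sketched as follows; the function name and the example channel counts are assumptions for illustration only:

```python
def derive_structure(output_channels):
    """Chain each layer's post-pruning output-channel count into the next
    layer's input-channel count, yielding (in, out) pairs that describe
    the portable model's structure."""
    structure = []
    prev = output_channels[0]
    for out in output_channels[1:]:
        structure.append((prev, out))
        prev = out
    return structure

# Assumed example: the pruned layers kept 24, 16, and 32 output channels.
print(derive_structure([24, 16, 32]))  # [(24, 16), (16, 32)]
```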
In another embodiment of the present disclosure, the summation module 403 is configured to determine, according to the channel counts and feature matrices on each layer of the pruned deep learning model, the layers of the pruned model that can be added across layers, and to perform cross-layer addition on the feature matrices of corresponding channels on those layers, to obtain the feature matrix of each channel on every layer of the portable deep learning model.
In another embodiment of the present disclosure, the summation module 403 is configured to determine, for any two layers of the pruned deep learning model, that the two layers can be added across layers if the two layers have the same channel count and the feature matrices of their corresponding channels are of the same size.
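The compatibility test just described can be written as a simple predicate; the dictionary keys `channels` and `feature_size` are illustrative assumptions rather than names from the disclosure:

```python
def can_cross_layer_add(layer_a, layer_b):
    """Two pruned layers qualify for cross-layer addition only when their
    channel counts match and their per-channel feature matrices have the
    same size."""
    return (layer_a["channels"] == layer_b["channels"]
            and layer_a["feature_size"] == layer_b["feature_size"])

print(can_cross_layer_add({"channels": 16, "feature_size": (8, 8)},
                          {"channels": 16, "feature_size": (8, 8)}))  # True
```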
In another embodiment of the present disclosure, the summation module 403 is configured, for any two layers of the pruned deep learning model that can be added across layers, to perform a zero-padding operation on the two layers to be added according to their post-pruning channel counts and feature-matrix sizes, and then, based on the padded channel counts and the feature matrix of each channel, to add the feature matrix of each channel of the front layer to the feature matrix of the corresponding channel of the rear layer, to obtain the feature matrix of each channel of the rear layer in the portable deep learning model.
In conclusion the device that the embodiment of the present disclosure provides, according to the channel on every layer of deep learning model after beta pruning
Quantity determines the model structure of transplantable deep learning model, and according to logical on every layer of deep learning model after beta pruning
Road quantity and eigenmatrix carry out cross-layer addition to the eigenmatrix of every layer of upper respective channel, obtain transplantable deep learning
The corresponding eigenmatrix in every layer of model upper different channels, and then according to the model structure of transplantable deep learning model and every layer
The corresponding eigenmatrix in upper difference channel, constructs transplantable deep learning model.Present disclose provides a kind of building portables
Deep learning model method, improve the practicability of model, and structure is carried out according to the number of channels on every layer after beta pruning
It builds, the calculation amount and model running time and occupied resource of model is reduced, to improve the efficiency of model running.
Fig. 5 shows a structural block diagram of an electronic device for pruning-based model construction provided by an exemplary embodiment of the present disclosure. The electronic device 500 is a terminal, which may be a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 500 includes a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 501 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor. The main processor, also called a CPU (Central Processing Unit), is a processor for handling data in the awake state; the coprocessor is a low-power processor for handling data in the standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 502 may include one or more computer-readable storage media, which may be non-transitory. The memory 502 may also include high-speed random access memory and nonvolatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 502 stores at least one instruction, which is executed by the processor 501 to implement the pruning-based model building method provided by the method embodiments of the present application.
In some embodiments, the terminal 500 optionally further includes a peripheral device interface 503 and at least one peripheral device. The processor 501, the memory 502, and the peripheral device interface 503 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 503 by a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio-frequency circuit 504, a touch display screen 505, a camera 506, an audio circuit 507, a positioning component 508, and a power supply 509.
The peripheral device interface 503 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 501 and the memory 502. In some embodiments, the processor 501, the memory 502, and the peripheral device interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral device interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio-frequency circuit 504 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio-frequency circuit 504 communicates with communication networks and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio-frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio-frequency circuit 504 can communicate with other terminals through at least one wireless communication protocol, including but not limited to metropolitan area networks, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio-frequency circuit 504 may also include circuitry related to NFC (Near Field Communication), which is not limited in this application.
The display screen 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to acquire touch signals on or above its surface. The touch signal may be input to the processor 501 as a control signal for processing. In this case, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 505, arranged on the front panel of the terminal 500; in other embodiments, there may be at least two display screens 505, arranged on different surfaces of the terminal 500 or in a folding design; in still other embodiments, the display screen 505 may be a flexible display screen, arranged on a curved or folding surface of the terminal 500. The display screen 505 may even be set to a non-rectangular irregular shape, that is, a shaped screen. The display screen 505 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background-blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert them into electrical signals that are input to the processor 501 for processing, or input to the radio-frequency circuit 504 to realize voice communication. For stereo collection or noise reduction, there may be multiple microphones arranged at different parts of the terminal 500. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 501 or the radio-frequency circuit 504 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 507 may also include a headphone jack.
The positioning component 508 is used to determine the current geographic position of the terminal 500 for navigation or LBS (Location Based Service). The positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 509 is used to power the components of the terminal 500. The power supply 509 may be an alternating-current supply, a direct-current supply, a disposable battery, or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 500 further includes one or more sensors 510, including but not limited to an acceleration sensor 511, a gyroscope sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515, and a proximity sensor 516.
The acceleration sensor 511 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 500. For example, the acceleration sensor 511 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 501 may, according to the gravitational-acceleration signal collected by the acceleration sensor 511, control the touch display screen 505 to display the user interface in landscape view or portrait view. The acceleration sensor 511 may also be used to collect motion data for games or the user.
The gyroscope sensor 512 can detect the body orientation and rotation angle of the terminal 500, and may cooperate with the acceleration sensor 511 to collect the user's 3D actions on the terminal 500. Based on the data collected by the gyroscope sensor 512, the processor 501 may implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization while shooting, game control, and inertial navigation.
The pressure sensor 513 may be arranged on the side frame of the terminal 500 and/or under the touch display screen 505. When the pressure sensor 513 is arranged on the side frame of the terminal 500, it can detect the user's grip signal on the terminal 500, and the processor 501 performs left- or right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 513. When the pressure sensor 513 is arranged under the touch display screen 505, the processor 501 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 505. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 514 is used to collect the user's fingerprint. The processor 501 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 itself identifies the user's identity according to the collected fingerprint. When the identity is recognized as trusted, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 514 may be arranged on the front, back, or side of the terminal 500. When a physical button or manufacturer logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or manufacturer logo.
The optical sensor 515 is used to collect ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch display screen 505 according to the ambient light intensity collected by the optical sensor 515: when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 505 is decreased. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 according to the ambient light intensity collected by the optical sensor 515.
The proximity sensor 516, also called a distance sensor, is generally arranged on the front panel of the terminal 500 and is used to collect the distance between the user and the front of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front of the terminal 500 is gradually decreasing, the processor 501 controls the touch display screen 505 to switch from the screen-on state to the screen-off state; when the proximity sensor 516 detects that the distance between the user and the front of the terminal 500 is gradually increasing, the processor 501 controls the touch display screen 505 to switch from the screen-off state to the screen-on state.
Those skilled in the art will understand that the structure shown in Fig. 5 does not constitute a limitation on the terminal 500, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
The terminal provided by the embodiments of the present disclosure determines the model structure of a portable deep learning model according to the channel count on each layer of the pruned deep learning model; performs cross-layer addition on the feature matrices of corresponding channels on each layer according to the channel counts and feature matrices on each layer of the pruned model, obtaining the feature matrix of each channel on every layer of the portable model; and then constructs the portable deep learning model from its model structure and those feature matrices. The present disclosure provides a method for constructing a portable deep learning model that improves the practicability of the model and, because the model is constructed according to the post-pruning channel count of each layer, reduces the computational load, the running time, and the resources occupied, thereby improving the efficiency of model running.
Fig. 6 shows an electronic device for pruning-based model construction according to an exemplary embodiment; the electronic device is a server. Referring to Fig. 6, the server 600 includes a processing component 622, which further includes one or more processors, and memory resources represented by a memory 632 for storing instructions executable by the processing component 622, such as application programs. The application programs stored in the memory 632 may include one or more modules, each corresponding to a set of instructions. The processing component 622 is configured to execute the instructions to perform the functions performed by the server in the above pruning-based model construction.
The server 600 may also include a power supply component 626 configured to perform power management of the server 600, a wired or wireless network interface 650 configured to connect the server 600 to a network, and an input/output (I/O) interface 658. The server 600 may operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The server provided by the embodiments of the present disclosure determines the model structure of a portable deep learning model according to the channel count on each layer of the pruned deep learning model; performs cross-layer addition on the feature matrices of corresponding channels on each layer according to the channel counts and feature matrices on each layer of the pruned model, obtaining the feature matrix of each channel on every layer of the portable model; and then constructs the portable deep learning model from its model structure and those feature matrices. The present disclosure provides a method for constructing a portable deep learning model that improves the practicability of the model and, because the model is constructed according to the post-pruning channel count of each layer, reduces the computational load, the running time, and the resources occupied, thereby improving the efficiency of model running.
An embodiment of the present disclosure provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or an instruction set is stored; the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the pruning-based model building method shown in Fig. 2.
Through the computer-readable storage medium provided by the embodiments of the present disclosure, the model structure of a portable deep learning model is determined according to the channel count on each layer of the pruned deep learning model; cross-layer addition is performed on the feature matrices of corresponding channels on each layer according to the channel counts and feature matrices on each layer of the pruned model, obtaining the feature matrix of each channel on every layer of the portable model; and the portable deep learning model is then constructed from its model structure and those feature matrices. The present disclosure provides a method for constructing a portable deep learning model that improves the practicability of the model and, because the model is constructed according to the post-pruning channel count of each layer, reduces the computational load, the running time, and the resources occupied, thereby improving the efficiency of model running.
It should be noted that when the pruning-based model construction apparatus provided by the above embodiments performs pruning-based model construction, the division into the above functional modules is given merely as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the pruning-based model construction apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the pruning-based model construction apparatus provided by the above embodiments belongs to the same concept as the embodiments of the pruning-based model building method; for its specific implementation process, refer to the method embodiments, which will not be repeated here.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing describes merely the preferred embodiments of the present disclosure and is not intended to limit the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.
Claims (10)
1. A pruning-based model building method, characterized in that the method comprises:
obtaining a pruned deep learning model, the pruned deep learning model being obtained by performing a pruning operation on a deep learning model;
determining a model structure of a portable deep learning model according to a channel count on each layer of the pruned deep learning model;
performing cross-layer addition on feature matrices of corresponding channels on each layer of the pruned deep learning model according to the channel counts and feature matrices on each layer of the pruned deep learning model, to obtain a feature matrix of each channel on every layer of the portable deep learning model;
constructing the portable deep learning model according to the model structure of the portable deep learning model and the feature matrix of each channel on every layer of the portable deep learning model.
2. The method according to claim 1, characterized in that the determining a model structure of a portable deep learning model according to a channel count on each layer of the pruned deep learning model comprises:
obtaining an output channel count of each layer of the pruned deep learning model;
using the output channel count of one layer of the pruned deep learning model as an input channel count of a next layer, to obtain the model structure of the portable deep learning model.
3. The method according to claim 1, characterized in that the performing cross-layer addition on the feature matrices of corresponding channels on each layer of the pruned deep learning model according to the channel counts and feature matrices on each layer of the pruned deep learning model, to obtain the feature matrix of each channel on every layer of the portable deep learning model, comprises:
determining, according to the channel counts and feature matrices on each layer of the pruned deep learning model, layers of the pruned deep learning model that can be added across layers;
performing cross-layer addition on the feature matrices of corresponding channels on the layers of the pruned deep learning model that can be added across layers, to obtain the feature matrix of each channel on every layer of the portable deep learning model.
4. The method according to claim 3, characterized in that the determining, according to the channel counts and feature matrices on each layer of the pruned deep learning model, layers of the pruned deep learning model that can be added across layers comprises:
for any two layers of the pruned deep learning model, if the two layers have the same channel count and the feature matrices of their corresponding channels are of the same size, determining that the two layers can be added across layers.
5. The method according to claim 3, characterized in that the performing cross-layer addition on the feature matrices of corresponding channels on the layers of the pruned deep learning model that can be added across layers, to obtain the feature matrix of each channel on every layer of the portable deep learning model, comprises:
for any two layers of the pruned deep learning model that can be added across layers, performing a zero-padding operation on the two layers to be added according to their post-pruning channel counts and feature-matrix sizes;
based on the padded channel counts and the feature matrix of each channel, adding the feature matrix of each channel of the front layer to the feature matrix of the corresponding channel of the rear layer, to obtain the feature matrix of each channel of the rear layer in the portable deep learning model.
6. A pruning-based model construction apparatus, characterized in that the apparatus comprises:
an obtaining module, configured to obtain a pruned deep learning model, the pruned deep learning model being obtained by performing a pruning operation on a deep learning model;
a determining module, configured to determine a model structure of a portable deep learning model according to a channel count on each layer of the pruned deep learning model;
a summation module, configured to perform cross-layer addition on the feature matrices of corresponding channels on each layer of the pruned deep learning model according to the channel counts and feature matrices on each layer of the pruned deep learning model, to obtain a feature matrix of each channel on every layer of the portable deep learning model;
a construction module, configured to construct the portable deep learning model according to the model structure of the portable deep learning model and the feature matrix of each channel on every layer of the portable deep learning model.
7. The apparatus according to claim 6, wherein the determining module is configured to: obtain the number of output channels on each layer of the pruned deep learning model; and use the number of output channels of one layer of the pruned deep learning model as the number of input channels of the next layer, to obtain the model structure of the portable deep learning model.
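The channel propagation rule of claim 7 (each layer's output channel count becomes the next layer's input channel count) can be sketched as follows. The initial input channel count of 3 and the `(in, out)` tuple representation are assumptions for illustration.

```python
def portable_model_structure(output_channels, in_channels=3):
    """Derive the portable model structure from the per-layer output
    channel counts kept after pruning: the output channel count of
    each layer becomes the input channel count of the next layer.

    `in_channels` is the raw network input channel count
    (assumed 3 here, e.g. for RGB images).
    """
    structure = []
    for out_channels in output_channels:
        structure.append((in_channels, out_channels))
        in_channels = out_channels  # this layer's output feeds the next
    return structure

print(portable_model_structure([16, 32, 10]))
# [(3, 16), (16, 32), (32, 10)]
```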
8. The apparatus according to claim 6, wherein the summation module is configured to: determine the layers in the pruned deep learning model that can be added across layers, according to the number of channels and the feature matrices on each layer of the pruned deep learning model; and perform cross-layer addition on the feature matrices of corresponding channels in the layers of the pruned deep learning model that can be added across layers, to obtain the feature matrix corresponding to each channel on every layer of the portable deep learning model.
9. An electronic device, wherein the electronic device comprises a processor and a memory, the memory storing at least one instruction, at least one program segment, code set, or instruction set, which is loaded and executed by the processor to implement the pruning-based model construction method according to any one of claims 1 to 5.
10. A computer-readable storage medium, wherein the storage medium stores at least one instruction, at least one program segment, code set, or instruction set, which is loaded and executed by a processor to implement the pruning-based model construction method according to any one of claims 1 to 5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910498309.0A CN110210622A (en) | 2019-06-10 | 2019-06-10 | Model building method, device, electronic equipment and storage medium based on beta pruning |
CN201910922063.5A CN110458289B (en) | 2019-06-10 | 2019-09-27 | Multimedia classification model construction method, multimedia classification method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110210622A true CN110210622A (en) | 2019-09-06 |
Family
ID=67791706
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910498309.0A Pending CN110210622A (en) | 2019-06-10 | 2019-06-10 | Model building method, device, electronic equipment and storage medium based on beta pruning |
CN201910922063.5A Active CN110458289B (en) | 2019-06-10 | 2019-09-27 | Multimedia classification model construction method, multimedia classification method and device |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110210622A (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10740676B2 (en) * | 2016-05-19 | 2020-08-11 | Nec Corporation | Passive pruning of filters in a convolutional neural network |
CN107895192B (en) * | 2017-12-06 | 2021-10-08 | 广州方硅信息技术有限公司 | Deep convolutional network compression method, storage medium and terminal |
CN108932548A (en) * | 2018-05-22 | 2018-12-04 | 中国科学技术大学苏州研究院 | A kind of degree of rarefication neural network acceleration system based on FPGA |
Also Published As
Publication number | Publication date |
---|---|
CN110458289A (en) | 2019-11-15 |
CN110458289B (en) | 2022-06-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110841285B (en) | Interface element display method and device, computer equipment and storage medium | |
CN109800877A (en) | Parameter regulation means, device and the equipment of neural network | |
CN108304265A (en) | EMS memory management process, device and storage medium | |
CN109977333A (en) | Webpage display process, device, computer equipment and storage medium | |
CN110045960A (en) | Instruction set processing method, device and storage medium based on chip | |
CN110276840A (en) | Control method, device, equipment and the storage medium of more virtual roles | |
CN109816042B (en) | Data classification model training method and device, electronic equipment and storage medium | |
CN108762881A (en) | Interface method for drafting, device, terminal and storage medium | |
CN109522146A (en) | The method, apparatus and storage medium of abnormality test are carried out to client | |
CN110032384A (en) | Method, apparatus, equipment and the storage medium of resource updates | |
CN109218751A (en) | The method, apparatus and system of recommendation of audio | |
CN109102811A (en) | Generation method, device and the storage medium of audio-frequency fingerprint | |
CN110535890A (en) | The method and apparatus that file uploads | |
CN109189290A (en) | Click on area recognition methods, device and computer readable storage medium | |
CN109783176A (en) | Switch the method and apparatus of the page | |
CN110166275A (en) | Information processing method, device and storage medium | |
CN109833624A (en) | The display methods and device for line information of marching on virtual map | |
CN109413190A (en) | File acquisition method, device, electronic equipment and storage medium | |
CN109299319A (en) | Display methods, device, terminal and the storage medium of audio-frequency information | |
CN108966026A (en) | The method and apparatus for making video file | |
CN110264292A (en) | Determine the method, apparatus and storage medium of effective period of time | |
CN110210622A (en) | Model building method, device, electronic equipment and storage medium based on beta pruning | |
CN112052153B (en) | Product version testing method and device | |
CN113762054A (en) | Image recognition method, device, equipment and readable storage medium | |
CN110069256A (en) | Draw method, apparatus, terminal and the storage medium of component |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190906 |