CN109634401A - Control method and electronic device - Google Patents
- Publication number: CN109634401A (application CN201811639238.3A)
- Authority: CN (China)
- Prior art keywords: layer, energy consumption, weight, network model, parameter
- Prior art date
- Legal status: Granted (status assumed by Google, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
This application provides a control method, comprising: obtaining each layer of a neural network model; calculating the energy consumption of each layer according to a first computation rule; selecting a first layer whose energy consumption meets a first condition, and trimming a first weight set in the first layer, the first weight set including at least one weight. The first condition is that the energy consumption of the first layer is higher than the energy consumption of the layer with the lowest energy consumption. In this scheme, part of the weights in the high-energy layers of the layered neural network model are trimmed, reducing the model's energy consumption; and because only part of the weights within a layer are trimmed, the computational accuracy of the neural network model is preserved.
Description
Technical field
This disclosure relates to the field of electronic devices, and more specifically to a control method and an electronic device.
Background art
From image and speech recognition to system-state detection, CNNs (Convolutional Neural Networks) have a wide range of applications.
However, on the AI (Artificial Intelligence) chips used in electronic devices, applications with large data volumes, such as video and audio, make the computation and storage of a CNN model consume considerable power. A method of controlling the power consumption of CNN models is therefore needed.
Summary of the invention
In view of this, the present disclosure provides a control method that solves the prior-art problem of high CNN model power consumption.
To achieve the above object, the disclosure provides the following technical solutions:
A control method, comprising:
obtaining each layer of a neural network model;
calculating the energy consumption of each layer according to a first computation rule;
selecting a first layer whose energy consumption meets a first condition, and trimming a first weight set in the first layer, the first weight set including at least one weight;
the first condition being that the energy consumption of the first layer is higher than the energy consumption of the layer with the lowest energy consumption.
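The claimed steps can be sketched as follows (an illustrative Python sketch, not part of the claims; representing each layer as a plain list of weights, using a caller-supplied energy rule, and trimming by a magnitude cutoff `threshold` are all assumptions of this sketch, not of the disclosure):

```python
def select_and_trim(layers, compute_energy, threshold=0.05):
    """Select a first layer whose energy exceeds that of the lowest-energy
    layer, and delete the weights in it whose magnitude is below `threshold`.

    layers: list of weight lists, one per layer.
    compute_energy: the "first computation rule", a callable returning a
                    layer's energy consumption.
    """
    energies = [compute_energy(w) for w in layers]
    lowest = min(energies)
    # First condition: energy consumption higher than the lowest-energy layer's.
    eligible = [i for i, e in enumerate(energies) if e > lowest]
    target = max(eligible, key=lambda i: energies[i])  # pick the highest
    trimmed = [w for w in layers[target] if abs(w) >= threshold]
    new_layers = list(layers)
    new_layers[target] = trimmed
    return target, new_layers
```

Only the selected layer is changed; the remaining layers, and the larger weights within the selected layer, are left intact, which is how the scheme preserves accuracy.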
Preferably, in the above method, calculating the energy consumption of each layer according to the first computation rule comprises:
calculating the compute energy consumption of any layer according to the number of calculations in that layer;
calculating the storage energy consumption of the layer according to the number of memory accesses in the layer;
obtaining the energy consumption of the layer from the compute energy consumption and the storage energy consumption.
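The rule above can be sketched numerically (an illustrative sketch only; the per-calculation and per-access energy values are placeholders, since the disclosure leaves them configurable):

```python
def layer_energy(num_ops, num_accesses, e_op=1.0, e_access=0.5):
    """Energy consumption of one layer: compute energy (cost per calculation
    times the number of calculations) plus storage energy (cost per memory
    access times the number of accesses). Costs are illustrative values."""
    compute_energy = num_ops * e_op
    storage_energy = num_accesses * e_access
    return compute_energy + storage_energy
```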
Preferably, in the above method, trimming the first weight set in the first layer comprises:
deleting the first weight set, whose weight values in the first layer are less than a first threshold.
Preferably, the above method further comprises, after deleting the first weight set whose weight values in the first layer are less than the first threshold:
processing preset input information with the trimmed first neural network model to obtain a first output result;
if analysis of the first output result shows that the output evaluation error and/or the output accuracy fails a specified condition, cancelling the deletion of a second weight set, the first weight set including at least the second weight set;
and modifying the weights in the second weight set from a first value to a second value, the first value being greater than the second value.
Preferably, in the above method, cancelling the deletion of the second weight set comprises:
selecting, from the first weight set, the second weight set that meets an essential condition, and cancelling its deletion;
maintaining the deletion of the other weights in the first weight set outside the second weight set.
Preferably, the above method further comprises, after modifying the weights in the second weight set from the first value to the second value:
obtaining a first weight of the second weight set in the first layer, and obtaining a first parameter corresponding to the first weight;
obtaining at least one second parameter in the first layer associated with the first parameter;
adjusting the weight values of the first parameter and the at least one second parameter based on their correlation, so that the output evaluation error and output accuracy of the neural network model based on the adjusted weight values meet a preset requirement.
Preferably, the above method further comprises, after modifying the weights in the second weight set from the first value to the second value:
obtaining all second weights of the second weight set in the trimmed first neural network model, and obtaining a third parameter corresponding to the second weights;
obtaining at least one fourth parameter in the trimmed first neural network model associated with the third parameter;
adjusting the weight values of the third parameter and the at least one fourth parameter based on their correlation, so that the output evaluation error and output accuracy of the neural network model based on the adjusted weight values meet a preset requirement.
Preferably, the above method further comprises, after trimming the first weight set in the first layer:
judging, according to a preset judgment rule, whether an untrimmed layer exists;
if an untrimmed layer exists, returning to and repeating the step of calculating the energy consumption of each layer according to the first computation rule.
Preferably, in the above method, judging whether an untrimmed layer exists comprises:
calculating, according to the first computation rule, the energy consumption of the trimmed first neural network model, this energy consumption being the sum of the energy consumption of each layer of the trimmed first neural network model;
if the energy consumption of the trimmed first neural network model is lower than an energy consumption threshold, no untrimmed layer exists;
if the energy consumption of the trimmed first neural network model is not lower than the energy consumption threshold, an untrimmed layer exists.
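The judgment rule above reduces to a single comparison (an illustrative sketch; the per-layer energies and the threshold value are assumed inputs):

```python
def has_untrimmed_layer(layer_energies, energy_threshold):
    """Preset judgment rule: if the total energy consumption of the trimmed
    model is not lower than the threshold, an untrimmed layer is deemed to
    exist and the trimming loop continues; otherwise trimming stops."""
    total = sum(layer_energies)
    return total >= energy_threshold
```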
An electronic device, comprising:
a housing; and
a processor configured to obtain each layer of a neural network model; calculate the energy consumption of each layer according to a first computation rule; and select a first layer whose energy consumption meets a first condition and trim a first weight set in the first layer, the first weight set including at least one weight; the first condition being that the energy consumption of the first layer is higher than the energy consumption of the layer with the lowest energy consumption.
As can be seen from the above technical solutions, compared with the prior art, the present disclosure provides a control method comprising: obtaining each layer of a neural network model; calculating the energy consumption of each layer according to a first computation rule; and selecting a first layer whose energy consumption meets a first condition and trimming a first weight set in the first layer, the first weight set including at least one weight; the first condition being that the energy consumption of the first layer is higher than that of the layer with the lowest energy consumption. In this scheme, part of the weights in the high-energy layers of the layered neural network model are trimmed, reducing the model's energy consumption; and because only part of the weights within a layer are trimmed, the computational accuracy of the neural network model is preserved.
Brief description of the drawings
In order to explain the embodiments of the present disclosure or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the disclosure; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the data-processing flow of a prior-art convolutional neural network model;
Fig. 2 is an architecture diagram of the neural network model to which the control method and electronic device provided by the present application apply;
Fig. 3 is an architecture diagram of sensor-aggregated environment scene perception;
Fig. 4 is a schematic diagram of the architecture provided by the present application in an application scenario;
Fig. 5 is a flowchart of embodiment 1 of a control method provided by the present application;
Fig. 6 is a flowchart of embodiment 2 of a control method provided by the present application;
Fig. 7 is a flowchart of embodiment 3 of a control method provided by the present application;
Fig. 8 is a flowchart of embodiment 4 of a control method provided by the present application;
Fig. 9 is a flowchart of embodiment 5 of a control method provided by the present application;
Fig. 10 is a flowchart of embodiment 6 of a control method provided by the present application;
Fig. 11 is a flowchart of embodiment 7 of a control method provided by the present application;
Fig. 12 is a flowchart of embodiment 8 of a control method provided by the present application;
Fig. 13 is an application-scenario flowchart of a control method provided by the present application;
Fig. 14 is a structural diagram of an embodiment of an electronic device provided by the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the disclosure. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the disclosure without creative effort fall within the scope of protection of the disclosure.
First, it should be noted that in the prior art, applications with large data volumes, such as audio and video, process the data in layers. Fig. 1 is a schematic diagram of the data-processing flow of a prior-art convolutional neural network model: the model processes a picture through five layers (L1-L5). The picture enters at layer L0 as the input at 512 x 512 pixels; L1 processes it at 256 x 256 pixels, L2 at 128 x 128, L3 at 64 x 64, and L4 at 32 x 32; after processing by L5, the result is output through layer L6. The processing between L0 and L4 is convolution, and the processing between L4 and L6 is fully connected. Reducing the number of layers of the neural network model would lower its precision. The scheme of the present application therefore does not reduce the number of layers; instead, it trims part of the weights in the layers with high energy consumption, reducing the energy consumption of the neural network model, and because only part of the weights within a layer are trimmed, the computational accuracy of the model is preserved.
Fig. 2 is a schematic architecture diagram of the neural network model to which the control method and electronic device provided by the present application apply, comprising: a signal layer 201, a mark layer 202, an environment perception layer 203, and a device application layer 204.
Specifically, the signals collected by the sensors are input directly to the signal layer 201; scene recognition algorithms (AI and the like) are applied in the mark layer 202 and the environment perception layer 203; and application programs call the device application layer 204 directly to obtain the current environment scene information.
In this architecture, the layers are mutually independent: each layer can be reconstructed without affecting the other layers, each layer's reconstruction is determined by the functional properties of that layer, and each layer's aggregation algorithm can be designed and implemented independently. The architecture can support many algorithms, such as machine learning, neural networks, and hidden Markov machines; this application does not limit the choice.
Specifically, the signal layer 201 connects directly to the sensors and controls data acquisition and storage. Its output is raw data carrying a time tag; the time tag indicates that the raw data has certain built-in temporal characteristics.
Specifically, the mark layer 202 mainly extracts characteristic marks from individual sensor data, converting time-series signals into a time-independent feature space, including but not limited to the frequency domain and the complex frequency domain.
Specifically, the environment perception layer 203 determines the environment scene using one mark or a combination of marks; its output is the probability that the user (device) is in some specific environment.
As an example, the current environment is 90% likely to be inside a moving car (off-highway). In the environment perception layer, the environment can be deeply customized, and AI (artificial intelligence) tools such as neural networks can learn from the marks; the whole environment perception carries time tags, i.e. scene switches lie on a time axis.
Specifically, the device application layer 204 is developed within the device operating system; it applies the results of the whole sensor-aggregated scene perception system and supplies them to the programs that need them.
Fig. 3 is an architecture diagram of the sensor-aggregated environment scene perception provided by the present application, including a signal layer 301, a mark layer 302, an environment perception layer 303, and a device application layer 304. The signal layer 301 connects to sensor 1, sensor 2, and so on, and transmits the data collected by the sensors through the signal interface, over multiple channels (channel 1, channel 2, ...), to the mark layer 302. The mark layer extracts marks from the received data, obtaining mark 1, mark 2, ..., and passes them through the mark interface to the environment perception layer 303. The environment perception layer 303 analyses the obtained marks; the resulting analysis characterizes the probability that the user (device) is in a certain environment, and is transferred through the environment interface to the device application layer, which outputs the result.
Fig. 4 is a schematic diagram of the architecture provided by the present application in an application scenario. The architecture includes a signal layer 401, a mark layer 402, an environment perception layer 403, and a device application layer 404.
The signal layer 401 obtains the following information from external devices such as sensors: ambient-light fundamental frequency, average brightness, brightness standard deviation, maximum temperature, minimum temperature, mean temperature, GPS/position data, date data, and so on. The mark layer 402 extracts marks from the information the signal layer transmits: from the ambient-light fundamental frequency, average brightness, and brightness standard deviation it extracts and analyses spectral and time-domain features, concluding that the environment is artificially lit, i.e. extracting the mark "artificial light source"; from the maximum, minimum, and mean temperatures it performs a temperature time-domain analysis, extracting the mark "24H (hour) temperature"; and from the GPS/position data, date data, and the like it concludes that the data is current local meteorological data, i.e. extracting the mark "current local meteorological data". The marks extracted by the mark layer are aggregated and analysed; combining an AI model with the aggregated data and marks yields the result "indoor probability 95%, outdoor probability 5%". Finally, the device application layer 404 outputs this result.
Fig. 5 is a flowchart of embodiment 1 of a control method provided by the present application. The method is applied to an electronic device in which a neural network model runs, and comprises the following steps:
Step S501: obtain each layer of the neural network model.
The neural network model uses an architecture of mutually independent layers: each layer can be reconstructed without affecting the other layers, each layer's reconstruction is determined by the functional properties of that layer, and each layer's aggregation algorithm can be designed and implemented independently.
The neural network model includes at least two layers.
Specifically, each layer of the neural network model is identified, so that each layer of the model is obtained.
Step S502: calculate the energy consumption of each layer according to a first computation rule.
In a specific implementation, each layer of the neural network model must perform processing operations on the input data, so each layer has an energy consumption.
A first computation rule is preset, and the energy consumption of each layer of the neural network model is calculated in turn, obtaining the energy consumption of each layer.
Step S503: select a first layer whose energy consumption meets a first condition, and trim a first weight set in the first layer.
The first weight set includes at least one weight.
The first condition is that the energy consumption of the first layer is higher than the energy consumption of the layer with the lowest energy consumption.
To reduce the energy consumption of the neural network model, at least one of its layers is trimmed. Specifically, at least one layer with higher/the highest energy consumption is selected for trimming; a layer's energy consumption counts as higher/highest when it exceeds that of the lowest-energy layer.
Specifically, the first weight set in the selected first layer is trimmed, so that the layer performs fewer processing operations during data processing, reducing the layer's energy consumption.
It should be noted that, in a specific implementation, the layer with the highest energy consumption can be trimmed first; after that trimming is complete, the energy consumption of every layer of the neural network model is calculated again, the now-highest-energy layer is trimmed, and so on.
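The trim-then-recompute loop just described can be sketched as follows (an illustrative sketch; the round count, the weight-magnitude cutoff, and the toy energy rule in the example are all assumptions):

```python
def iterative_trim(layers, compute_energy, threshold, rounds=3):
    """Trim the highest-energy layer, then recompute every layer's energy
    and trim the (new) highest-energy layer, repeating for `rounds` rounds.

    layers: list of weight lists, mutated in place and returned.
    compute_energy: the first computation rule, applied to one layer.
    threshold: weights below this magnitude are deleted.
    """
    for _ in range(rounds):
        energies = [compute_energy(w) for w in layers]
        top = max(range(len(layers)), key=lambda i: energies[i])
        layers[top] = [w for w in layers[top] if abs(w) >= threshold]
    return layers
```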
To sum up, this embodiment provides a control method comprising: obtaining each layer of a neural network model; calculating the energy consumption of each layer according to a first computation rule; and selecting a first layer whose energy consumption meets a first condition and trimming a first weight set in the first layer, the first weight set including at least one weight; the first condition being that the energy consumption of the first layer is higher than that of the layer with the lowest energy consumption. In this scheme, part of the weights in the high-energy layers of the layered neural network model are trimmed, reducing the model's energy consumption, and because only part of the weights within a layer are trimmed, the computational accuracy of the model is preserved.
Fig. 6 is a flowchart of embodiment 2 of a control method provided by the present application; the method comprises the following steps:
Step S601: obtain each layer of the neural network model.
Step S601 is the same as step S501 in embodiment 1 and is not repeated here.
Step S602: calculate the compute energy consumption of any layer according to the number of calculations in that layer.
This embodiment mainly explains the detailed process of calculating a layer's energy consumption.
It should be noted that when the neural network model processes data, each layer consumes energy both in calculating on the data and in storing it; in this embodiment, the energy consumption of the different processing modes is calculated separately.
The compute energy consumption of a layer is calculated from the number of calculations in that layer. Specifically, each calculation performed (addition, subtraction, multiplication, division, etc.) adds one unit of compute energy consumption. Each kind of calculation may correspond to a different energy value, e.g. one addition/subtraction counted as 1 joule (J) and one multiplication/division as 1.1 joules; or all kinds may correspond to the same value, e.g. 1 joule each. Of course, the energy values are not restricted to these; they can be configured according to the actual situation.
In general, a layer's compute energy consumption is related to the complexity of its computational algorithm: the greater the compute energy consumption, the higher the algorithm's complexity; the smaller the compute energy consumption, the lower the complexity.
Step S603: calculate the storage energy consumption of the layer according to the number of memory accesses in the layer.
The storage energy consumption of any layer is calculated from the number of memory accesses in that layer. Specifically, each access performed adds one unit of storage energy consumption. Storing and reading may correspond to different energy values, e.g. one store counted as 0.5 joules (J) and one read as 0.6 joules; or both may correspond to the same value, e.g. 0.5 joules each. Of course, the energy values are not restricted to these; they can be configured according to the actual situation.
In general, a layer's storage energy consumption is related to the number of nodes it must store: the greater the storage energy consumption, the more nodes must be stored; the smaller the storage energy consumption, the fewer nodes must be stored.
Step S604: obtain the energy consumption of the layer from the compute energy consumption and the storage energy consumption.
The layer's energy consumption is calculated from its compute energy consumption and storage energy consumption. Specifically, the two can simply be summed, the resulting value being the layer's energy consumption; or, when the total consumption is more likely dominated by one kind of energy, different weights can be assigned to the compute and storage energy consumption, e.g. a weight of 0.6 for compute energy and 0.4 for storage energy, giving the computation rule: layer energy consumption = compute energy consumption x 0.6 + storage energy consumption x 0.4.
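The weighted combination above can be written directly (an illustrative sketch; the 0.6/0.4 weights are the example values from this embodiment, not fixed by the disclosure):

```python
def weighted_layer_energy(compute_energy, storage_energy,
                          w_compute=0.6, w_storage=0.4):
    """Weighted combination from the embodiment:
    layer energy = compute energy x 0.6 + storage energy x 0.4."""
    return compute_energy * w_compute + storage_energy * w_storage
```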
Step S605: select a first layer whose energy consumption meets the first condition, and trim the first weight set in the first layer.
Step S605 is the same as step S503 in embodiment 1.
To sum up, in the control method provided by this embodiment, calculating the energy consumption of each layer according to the first computation rule comprises: calculating the layer's compute energy consumption from the number of calculations in the layer; calculating the layer's storage energy consumption from the number of memory accesses in the layer; and obtaining the layer's energy consumption from the compute energy consumption and the storage energy consumption. In this scheme, the compute energy consumption and storage energy consumption within a layer are calculated separately and then combined into the layer's energy consumption; the calculation process is simple.
Fig. 7 is a flowchart of embodiment 3 of a control method provided by the present application; the method comprises the following steps:
Step S701: obtain each layer of the neural network model.
Step S702: calculate the energy consumption of each layer according to the first computation rule.
Steps S701-S702 are the same as steps S501-S502 in embodiment 1.
Step S703: select a first layer whose energy consumption meets the first condition, and delete the first weight set, whose weight values in the first layer are less than a first threshold.
The weights in the first layer are trimmed; specifically, part of the weights are trimmed. The weights in the first layer whose values are less than the first threshold are selected as the first weight set and deleted.
The first threshold can be a preset value or an adaptively adjusted value. In general, to improve the data-processing accuracy of the neural network model, the first threshold is an adaptively adjusted value.
The adaptive adjustment proceeds as follows: a threshold is preset; the weights in the first layer whose values are less than the preset threshold are deleted; preset input information is processed by the pruned neural network model, producing an output result. If the output evaluation error and/or the output accuracy fails the specified condition, the deletion of the weights below the preset threshold is cancelled, and the threshold is reduced from the preset value to a first value; the weights in the first layer below the first value are then deleted and the preset input is processed again by the pruned model. If the output evaluation error and/or the output accuracy still fails the specified condition, the deletion of the weights below the first value is cancelled and the threshold is reduced from the first value to a second value; and so on down to an N-th value (N a natural number), deleting the weights in the first layer below the N-th value and processing the preset input with the pruned model, until the output evaluation error and output accuracy meet the specified condition, at which point the adjustment ends.
In general, the threshold is adjusted in the downward direction, by random amounts.
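The adaptive loop above can be sketched as follows (an illustrative sketch; the `evaluate` hook standing in for "process preset input and check the error/accuracy condition", the geometric decay in place of the random reduction, and the step limit are all assumptions):

```python
def adaptive_threshold_prune(weights, evaluate, initial_threshold,
                             decay=0.5, max_steps=10):
    """Adaptive first-threshold search: delete weights below the threshold;
    if the pruned model fails the error/accuracy condition (`evaluate`
    returns False), undo the deletion, lower the threshold, and retry."""
    threshold = initial_threshold
    for _ in range(max_steps):
        pruned = [w for w in weights if abs(w) >= threshold]
        if evaluate(pruned):
            return pruned, threshold
        threshold *= decay  # reduce the threshold toward zero and retry
    return weights, 0.0  # give up: restore (keep) all weights
```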
In a specific implementation, the weights in the first weight set to be trimmed can also be chosen in a targeted way, i.e. according to the mode of energy consumption. Specifically, if the compute energy consumption is large, the likely cause is a complex computational algorithm; the trimming then targets the algorithm, pruning the weights corresponding to the algorithm's parameters/steps so as to reduce the algorithm's steps and lower the compute energy consumption. If the storage energy consumption is large, the likely cause is a large number of nodes to store; the trimming then prunes the weights corresponding to nodes, reducing the nodes stored and lowering the storage energy consumption.
To sum up, in the control method provided by this embodiment, trimming the first weight set in the first layer specifically comprises: deleting the first weight set, whose weight values in the first layer are less than the first threshold. In this scheme, the smaller weights in the first layer are deleted, reducing the energy consumption of the layer's data processing while preserving the precision with which the first layer can process data.
As shown in Figure 8, which is a flowchart of Embodiment 4 of a control method provided by the present application, the method includes the following steps:
Step S801: obtain each layer of the neural network model;
Step S802: calculate the energy consumption of each layer according to a first computation rule;
Step S803: select a first layer whose energy consumption meets a first condition, and delete a first weight set whose weight values in the first layer are less than a first threshold;
Wherein, steps S801-S803 are consistent with steps S701-S703 in Embodiment 3.
Step S804: process preset input information according to the first neural network model after trimming, and obtain a first output result;
It should be noted that, in the middle layers of the neural network model, inter-layer weights may be associated with one another: the more weights a given weight is associated with, the stronger its correlation with the other weights, and the greater the impact on the accuracy/error of the neural network model when that weight is deleted.
Specifically, if one or even several weights in the first weight set are associated with other weights outside the first weight set, then after the first weight set is deleted, the calculations/storage related to those other weights will also be affected.
For this reason, the first neural network model obtained by the trimming needs to be tested after the first weight set is deleted.
Specifically, the preset input information is input into the first neural network model, so that the first neural network model processes the input information and obtains a first output result.
Step S805: if, according to analysis of the first output result, the output evaluation error and/or the output accuracy do not meet specified conditions, cancel the deletion of a second weight set;
Step S806: modify the weights in the second weight set from a first value to a second value.
Wherein, the first weight set includes at least the second weight set;
Wherein, the first value is greater than the second value.
Wherein, the output evaluation error and/or output accuracy obtained by analyzing the first output result failing to meet the specified conditions means that the output evaluation error is large and/or the output accuracy is low.
In that case, the deleted first weight set needs to be adjusted.
Specifically, some of its weights may be modified; that is, after the delete operation on the second weight set is cancelled, the value of each weight in the second weight set is reduced.
In specific implementations, whether to modify the value of a weight in the first weight set may be determined according to its correlation with other weights (weights within the layer or weights of other layers).
In specific implementations, if a certain weight in the first weight set is strongly correlated with other layers, its value may be reduced, which ensures that the other weights related to it are only slightly affected.
To sum up, the information processing method provided in this embodiment further includes: processing preset input information according to the first neural network model after trimming to obtain a first output result; if, according to analysis of the first output result, the output evaluation error and/or the output accuracy do not meet specified conditions, cancelling the deletion of a second weight set, the first weight set including at least the second weight set; and modifying the weights in the second weight set from a first value to a second value, the first value being greater than the second value. In this solution, when the output evaluation error and/or the output accuracy of the trimmed neural network model do not meet the specified conditions, the deletion of part of the weights in the deleted first weight set is revoked and the values of the restored weights are reduced, which keeps the impact on the other weights correlated with the weights in the second weight set small, and also guarantees the output evaluation error and/or output accuracy of the output result of the trimmed neural network model.
As shown in Figure 9, which is a flowchart of Embodiment 5 of a control method provided by the present application, the method includes the following steps:
Step S901: obtain each layer of the neural network model;
Step S902: calculate the energy consumption of each layer according to a first computation rule;
Step S903: select a first layer whose energy consumption meets a first condition, and delete a first weight set whose weight values in the first layer are less than a first threshold;
Wherein, steps S901-S903 are consistent with steps S701-S703 in Embodiment 3.
Step S904: process preset input information according to the first neural network model after trimming, and obtain a first output result;
Step S905: if, according to analysis of the first output result, the output evaluation error and/or the output accuracy do not meet specified conditions, select a second weight set that meets a necessary condition from the first weight set, cancel the deletion of the second weight set, and maintain the deletion of the other weights in the first weight set outside the second weight set.
Wherein, a weight meeting the necessary condition means that the weight is highly correlated with other weights, such that deleting it would have a large impact on the result of the neural network model.
The specific process of selecting the second weight set is then as follows:
analyze the correlation between any weight in the first weight set and the other weights; if the correlation meets the condition, cancel the deletion of that weight; otherwise, maintain its deletion.
Wherein, the correlation analysis of any weight with the other weights includes correlation analysis with the weights within the layer and correlation analysis with the weights of other layers.
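A possible sketch of this selection, assuming association counts within the layer and across layers are available for each deleted weight (all names here are hypothetical):

```python
def select_second_weight_set(first_set, in_layer_links, cross_layer_links, min_links):
    """Split the deleted (first) weight set: a weight whose association count,
    within the layer plus across layers, reaches min_links meets the
    necessary condition and has its deletion revoked; the rest stay deleted."""
    revoked, still_deleted = set(), set()
    for w in first_set:
        links = in_layer_links.get(w, 0) + cross_layer_links.get(w, 0)
        (revoked if links >= min_links else still_deleted).add(w)
    return revoked, still_deleted


revoked, kept = select_second_weight_set(
    {"w1", "w2", "w3"},
    in_layer_links={"w1": 3, "w2": 0},     # associations with weights in the layer
    cross_layer_links={"w1": 1, "w3": 1},  # associations with weights of other layers
    min_links=2)
print(sorted(revoked), sorted(kept))  # ['w1'] ['w2', 'w3']
```

Counting links is only one stand-in for "correlation meets the condition"; any correlation measure over in-layer and inter-layer weights would fit the same structure.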
To sum up, in the information processing method provided in this embodiment, cancelling the deletion of the second weight set includes: selecting a second weight set that meets a necessary condition from the first weight set, and cancelling the deletion of the second weight set; and maintaining the deletion of the other weights in the first weight set outside the second weight set. In this solution, the deletion of part of the weights in the deleted first weight set is revoked, which guarantees the output evaluation error and/or output accuracy of the output result of the trimmed neural network model during information processing.
As shown in Figure 10, which is a flowchart of Embodiment 6 of a control method provided by the present application, the method includes the following steps:
Step S1001: obtain each layer of the neural network model;
Step S1002: calculate the energy consumption of each layer according to a first computation rule;
Step S1003: select a first layer whose energy consumption meets a first condition, and delete a first weight set whose weight values in the first layer are less than a first threshold;
Step S1004: process preset input information according to the first neural network model after trimming, and obtain a first output result;
Step S1005: if, according to analysis of the first output result, the output evaluation error and/or the output accuracy do not meet specified conditions, cancel the deletion of a second weight set;
Step S1006: modify the weights in the second weight set from a first value to a second value;
Wherein, steps S1001-S1006 are consistent with steps S801-S806 in Embodiment 4.
Step S1007: obtain a first weight in the second weight set in the first layer, and obtain a first parameter corresponding to the first weight;
It should be noted that, after the values of the weights in the second weight set of this layer have been modified, in order to improve the overall accuracy of the neural network model and reduce the output evaluation error, the neural network model is also given a local fine adjustment by adjusting the associated weights within the layer.
Specifically, a weight in the second weight set of this layer is first obtained and defined as the first weight, and the first parameter corresponding to the first weight is obtained.
Wherein, the first parameter is usually a parameter of a linear or nonlinear polynomial; multiple parameters compute one weight, and the first parameter is the first parameter of that polynomial.
Step S1008: obtain at least one second parameter in the first layer associated with the first parameter;
Wherein, at least one weight in the first layer is associated with the first weight, and that at least one weight also corresponds to a parameter, i.e., a second parameter.
Then, correspondingly, the second parameter associated with the first parameter in this layer is obtained; there is at least one such second parameter.
Specifically, the associations between the parameters can be determined during training of the neural network model, that is, once the architecture of the neural network model is determined.
Step S1009: based on the correlation between the at least one second parameter and the first parameter, adjust the weight values of the first parameter and the at least one second parameter, so that the output evaluation error and output accuracy of the neural network model based on the adjusted weight values meet a preset requirement.
Specifically, based on the association between the second parameter and the first parameter, the weight values of the first parameter and the second parameter are adjusted separately; after each adjustment, the neural network model is tested to obtain an output result, and it is determined whether the output evaluation error and output accuracy of the output result meet the preset requirement.
In specific implementations, the accuracy of the preset requirement is higher than that of the specified conditions in step S1005, and/or the output evaluation error of the preset requirement is smaller than that of the specified conditions in step S1005.
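The adjust-and-retest loop of step S1009 can be sketched as a simple coordinate search over the associated parameters; the step size, loop bounds, and loss callback are illustrative assumptions, not the patent's method:

```python
def fine_tune(params, associated, loss, step=0.05, max_iters=200, target=1e-3):
    """Adjust each associated parameter by a small step, keeping a change only
    when it lowers the output error, and re-evaluate after every adjustment
    until the error meets the preset requirement."""
    params = dict(params)
    for _ in range(max_iters):
        if loss(params) <= target:            # preset requirement met
            break
        for name in associated:
            base = loss(params)
            for delta in (step, -step):       # try nudging up, then down
                trial = dict(params, **{name: params[name] + delta})
                if loss(trial) < base:        # keep only improving changes
                    params = trial
                    break
    return params


# toy output error with known optimum at a = 1.0, b = -0.5
err = lambda p: (p["a"] - 1.0) ** 2 + (p["b"] + 0.5) ** 2
tuned = fine_tune({"a": 0.0, "b": 0.0}, ["a", "b"], err)
print(err(tuned) <= 1e-3)  # True
```

In practice the loss would be the model's evaluation error on the preset input information, measured after each adjustment as the text describes.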
To sum up, the information processing method provided in this embodiment further includes: obtaining a first weight in the second weight set in the first layer, and obtaining a first parameter corresponding to the first weight; obtaining at least one second parameter in the first layer associated with the first parameter; and, based on the correlation between the at least one second parameter and the first parameter, adjusting the weight values of the first parameter and the at least one second parameter, so that the output evaluation error and output accuracy of the neural network model based on the adjusted weight values meet a preset requirement. In this solution, the parameters of the correlated weights in the first layer are also adjusted, further optimizing the neural network model and improving its output accuracy.
As shown in Figure 11, which is a flowchart of Embodiment 7 of a control method provided by the present application, the method includes the following steps:
Step S1101: obtain each layer of the neural network model;
Step S1102: calculate the energy consumption of each layer according to a first computation rule;
Step S1103: select a first layer whose energy consumption meets a first condition, and delete a first weight set whose weight values in the first layer are less than a first threshold;
Step S1104: process preset input information according to the first neural network model after trimming, and obtain a first output result;
Step S1105: if, according to analysis of the first output result, the output evaluation error and/or the output accuracy do not meet specified conditions, cancel the deletion of a second weight set;
Step S1106: modify the weights in the second weight set from a first value to a second value;
Wherein, steps S1101-S1106 are consistent with steps S801-S806 in Embodiment 4.
Step S1107: obtain a second weight in the second weight set of the whole trimmed first neural network model, and obtain a third parameter corresponding to the second weight;
It should be noted that, after the values of the weights in the second weight set of this layer have been modified, in order to improve the overall accuracy of the neural network model and reduce the output evaluation error, the neural network model may also be given a global fine adjustment by adjusting the associated weights across different layers and within layers.
Specifically, a weight in the second weight set is first obtained and defined as the second weight, and the third parameter corresponding to the second weight is obtained.
Wherein, the third parameter is usually a parameter of a linear or nonlinear polynomial; multiple parameters compute one weight, and the third parameter is the first parameter of that polynomial.
In specific implementations, this step may instead be performed once no un-trimmed layer remains in the neural network model, so as to apply the global adjustment to the whole neural network model.
Step S1108: obtain at least one fourth parameter associated with the third parameter in the trimmed first neural network model;
Wherein, at least one weight in the neural network model is associated with the second weight (it may or may not be in the same layer as the second weight), and that weight also corresponds to a parameter, i.e., a fourth parameter.
Then, correspondingly, the fourth parameter associated with the third parameter is obtained; there is at least one such fourth parameter.
Specifically, the associations between the parameters can be determined during training of the neural network model, that is, once the architecture of the neural network model is determined.
Step S1109: based on the correlation between the at least one fourth parameter and the third parameter, adjust the weight values of the third parameter and the at least one fourth parameter, so that the output evaluation error and output accuracy of the neural network model based on the adjusted weight values meet a preset requirement.
Specifically, based on the association between the third parameter and the fourth parameter, the weight values of the third parameter and the fourth parameter are adjusted separately; after each adjustment, the neural network model is tested to obtain an output result, and it is determined whether the output evaluation error and output accuracy of the output result meet the preset requirement.
In specific implementations, the accuracy of the preset requirement is higher than that of the specified conditions in step S1105, and/or the output evaluation error of the preset requirement is smaller than that of the specified conditions in step S1105.
To sum up, the information processing method provided in this embodiment further includes: obtaining a second weight in the second weight set of the whole trimmed first neural network model, and obtaining a third parameter corresponding to the second weight; obtaining at least one fourth parameter associated with the third parameter in the trimmed first neural network model; and, based on the correlation between the at least one fourth parameter and the third parameter, adjusting the weight values of the third parameter and the at least one fourth parameter, so that the output evaluation error and output accuracy of the neural network model based on the adjusted weight values meet a preset requirement. In this solution, the parameters of the correlated weights in the trimmed neural network model are also adjusted, further optimizing the neural network model and improving its output accuracy.
As shown in Figure 12, which is a flowchart of Embodiment 8 of a control method provided by the present application, the method includes the following steps:
Step S1201: obtain each layer of the neural network model;
Step S1202: calculate the energy consumption of each layer according to a first computation rule;
Step S1203: select a first layer whose energy consumption meets a first condition, and trim a first weight set in the first layer;
Wherein, steps S1201-S1203 are consistent with steps S501-S503 in Embodiment 1 and are not repeated in this embodiment.
Step S1204: judge, according to a preset judgment rule, whether an un-trimmed layer exists;
Wherein, after the weight set of one layer has been trimmed, it is also necessary to judge whether the neural network model still contains an un-trimmed layer.
Specifically, judging whether an un-trimmed layer exists in this step includes:
calculating, according to the first computation rule, the energy consumption of the first neural network model after trimming, the energy consumption being the sum of the energy consumption of each layer of the first neural network model after trimming;
if the energy consumption of the first neural network model after trimming is lower than an energy consumption threshold, no un-trimmed layer exists;
if the energy consumption of the first neural network model after trimming is not lower than the energy consumption threshold, an un-trimmed layer exists.
If an un-trimmed layer exists, the process loops back to step S1202 of calculating the energy consumption of each layer according to the first computation rule.
Step S1205: if no un-trimmed layer exists, end.
Specifically, if an un-trimmed layer exists, the energy consumption of each layer of the trimmed neural network model continues to be calculated, the layer whose energy consumption meets the first condition is determined, and step S1203 is executed in a loop; the process ends once no un-trimmed layer exists.
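The loop of steps S1202-S1205, which trims the highest-energy layer, recomputes the total energy, and stops once it falls below the energy consumption threshold, might be sketched as follows (the names and the halving stand-in for trimming are illustrative; a real trim must lower the layer's energy for the loop to terminate):

```python
def trim_until_under_budget(layer_energy, trim_layer, energy_threshold):
    """While the total energy consumption (sum over all layers) is not below
    the threshold, an un-trimmed layer exists, so trim the layer with the
    highest energy consumption and recompute."""
    energies = dict(layer_energy)
    order = []
    while sum(energies.values()) >= energy_threshold:
        layer = max(energies, key=energies.get)        # highest-energy layer first
        energies[layer] = trim_layer(energies[layer])  # trimming must lower energy
        order.append(layer)
    return energies, order


halve = lambda e: e / 2                      # stand-in for trimming one layer
final, order = trim_until_under_budget({"conv1": 10.0, "fc1": 6.0}, halve, 9.0)
print(order)  # ['conv1', 'fc1']: total energy goes 16 -> 11 -> 8, under the budget
```

The total-energy-versus-threshold test is exactly the preset judgment rule described above; choosing the highest-energy layer matches the ordering used in the Figure 13 flow.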
To sum up, the information processing method provided in this embodiment further includes: judging, according to a preset judgment rule, whether an un-trimmed layer exists; and, if an un-trimmed layer exists, looping back to the step of calculating the energy consumption of each layer according to the first computation rule. With this method, by judging whether any un-trimmed layer remains, it is guaranteed that the energy consumption of each layer of the finally obtained neural network model meets the condition, that the model as a whole meets the condition, and that the output accuracy and/or output evaluation error also meet the condition.
As shown in Figure 13, which is an application-scenario flowchart of a control method provided by the present application, the flow includes the following steps:
Step S1301: input a model;
Wherein, the input model is a neural network model containing multiple layers.
Step S1302: determine the layer trimming order by energy consumption;
In this step, the trimming order of the layers is determined by calculating the energy consumption of each layer in the neural network model.
Generally, the layer with the highest energy consumption is trimmed first.
Step S1303: remove the weights in the layer that are smaller than a certain threshold;
In this step, the weights smaller than a certain threshold in the layer selected for trimming are removed, so that the layer becomes sparse and the energy consumption of the neural network model is reduced.
Step S1304: restore certain weights to reduce the output error;
If removing the weights in the previous step causes the accuracy and deviation of the neural network model to develop significantly for the worse, certain weights (which may be a part of the removed weights) need to be restored to reduce the output error.
Specifically, the values of the restored weights may be reduced to further lower the output error.
Step S1305: locally fine-adjust the weights;
Specifically, based on the associations between the restored weights and the other weights within the layer, the values of the parameters corresponding to the restored weights and their associated weights are modified, so as to reduce the output error and improve the output accuracy.
Step S1306: judge whether any un-trimmed layer remains;
Wherein, the judgment is made by checking whether the overall energy consumption of the trimmed neural network model is below a threshold: if it is below the threshold, it is judged that no un-trimmed layer remains and the next step S1307 is executed; otherwise, an un-trimmed layer remains and the flow returns to step S1303 to start trimming the next layer.
Step S1307: globally fine-adjust the weights;
Specifically, based on the associations between the restored weights in each layer and the other weights of the neural network model as a whole (weights within the same layer or inter-layer weights of different layers), the values of the parameters corresponding to the restored weights and their associated weights are modified, so as to reduce the output error and improve the output accuracy.
Step S1308: judge whether the accuracy meets the requirement;
If so, execute the next step S1309; if not, return to step S1302 to start the next round of iteration.
Specifically, the output accuracy of the adjusted neural network model is judged. If it meets the accuracy requirement, the trimming and adjustment are judged successful: the neural network model has high accuracy and low energy consumption, and processing can end. Otherwise, even though the neural network model has achieved low energy consumption, its accuracy is too low for the model to fulfil the purpose for which it was built, so it still needs to be processed and the next round of iteration is started, until it meets the accuracy requirement.
Step S1309: output the model.
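Under stated assumptions, the Figure 13 flow can be skeletonized as below; every callable is a placeholder for the corresponding operation of steps S1302-S1308, and the toy demo only shows the control flow:

```python
class ToyModel:
    """Placeholder model: just a list of layer names."""
    def __init__(self, layers):
        self.layers = layers


def energy_aware_pruning(model, energy_threshold, accuracy_target, layer_energy,
                         remove_small_weights, restore_weights, local_finetune,
                         global_finetune, accuracy):
    """Skeleton of the Fig. 13 flow; iterate whole rounds until the model is
    both under the energy budget and accurate enough."""
    while True:
        # S1302: order the layers by energy consumption, highest first
        for layer in sorted(model.layers, key=layer_energy, reverse=True):
            remove_small_weights(layer)                       # S1303
            restore_weights(layer)                            # S1304
            local_finetune(layer)                             # S1305
            # S1306: stop trimming once total energy is under the threshold
            if sum(layer_energy(l) for l in model.layers) < energy_threshold:
                break
        global_finetune(model)                                # S1307
        if accuracy(model) >= accuracy_target:                # S1308
            return model                                      # S1309


energies = {"a": 10.0, "b": 5.0}
noop = lambda *_: None
pruned = energy_aware_pruning(ToyModel(["a", "b"]), 9.0, 0.9,
                              lambda l: energies[l],
                              lambda l: energies.__setitem__(l, energies[l] / 2),
                              noop, noop, noop, lambda m: 1.0)
print(energies)  # {'a': 5.0, 'b': 2.5}: trimming stopped once total energy < 9
```

The outer `while True` corresponds to the next-round iteration taken when the accuracy check of step S1308 fails.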
Corresponding to the control method embodiments provided by the present application, the present application also provides embodiments of an electronic device applying the control method.
As shown in Figure 14, which is a structural schematic diagram of an electronic device embodiment provided by the present application, the electronic device carries a neural network model and includes the following structure: a body 1401 and a processor 1402;
wherein, the processor is arranged in the body;
wherein, the processor is configured to: obtain each layer of the neural network model; calculate the energy consumption of each layer according to a first computation rule; and select a first layer whose energy consumption meets a first condition and trim a first weight set in the first layer, the first weight set including at least one weight; the first condition being: the energy consumption of the first layer is higher than the energy consumption of the layer with the lowest energy consumption.
Preferably, the processor calculating the energy consumption of each layer according to the first computation rule includes:
calculating the computing energy consumption of any layer according to the number of calculations in that layer;
calculating the storage energy consumption of the layer according to the number of memory accesses in the layer;
obtaining the energy consumption of the layer from the computing energy consumption and the storage energy consumption.
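A toy version of the first computation rule described above, with per-layer energy as computing energy plus storage energy; the per-unit costs are illustrative assumptions, as the patent gives no concrete values:

```python
def layer_energy(num_calculations, num_memory_accesses,
                 energy_per_calculation=1.0, energy_per_access=6.0):
    """First computation rule as described: a layer's energy consumption is
    its computing energy (from the number of calculations) plus its storage
    energy (from the number of memory accesses)."""
    computing_energy = num_calculations * energy_per_calculation
    storage_energy = num_memory_accesses * energy_per_access
    return computing_energy + storage_energy


# a storage-heavy layer can out-consume a compute-heavy one
layers = {"conv1": layer_energy(1000, 200),   # 1000 + 1200 = 2200
          "fc1": layer_energy(300, 500)}      # 300 + 3000 = 3300
print(max(layers, key=layers.get))  # fc1
```

The example illustrates why the rule separates the two terms: with memory accesses costing more per unit than calculations, the layer consuming the most energy need not be the one doing the most arithmetic.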
Preferably, the processor trimming the first weight set in the first layer includes:
deleting a first weight set whose weight values in the first layer are less than a first threshold.
Preferably, after deleting the first weight set whose weight values in the first layer are less than the first threshold, the processor is further configured to:
process preset input information according to the first neural network model after trimming, and obtain a first output result;
if, according to analysis of the first output result, the output evaluation error and/or the output accuracy do not meet specified conditions, cancel the deletion of a second weight set, the first weight set including at least the second weight set;
modify the weights in the second weight set from a first value to a second value, the first value being greater than the second value.
Preferably, the processor cancelling the deletion of the second weight set includes:
selecting a second weight set that meets a necessary condition from the first weight set, and cancelling the deletion of the second weight set;
maintaining the deletion of the other weights in the first weight set outside the second weight set.
Preferably, after modifying the weights in the second weight set from the first value to the second value, the processor is further configured to:
obtain a first weight in the second weight set in the first layer, and obtain a first parameter corresponding to the first weight;
obtain at least one second parameter in the first layer associated with the first parameter;
based on the correlation between the at least one second parameter and the first parameter, adjust the weight values of the first parameter and the at least one second parameter, so that the output evaluation error and output accuracy of the neural network model based on the adjusted weight values meet a preset requirement.
Preferably, after modifying the weights in the second weight set from the first value to the second value, the processor is further configured to:
obtain a second weight in the second weight set of the whole trimmed first neural network model, and obtain a third parameter corresponding to the second weight;
obtain at least one fourth parameter associated with the third parameter in the trimmed first neural network model;
based on the correlation between the at least one fourth parameter and the third parameter, adjust the weight values of the third parameter and the at least one fourth parameter, so that the output evaluation error and output accuracy of the neural network model based on the adjusted weight values meet a preset requirement.
Preferably, after trimming the first weight set in the first layer, the processor is further configured to:
judge, according to a preset judgment rule, whether an un-trimmed layer exists;
if an un-trimmed layer exists, loop back to the step of calculating the energy consumption of each layer according to the first computation rule.
Preferably, the processor judging whether an un-trimmed layer exists includes:
calculating, according to the first computation rule, the energy consumption of the first neural network model after trimming, the energy consumption being the sum of the energy consumption of each layer of the first neural network model after trimming;
if the energy consumption of the first neural network model after trimming is lower than an energy consumption threshold, no un-trimmed layer exists;
if the energy consumption of the first neural network model after trimming is not lower than the energy consumption threshold, an un-trimmed layer exists.
To sum up, this embodiment provides an electronic device that, exploiting the layered structure of the neural network model, trims a fraction of the weights in the layers with high energy consumption, thereby reducing the energy consumption of the neural network model; and since only a fraction of the weights within a layer is trimmed, the computational accuracy of the neural network model is guaranteed.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may refer to one another. Since the device provided by the embodiments corresponds to the method provided by the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
The above description of the provided embodiments enables those skilled in the art to implement or use the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features provided herein.
Claims (10)
1. A control method, comprising:
obtaining each layer of a neural network model;
calculating the energy consumption of each layer according to a first computation rule;
selecting a first layer whose energy consumption meets a first condition, and trimming a first weight set in the first layer, the first weight set including at least one weight;
the first condition being:
the energy consumption of the first layer is higher than the energy consumption of the layer with the lowest energy consumption.
2. The method according to claim 1, wherein calculating the energy consumption of each layer according to the first computation rule comprises:
calculating the computing energy consumption of any layer according to the number of calculations in that layer;
calculating the storage energy consumption of the layer according to the number of memory accesses in the layer;
obtaining the energy consumption of the layer from the computing energy consumption and the storage energy consumption.
3. The method according to claim 1, wherein trimming the first weight set in the first layer comprises:
deleting a first weight set whose weight values in the first layer are less than a first threshold.
4. The method according to claim 3, further comprising, after deleting the first weight set whose weight values in the first layer are less than the first threshold:
processing preset input information according to the first neural network model after trimming, and obtaining a first output result;
if, according to analysis of the first output result, the output evaluation error and/or the output accuracy do not meet specified conditions, cancelling the deletion of a second weight set, the first weight set including at least the second weight set;
modifying the weights in the second weight set from a first value to a second value, the first value being greater than the second value.
5. The method according to claim 4, wherein cancelling the deletion of the second weight set comprises:
selecting a second weight set that meets a necessary condition from the first weight set, and cancelling the deletion of the second weight set;
maintaining the deletion of the other weights in the first weight set outside the second weight set.
6. according to the method described in claim 4, described be revised as from the first numerical value weight in the second weight set
After second value, further includes:
The first weight in the first layer in the second weight set is obtained, and obtains the first ginseng corresponding with first weight
Number;
Obtain at least one second parameter in the first layer with first parameter association;
Based on the correlation of described at least one second parameter and first parameter, adjust first parameter and it is described at least
The weighted value of one the second parameter, so that neural network model output assessment errors and output essence based on weighted value after adjustment
Exactness meets preset requirement.
7. The method according to claim 4, after modifying the weight in the second weight set from the first value to the second value, further comprising:
obtaining a second weight of all the second weight sets in the trimmed first neural network model, and obtaining a third parameter corresponding to the second weight;
obtaining at least one fourth parameter in the trimmed first neural network model associated with the third parameter;
adjusting the weight values of the third parameter and the at least one fourth parameter based on the correlation between the at least one fourth parameter and the third parameter, so that the output evaluation error and the output accuracy of the neural network model based on the adjusted weight values satisfy a preset requirement.
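Claims 6 and 7 adjust the parameters correlated with a restored weight until the evaluation error and accuracy meet a preset requirement. One common concrete realization of such an adjustment (our assumption; the patent does not prescribe this method) is masked gradient fine-tuning, where only surviving weights are updated and trimmed positions stay fixed:

```python
# Masked fine-tuning sketch for a single linear layer: minimize
# ||x @ w - y||^2 by gradient descent, updating only unmasked (kept)
# weights so trimmed positions remain zero. Illustrative only.
import numpy as np

def finetune(weights, mask, x, y_target, lr=0.1, steps=200):
    w = weights.astype(float).copy()
    for _ in range(steps):
        grad = 2.0 * x.T @ (x @ w - y_target)  # gradient of the squared error
        w -= lr * grad * mask                  # trimmed entries never move
    return w

w0 = np.zeros(3)
mask = np.array([1.0, 1.0, 0.0])               # third weight was trimmed
x = np.eye(3)
y = np.array([1.0, 2.0, 5.0])
w = finetune(w0, mask, x, y)
# w converges toward [1.0, 2.0, 0.0]: kept weights fit the target,
# while the trimmed weight stays at zero
```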
8. The method according to claim 1, after trimming the first weight set in the first layer, further comprising:
determining, according to a preset judgment rule, whether an untrimmed layer exists;
when an untrimmed layer exists, returning to and cyclically executing the step of calculating the energy consumption of each layer according to the first computation rule.
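The loop of claims 1, 8, and 9 can be sketched end to end. In this toy sketch the per-layer energy model (a nonzero-weight count) and the threshold schedule are our assumptions; the patent only names a "first computation rule" and does not specify either:

```python
# Toy energy-driven trimming loop: compute each layer's energy, trim
# layers consuming more than the minimum-energy layer (the first
# condition), then re-check the total model energy (claim 9's test)
# and repeat while it stays at or above the budget.
def layer_energy(layer):
    return sum(1 for w in layer if w != 0.0)   # assumed energy model

def prune_loop(layers, threshold, energy_budget, max_rounds=10):
    for _ in range(max_rounds):                # guard against a stuck loop
        energies = [layer_energy(l) for l in layers]
        if sum(energies) < energy_budget:      # claim 9: no untrimmed layer left
            break
        min_e = min(energies)
        for i, e in enumerate(energies):
            if e > min_e:                      # claim 1's first condition
                layers[i] = [0.0 if abs(w) < threshold else w
                             for w in layers[i]]
        threshold *= 2                         # assumed schedule so rounds progress
    return layers

print(prune_loop([[0.9, 0.05, 0.4], [0.2]], threshold=0.1, energy_budget=3))
# [[0.9, 0.0, 0.0], [0.2]]
```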
9. The method according to claim 8, wherein determining whether an untrimmed layer exists comprises:
calculating, according to the first computation rule, the energy consumption of the trimmed first neural network model, the energy consumption being the sum of the energy consumptions of all the layers of the trimmed first neural network model;
when the energy consumption of the trimmed first neural network model is lower than an energy consumption threshold, determining that no untrimmed layer exists;
when the energy consumption of the trimmed first neural network model is not lower than the energy consumption threshold, determining that an untrimmed layer exists.
10. An electronic device, comprising:
a housing;
a processor configured to: obtain each layer of a neural network model; calculate the energy consumption of each layer according to a first computation rule; select a first layer whose energy consumption satisfies a first condition, and trim a first weight set in the first layer, the first weight set comprising at least one weight; the first condition being that the energy consumption of the first layer is higher than the energy consumption of the layer with the lowest energy consumption.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811639238.3A CN109634401B (en) | 2018-12-29 | 2018-12-29 | Control method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109634401A true CN109634401A (en) | 2019-04-16 |
CN109634401B CN109634401B (en) | 2023-05-02 |
Family
ID=66055120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811639238.3A Active CN109634401B (en) | 2018-12-29 | 2018-12-29 | Control method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109634401B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070244842A1 * | 2004-06-03 | 2007-10-18 | Mie Ishii | Information processing method and apparatus, and image pickup device |
CN106779075A * | 2017-02-16 | 2017-05-31 | Nanjing University | Neural network improved by a pruning method, for use in a computer |
CN107368885A * | 2017-07-13 | 2017-11-21 | Beijing Icetech Science & Technology Co., Ltd. | Network model compression method and device based on multi-granularity pruning |
US20180114114A1 * | 2016-10-21 | 2018-04-26 | Nvidia Corporation | Systems and methods for pruning neural networks for resource efficient inference |
CN108229681A * | 2017-12-28 | 2018-06-29 | Zhengzhou Yunhai Information Technology Co., Ltd. | Neural network model compression method, system, device, and readable storage medium |
CN109063835A * | 2018-07-11 | 2018-12-21 | University of Science and Technology of China | Neural network compression apparatus and method |
2018-12-29: application CN201811639238.3A filed in China; granted as CN109634401B (status: active)
Non-Patent Citations (3)
Title |
---|
Pavlo Molchanov et al.: "Pruning Convolutional Neural Networks for Resource Efficient Inference", arXiv * |
Duan Binghuan et al.: "Research on deep neural network compression methods for embedded applications", Aeronautical Computing Technique * |
Pu Xingcheng et al.: "A hybrid pruning algorithm for neural networks based on saliency analysis", CAAI Transactions on Intelligent Systems * |
Also Published As
Publication number | Publication date |
---|---|
CN109634401B (en) | 2023-05-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107845389A | | Speech enhancement method based on multi-resolution auditory cepstral coefficients and a deep convolutional neural network |
Al-Fattah et al. | | Predicting natural gas production using artificial neural network |
CN109523017A | | Deep neural network compression method, apparatus, device, and storage medium |
WO2019223250A1 | | Pruning threshold determination method and device, as well as model pruning method and device |
CN107817891A | | Screen control method, device, equipment, and storage medium |
CN107680586A | | Far-field speech acoustic model training method and system |
KR20160032536A | | Signal process algorithm integrated deep neural network based speech recognition apparatus and optimization learning method thereof |
CN110047512A | | Ambient sound classification method, system, and related device |
CN110428137A | | Method and device for updating a risk-control strategy |
CN109447461A | | User credit evaluation method and device, electronic device, and storage medium |
CN108573399A | | Merchant recommendation method and system based on a transition-probability network |
CN113705864A | | Meteorological drought prediction method and device based on a VMD-CNN-BiLSTM-ATT hybrid model |
CN106097043A | | Credit data processing method and server |
CN109460613A | | Model pruning method and device |
CN110149333A | | Network security situation assessment method based on SAE+BPNN |
CN112861518B | | Text error correction method and device, storage medium, and electronic device |
CN108960530A | | Prediction method for crop field vegetation coverage index based on a long short-term memory network |
CN107122825A | | Activation function generation method for a neural network model |
CN115829024B | | Model training method, device, equipment, and storage medium |
US8755533B2 | | Automatic performance optimization for perceptual devices |
CN109886333A | | Data augmentation method based on high-dimensional space sampling |
CN116051388A | | Automatic photo editing via language request |
Wen et al. | | Using empirical wavelet transform to speed up selective filtered active noise control system |
CN108566537A | | Image processing apparatus for performing neural network operations on video frames |
CN109634401A | | Control method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |