CN109754077A - Network model compression method and apparatus for deep neural network, and computer device - Google Patents

Network model compression method and apparatus for deep neural network, and computer device

Info

Publication number
CN109754077A
CN109754077A (application CN201711092273.3A)
Authority
CN
China
Prior art keywords
operation unit
network
neural network
importance
deleted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711092273.3A
Other languages
Chinese (zh)
Other versions
CN109754077B (en)
Inventor
张渊
陈伟杰
谢迪
浦世亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201711092273.3A priority Critical patent/CN109754077B/en
Priority to PCT/CN2018/114357 priority patent/WO2019091401A1/en
Publication of CN109754077A publication Critical patent/CN109754077A/en
Application granted
Publication of CN109754077B publication Critical patent/CN109754077B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

Embodiments of the present invention provide a network model compression method and apparatus for a deep neural network, and a computer device. The network model compression method for a deep neural network includes: obtaining an original deep neural network; analyzing the importance of each operation unit in a network layer of the original deep neural network, and determining the operation units in the network layer whose importance is below a preset importance as operation units to be deleted; and deleting the operation units to be deleted of each network layer in the original deep neural network, to obtain a deep neural network whose network model is compressed. This solution can improve the efficiency of target recognition and target detection.

Description

Network model compression method and apparatus for deep neural network, and computer device
Technical field
The present invention relates to the technical field of data processing, and in particular to a network model compression method and apparatus for a deep neural network, and a computer device.
Background technique
A DNN (Deep Neural Network) is an emerging field in machine learning research. It parses data by imitating the mechanism of the human brain, and is an intelligent model that performs analytic learning by building and simulating the human brain. Popular DNNs currently include the CNN (Convolutional Neural Network), the RNN (Recurrent Neural Network), the LSTM (Long Short-Term Memory network), and so on. Because a DNN can quickly and accurately recognize and detect targets through the operations of the multiple network layers in its network model, it is widely used in fields such as target detection and segmentation, and behavior detection and recognition.
With the development of target recognition and target detection techniques, target features have become increasingly complex, and more and more target features need to be extracted. As a result, in the design of a DNN network model, the numbers of network layers and of operation units in each network layer are increasing significantly, which raises the computational complexity of target recognition and target detection; moreover, the large number of network layers and operation units consumes excessive memory and bandwidth resources, affecting the efficiency of target recognition and target detection.
Summary of the invention
Embodiments of the present invention aim to provide a network model compression method and apparatus for a deep neural network, and a computer device, so as to improve the efficiency of target recognition and target detection. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides a network model compression method for a deep neural network, the method including:
obtaining an original deep neural network;
analyzing the importance of each operation unit in a network layer of the original deep neural network, and determining the operation units in the network layer whose importance is below a preset importance as operation units to be deleted; and
deleting the operation units to be deleted of each network layer in the original deep neural network, to obtain a deep neural network whose network model is compressed.
Optionally, analyzing the importance of each operation unit in the network layer of the original deep neural network and determining the operation units in the network layer whose importance is below the preset importance as operation units to be deleted includes:
extracting the weight absolute values of each operation unit in the network layer of the original deep neural network;
configuring a corresponding importance for each operation unit according to the weight absolute values of the operation units in the network layer; and
based on the importance of each operation unit, determining the operation units whose importance is below the preset importance as operation units to be deleted.
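As a concrete illustration of the three optional steps above, the following minimal Python sketch (not part of the patent) configures an importance from each unit's weight absolute values and selects the units below a preset importance; the layer data, the mean-absolute-weight scoring, and the threshold value are invented for illustration only.

```python
# Sketch of the weight-absolute-value variant of the first aspect, where each
# "operation unit" (e.g. a filter or neuron) is a plain list of weights.

def unit_importance(weights):
    """Configure an importance score from the unit's weight absolute values
    (here: the mean absolute weight)."""
    return sum(abs(w) for w in weights) / len(weights)

def units_to_delete(layer, preset_importance):
    """Return indices of units whose importance is below the preset value."""
    return [i for i, unit in enumerate(layer)
            if unit_importance(unit) < preset_importance]

# Example layer with four units; units 1 and 3 have small weights.
layer = [[0.9, -1.1], [0.05, -0.02], [1.3, 0.7], [0.01, 0.03]]
print(units_to_delete(layer, preset_importance=0.1))  # → [1, 3]
```

Units 1 and 3 fall below the preset importance of 0.1 and would be marked for deletion; the remaining units survive the compression step.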
Optionally, before analyzing the importance of each operation unit in the network layer of the original deep neural network and determining the operation units in the network layer whose importance is below the preset importance as operation units to be deleted, the method further includes:
analyzing the network layer of the original deep neural network using a rank analysis tool, to obtain a first number of operation units that can be deleted from the network layer under the condition that a preset error tolerance is satisfied;
and analyzing the importance of each operation unit in the network layer of the original deep neural network and determining the operation units in the network layer whose importance is below the preset importance as operation units to be deleted includes:
analyzing the importance of each operation unit in the network layer of the original deep neural network, to obtain the importance of each operation unit in the network layer; and
selecting the first number of operation units in ascending order of the importance of the operation units in the network layer, and taking the selected operation units as operation units to be deleted.
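The count-based selection above can be sketched in a few lines of Python (not part of the patent): given per-unit importance scores and a first number obtained from the rank analysis, pick that many units in ascending order of importance. The scores and count below are invented for illustration; `first_number` would normally come from the rank-analysis step.

```python
# Sketch of selecting the "first number" of least-important units.

def select_to_delete(importances, first_number):
    """Pick the `first_number` units with the lowest importance scores."""
    order = sorted(range(len(importances)), key=lambda i: importances[i])
    return sorted(order[:first_number])

# Five units with illustrative importance scores; rank analysis says 2 can go.
importances = [0.9, 0.03, 1.2, 0.05, 0.4]
print(select_to_delete(importances, first_number=2))  # → [1, 3]
```

Units 1 and 3 have the two smallest scores and become the operation units to be deleted, regardless of any fixed threshold.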
Optionally, after deleting the operation units to be deleted of each network layer in the original deep neural network to obtain the deep neural network whose network model is compressed, the method further includes:
obtaining an output result produced by performing operations with the compressed deep neural network; and
if the output result does not satisfy a preset condition, adjusting the weights in the operation units of each network layer of the compressed deep neural network through a preset algorithm, using the difference between the output result of the original deep neural network and the output result of the compressed deep neural network, until the output result satisfies the preset condition.
Optionally, after deleting the operation units to be deleted of each network layer in the original deep neural network to obtain the deep neural network whose network model is compressed, the method further includes:
obtaining the correlation between the operation units of any network layer in the compressed deep neural network;
judging whether the correlation is below a preset correlation; and
if not, adjusting the weights in the operation units of that network layer using a preset regularization term, and stopping the adjustment of the weights in the operation units once the correlation falls below the preset correlation.
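The correlation check above can be sketched with a simple Pearson correlation between two units' weight vectors (a plausible reading; the patent does not fix the correlation measure or the regularization term). The weight vectors and the 0.99 threshold below are illustrative.

```python
# Sketch of measuring the correlation between two operation units; a value
# at or above the preset correlation would trigger the regularized weight
# adjustment described in the text.

def pearson(u, v):
    """Pearson correlation coefficient of two equal-length weight vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

unit_a = [1.0, 2.0, 3.0]
unit_b = [2.1, 3.9, 6.2]   # nearly a scaled copy of unit_a: redundant
print(pearson(unit_a, unit_b) > 0.99)  # → True
```

Two nearly collinear units are redundant: one adds little information beyond the other, which is exactly the situation the regularization step is meant to break up.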
In a second aspect, an embodiment of the present invention provides a network model compression apparatus for a deep neural network, the apparatus including:
a first obtaining module, configured to obtain an original deep neural network;
a first determining module, configured to analyze the importance of each operation unit in a network layer of the original deep neural network, and determine the operation units in the network layer whose importance is below a preset importance as operation units to be deleted; and
a deleting module, configured to delete the operation units to be deleted of each network layer in the original deep neural network, to obtain a deep neural network whose network model is compressed.
Optionally, the first determining module is specifically configured to:
extract the weight absolute values of each operation unit in the network layer of the original deep neural network;
configure a corresponding importance for each operation unit according to the weight absolute values of the operation units in the network layer; and
based on the importance of each operation unit, determine the operation units whose importance is below the preset importance as operation units to be deleted.
Optionally, the apparatus further includes:
an analyzing module, configured to analyze the network layer of the original deep neural network using a rank analysis tool, to obtain a first number of operation units that can be deleted from the network layer under the condition that a preset error tolerance is satisfied;
and the first determining module is specifically configured to:
analyze the importance of each operation unit in the network layer of the original deep neural network, to obtain the importance of each operation unit in the network layer; and
select the first number of operation units in ascending order of the importance of the operation units in the network layer, and take the selected operation units as operation units to be deleted.
Optionally, the apparatus further includes:
a second obtaining module, configured to obtain an output result produced by performing operations with the compressed deep neural network; and
a first adjusting module, configured to, if the output result does not satisfy a preset condition, adjust the weights in the operation units of each network layer of the compressed deep neural network through a preset algorithm, using the difference between the output result of the original deep neural network and the output result of the compressed deep neural network, until the output result satisfies the preset condition.
Optionally, the apparatus further includes:
a third obtaining module, configured to obtain the correlation between the operation units of any network layer in the compressed deep neural network;
a judging module, configured to judge whether the correlation is below a preset correlation; and
a second adjusting module, configured to, if the judgment result of the judging module is no, adjust the weights in the operation units of that network layer using a preset regularization term, and stop the adjustment of the weights in the operation units once the correlation falls below the preset correlation.
In a third aspect, an embodiment of the present invention provides a computer device, including a processor and a memory, wherein
the memory is configured to store a computer program; and
the processor is configured to, when executing the program stored in the memory, implement the method steps described in the first aspect.
With the network model compression method and apparatus for a deep neural network and the computer device provided by embodiments of the present invention, the importance of each operation unit in a network layer of an obtained original deep neural network is analyzed, the operation units in the network layer whose importance is below a preset importance are determined as operation units to be deleted, the operation units to be deleted of each network layer in the original deep neural network are thereby obtained, and these operation units are deleted to obtain a deep neural network whose network model is compressed. Because the importance of an operation unit to be deleted is below the preset importance, its influence on the results of target recognition and target detection is relatively small; deleting the operation units to be deleted therefore does not affect the recognition and detection of targets. In this way, deleting the operation units to be deleted of each network layer compresses the network model of the deep neural network, reduces the computational complexity of target recognition and target detection, and reduces the consumption of memory and bandwidth resources, thereby improving the efficiency of target recognition and target detection.
Detailed description of the invention
In order to describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a network model compression method for a deep neural network according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a network model compression method for a deep neural network according to another embodiment of the present invention;
Fig. 3 is a schematic flowchart of a network model compression method for a deep neural network according to still another embodiment of the present invention;
Fig. 4 is a schematic flowchart of a network model compression method for a deep neural network according to yet another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a network model compression apparatus for a deep neural network according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a network model compression apparatus for a deep neural network according to another embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a network model compression apparatus for a deep neural network according to still another embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a network model compression apparatus for a deep neural network according to yet another embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In order to improve the efficiency of target detection, embodiments of the present invention provide a network model compression method and apparatus for a deep neural network, and a computer device. The network model compression method for a deep neural network provided by the embodiments of the present invention is introduced first.
The execution subject of the network model compression method for a deep neural network provided by the embodiments of the present invention may be a computer device that implements functions such as image classification, speech recognition, and target detection, a camera with functions such as image classification and target detection, or a microphone with a speech recognition function; at a minimum, the execution subject includes a core processor chip with data processing capability. The network model compression method for a deep neural network provided by the embodiments of the present invention may be implemented by at least one of software, a hardware circuit, and a logic circuit provided in the execution subject.
As shown in Fig. 1, a network model compression method for a deep neural network provided by an embodiment of the present invention may include the following steps.
S101: obtaining an original deep neural network.
The original deep neural network is a deep neural network that implements target recognition and target detection functions such as image classification, speech recognition, and target detection, i.e., a deep neural network designed for the target features to be recognized and detected. By obtaining the original deep neural network, its network model can be obtained: the network layers of the original deep neural network, the operation units of each network layer, and the network parameters of each network layer, where the network parameters include the number of operation units contained in the network layer and the specific values in each operation unit. In current target recognition and target detection techniques, target features are complex and many features need to be extracted; consequently, the network model structure of the original deep neural network is complex, the numbers of network layers and of operation units in each network layer are large, and the large number of network layers and operation units consumes excessive memory and bandwidth resources, making the computational complexity of target recognition and target detection high. Therefore, in the embodiments of the present invention, the original deep neural network needs to be analyzed, and its network model compressed, so as to reduce the computational complexity and thereby improve the efficiency of target recognition and target detection.
S102: analyzing the importance of each operation unit in a network layer of the original deep neural network, and determining the operation units in the network layer whose importance is below a preset importance as operation units to be deleted.
The operation units in a network layer of the original deep neural network may be used to extract different target features. For example, a network layer of a deep neural network for face recognition may contain an operation unit for extracting eye features, an operation unit for extracting nose features, an operation unit for extracting ear features, an operation unit for extracting the face contour, and so on. In practice, during feature extraction some features have a large influence on the results of target recognition and target detection, while others have essentially no influence on those results. For example, in a deep neural network for face recognition, eye features, nose features, ear features, and the like have a large influence on the result: if these features are not extracted, a face cannot be correctly detected and recognized. By contrast, features such as hair color, whether glasses are worn, and whether earrings are worn have a relatively small influence on the result: if these features are not extracted, the result of detecting and recognizing the face is not affected.
The importance of each operation unit in a network layer of the original deep neural network can be analyzed by analyzing the degree to which each operation unit influences the results of target recognition and target detection: the stronger an operation unit's influence on those results, the higher its importance. The influence degree can be a property parameter characterizing importance, such as the weight of each operation unit. During sample training, the weights of the operation units of the deep neural network are continually adjusted. Taking face recognition as an example, after sample training the weight absolute values of the operation units for extracting features such as eyes, nose, and ears are greater than those of the operation units for extracting features such as hair color, glasses, and earrings; this indicates that the former influence the face recognition result more strongly than the latter, i.e., the importance of the operation units for extracting features such as eyes, nose, and ears is higher. As another example, the influence degree can be obtained, through feature extraction and analysis of the feature elements, as the proportion that the feature elements (pixels and the like) extracted by each operation unit occupy among the total elements of the target to be recognized and detected. Still taking face recognition as an example, the elements of features such as eyes, nose, and ears extracted by an operation unit occupy a larger proportion of the target's total elements than the elements of features such as hair color, glasses, and earrings; this indicates that the operation units for extracting features such as eyes, nose, and ears influence the face recognition result more strongly than those for extracting features such as hair color, glasses, and earrings, i.e., their importance is higher. By analyzing the importance of each operation unit, the importance of each operation unit can be obtained; for example, a corresponding importance can be configured according to each operation unit's degree of influence on the results of target recognition and target detection.
After the importance of each operation unit is obtained, it can be compared with the preset importance; if the importance is below the preset importance, the operation unit is determined as an operation unit to be deleted. The preset importance is a preset significance level for operation units, generally set according to the influence of the features of the target to be recognized and detected on target recognition and target detection. For example, suppose importance is divided into a first importance, a second importance, a third importance, and a fourth importance, ordered by decreasing influence on target recognition and target detection: the first importance is higher than the second, the second higher than the third, and the third higher than the fourth. Suppose the features extracted by the operation units corresponding to the first, second, and third importance are indispensable for the target to be recognized and detected — without them the target cannot be correctly recognized and detected — while the features extracted by the operation units of the fourth importance have little influence on the final recognition and detection results. Then the third importance can be set as the preset importance, and an operation unit whose importance is the fourth importance, being below the preset importance, can be determined as an operation unit to be deleted. The preset importance can also be determined after the number of operation units that can be deleted from the network layer is obtained through analysis. For example, if analysis shows that 5 operation units of a certain network layer can be deleted and the layer contains 12 operation units in total, the smallest importance among the remaining 7 operation units can be taken as the preset importance; in general, the importance of each of the 5 deletable operation units is below this preset importance, so the 5 operation units whose importance is below the preset importance can be determined as operation units to be deleted. By executing step S102 for each network layer in the original deep neural network, the operation units to be deleted of each network layer can be obtained.
Illustratively, taking the extraction of the weight absolute values of each operation unit as an example of the way the importance of each operation unit in a network layer of the original deep neural network is analyzed, the step of determining the operation units to be deleted may include:
First, extracting the weight absolute values of each operation unit in the network layer of the original deep neural network.
Second, configuring a corresponding importance for each operation unit according to the weight absolute values of the operation units in the network layer.
Third, based on the importance of each operation unit, determining the operation units whose importance is below the preset importance as operation units to be deleted.
In a network layer of the original deep neural network, the weight absolute values of each operation unit represent that unit's degree of influence on the results of target recognition and target detection: the larger the weight absolute value, the stronger the unit's influence on those results. Therefore, a corresponding importance can be configured for each operation unit according to its weight absolute values. Specifically, the weight absolute value can be used directly as the importance, or weight absolute values falling within certain intervals can be mapped to a high, medium, or low importance, with importance proportional to weight absolute value — the larger the weight absolute value, the higher the importance. Of course, importance can also be divided more finely as required, for example into a first importance, a second importance, a third importance, a fourth importance, and so on. Based on the importance of each operation unit, the operation units whose importance is below the preset importance can then be determined as operation units to be deleted; for example, if the preset importance is the medium importance, the operation units of low importance can be determined as operation units to be deleted.
S103: deleting the operation units to be deleted of each network layer in the original deep neural network, to obtain a deep neural network whose network model is compressed.
The operation units to be deleted of each network layer in the original deep neural network are the operation units that have a small influence on the results of target recognition and target detection. Because their influence on those results is small, the operation units to be deleted of each network layer can be deleted directly from the original deep neural network. In this way, the network model of the deep neural network is compressed without affecting the results of target recognition and target detection, which reduces the computational complexity of target recognition and target detection and the consumption of memory and bandwidth resources, thereby improving the efficiency of target recognition and target detection.
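Step S103 can be sketched in a few lines of Python (not part of the patent): each layer is a plain list of units, and the marked units are simply dropped. A real framework would additionally remove the matching input channels of the following layer; that bookkeeping is omitted here.

```python
# Minimal sketch of S103: remove the to-be-deleted units of every layer,
# yielding the compressed network model.

def compress(network, to_delete):
    """Remove the marked unit indices from each layer of the network."""
    return [[unit for i, unit in enumerate(layer) if i not in dele]
            for layer, dele in zip(network, to_delete)]

# Two layers with 3 and 2 units; unit 1 of each layer is marked for deletion.
network = [[[0.9], [0.01], [1.1]], [[0.5], [0.02]]]
compressed = compress(network, to_delete=[{1}, {1}])
print([len(layer) for layer in compressed])  # → [2, 1]
```

The compressed model keeps 2 of 3 units in the first layer and 1 of 2 in the second, so both its memory footprint and its operation count shrink.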
With this embodiment, the importance of each operation unit in a network layer of the obtained original deep neural network is analyzed, the operation units in the network layer whose importance is below the preset importance are determined as operation units to be deleted, the operation units to be deleted of each network layer in the original deep neural network are thereby obtained, and deleting them yields a deep neural network whose network model is compressed. Because the importance of an operation unit to be deleted is below the preset importance, its influence on the results of target recognition and target detection is relatively small; deleting the operation units to be deleted therefore does not affect the recognition and detection of targets. In this way, deleting the operation units to be deleted of each network layer compresses the network model of the deep neural network, reduces the computational complexity of target recognition and target detection, and reduces the consumption of memory and bandwidth resources, thereby improving the efficiency of target recognition and target detection.
Based on the embodiment shown in Fig. 1, an embodiment of the present invention further provides a network model compression method for a deep neural network. As shown in Fig. 2, the network model compression method for a deep neural network may include the following steps.
S201: obtaining an original deep neural network.
S202: analyzing the network layer of the original deep neural network using a rank analysis tool, to obtain a first number of operation units that can be deleted from the network layer under the condition that a preset error tolerance is satisfied.
For the i-th network layer Layer_i of the original deep neural network, which has m_i operation units, the rank of the matrix formed by the m_i operation units characterizes how many important operation units Layer_i contains. For example, if rank analysis shows that the rank of the matrix formed by the m_i operation units is 3, while the actual number of operation units in Layer_i is 8, then Layer_i contains 3 important operation units and 5 unimportant ones, so the maximum number of deletable operation units is 5. To keep the recognition and detection results within a certain error range, the number of operation units to be deleted must be determined based on a preset error tolerance ε. For example, if deleting 4 operation units would make the resulting error exceed ε, while deleting 3 keeps the error below ε, the first number of operation units to be deleted can be determined as 3. Normally, to simplify the network structure of the deep neural network as much as possible, the first number is determined as the maximum number of operation units that can be deleted while the preset error tolerance is satisfied. Of course, the first number may also be smaller than this maximum — in the above example it could be determined as 2 or 1 — which still simplifies the network structure and therefore also falls within the protection scope of the embodiments of the present invention. Illustratively, the rank analysis tool may be PCA (Principal Component Analysis); in fact, any method capable of analyzing the rank of a matrix may serve as the rank analysis tool, and they are not enumerated one by one here.
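A hedged sketch of how such a rank analysis might determine the first number: the layer's m_i operation units are stacked as rows of a matrix, and the error of discarding units is approximated by the relative energy of the discarded singular values (a PCA-style criterion; the embodiment does not fix the exact error measure, so this criterion is an assumption of the sketch).

```python
import numpy as np

def deletable_units(layer_matrix, eps):
    """Maximum number of operation units that can be deleted while the
    relative energy of the discarded singular values stays below eps."""
    s = np.linalg.svd(layer_matrix, compute_uv=False)
    total = np.sum(s ** 2)
    # tail[k] = relative energy carried by singular values s[k:].
    tail = np.cumsum((s ** 2)[::-1])[::-1] / total
    rank_needed = next(k for k in range(len(s) + 1)
                       if k == len(s) or tail[k] <= eps)
    return layer_matrix.shape[0] - rank_needed

# An 8-unit layer whose rows span only a 3-dimensional subspace,
# matching the example: rank 3 out of 8 actual operation units.
rng = np.random.default_rng(0)
basis = rng.standard_normal((3, 6))
coeffs = rng.standard_normal((8, 3))
layer = coeffs @ basis
print(deletable_units(layer, eps=1e-6))  # 5: five of the eight units are redundant
```

A tighter tolerance ε keeps more singular values and therefore allows fewer deletions, which mirrors the trade-off between error and compression described above.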
S203: analyze the importance of each operation unit in the network layer to obtain the importance of each operation unit in that layer.
The importance of each operation unit in the network layer can be analyzed according to step S102 of the embodiment shown in Fig. 1 to obtain the importance of each operation unit in the layer, which is not repeated here.
S204: select n_i operation units in ascending order of importance among the operation units of the network layer, and take these n_i operation units as the operation units to be deleted, where n_i is the first number of operation units to be deleted in network layer Layer_i.
After the first number of operation units to be deleted has been determined and the importance of each operation unit in the network layer has been obtained, the operation units with the lowest importance can be determined as the operation units to be deleted. For network layer Layer_i, if the number of operation units to be deleted is n_i and the total number of operation units is m_i, the n_i operation units with the lowest importance are determined as the operation units to be deleted, so that Layer_i finally has m_i − n_i operation units. For example, suppose the first number is 3 and the 10 operation units of the i-th network layer Layer_i, ordered by increasing importance, are: the fifth, second, seventh, first, eighth, tenth, sixth, third, fourth, and ninth operation units. After the first number and the importance ordering are determined, the preset importance can be set to the importance of the first operation unit, so that the three units with lower importance — the fifth, second, and seventh operation units — are determined as the operation units to be deleted.
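The selection in S204 can be sketched as follows; the importance scores are illustrative values chosen only to reproduce the ordering of the example above.

```python
import numpy as np

# Illustrative importance scores for the 10 operation units of Layer_i
# (index 0 is the first unit; unit 5 is least important, unit 9 most).
importance = np.array([0.20, 0.10, 0.40, 0.45, 0.05,
                       0.35, 0.15, 0.25, 0.50, 0.30])
n_i = 3  # first number of operation units to be deleted

# Indices of the n_i units with the lowest importance, ascending.
to_delete = np.argsort(importance)[:n_i]
print([i + 1 for i in to_delete])  # [5, 2, 7]: the fifth, second, seventh units
```

Setting the preset importance to the score of the (n_i + 1)-th lowest unit (here the first unit, 0.20) makes the threshold formulation of Fig. 1 and the fixed-count formulation of S204 select exactly the same units.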
S205: delete the operation units to be deleted in each network layer of the original deep neural network to obtain the deep neural network after network model compression.
With this embodiment, the importance of each operation unit in each network layer of the acquired original deep neural network is analyzed, the operation units whose importance is lower than the preset importance are determined as operation units to be deleted, and the operation units to be deleted in each network layer are then deleted, yielding the deep neural network after network model compression. The first number of operation units to be deleted is determined by the rank analysis tool, and the preset importance can be set according to the first number and the importance of each operation unit. Because the importance of an operation unit to be deleted is lower than the preset importance, its influence on the results of target recognition and target detection is relatively small, so deleting the operation units to be deleted does not affect the recognition and detection of targets. By deleting the operation units to be deleted in each network layer, the network model of the deep neural network is compressed, the computational complexity of target recognition and target detection is reduced, and memory and bandwidth consumption is reduced, thereby improving the efficiency of target recognition and target detection. Moreover, since exactly the first number of lowest-importance operation units is deleted, the deep neural network after deletion satisfies the preset error tolerance condition, which ensures that the error of the target recognition and target detection results stays within a certain range and that the results have high accuracy.
Based on the embodiment shown in Fig. 1, an embodiment of the present invention further provides a network model compression method for a deep neural network. As shown in Fig. 3, the method may include the following steps:
S301: obtain an original deep neural network.
S302: analyze the importance of each operation unit in a network layer of the original deep neural network, and determine the operation units whose importance is lower than a preset importance in that layer as operation units to be deleted.
S303: delete the operation units to be deleted in each network layer of the original deep neural network to obtain the deep neural network after network model compression.
S304: obtain an output result of performing an operation using the deep neural network after network model compression.
S305: if the output result does not satisfy a preset condition, adjust, through a preset algorithm, the weights in the operation units of each network layer of the compressed deep neural network by using the difference between the output result of the original deep neural network and the output result of the compressed deep neural network, until the output result satisfies the preset condition.
Because the operation units are not entirely uncorrelated with one another, deleting some operation units may affect the feature extraction performance of the remaining units, so that the output result obtained with the compressed deep neural network fails to satisfy the preset condition. Here, the preset condition is the effect that target recognition and target detection need to achieve; that is, there may be a certain deviation between the actual output result and the desired effect. To reduce this deviation, the difference between the output result of the original deep neural network and that of the compressed deep neural network can be used to adjust, through a preset algorithm, the weights in the operation units of each network layer of the compressed network, until the output result of the adjusted network satisfies the preset condition. The preset algorithm may be a common back-propagation gradient algorithm, such as the BP algorithm, which is not elaborated here.
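A minimal sketch of such an adjustment, assuming a single linear layer and plain gradient descent on the squared difference between the two output results as the preset algorithm (the embodiment names the BP algorithm as one possibility; this single-layer setup and the learning rate are simplifications of the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((64, 5))       # sample inputs
w_orig = rng.standard_normal((5, 3))
y_orig = x @ w_orig                    # output result of the original network

# Compressed network: perturbed weights stand in for the effect that
# deleting operation units has on the remaining units.
w = w_orig + 0.5 * rng.standard_normal((5, 3))

lr = 0.1
for _ in range(500):
    diff = x @ w - y_orig              # difference between the two outputs
    grad = x.T @ diff / len(x)         # gradient of 0.5 * mean squared diff
    w -= lr * grad                     # back-propagation-style weight update

print(np.abs(x @ w - y_orig).max() < 1e-6)  # True: outputs match again
```

The stopping rule here (output difference below a fixed bound) plays the role of the preset condition in S305.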
With this embodiment, the importance of each operation unit in each network layer of the acquired original deep neural network is analyzed, the operation units whose importance is lower than a preset importance are determined as operation units to be deleted, and the operation units to be deleted in each network layer are then deleted, yielding the deep neural network after network model compression. Because the importance of an operation unit to be deleted is lower than the preset importance, its influence on the results of target recognition and target detection is relatively small, so deleting the operation units to be deleted does not affect the recognition and detection of targets. In this way, by deleting the operation units to be deleted in each network layer, the network model of the deep neural network is compressed, the computational complexity of target recognition and target detection is reduced, and memory and bandwidth consumption is reduced, thereby improving the efficiency of target recognition and target detection. Moreover, if the output result obtained with the compressed deep neural network does not satisfy the preset condition, the weights in the operation units are adjusted through the preset algorithm using the difference between the output result of the original deep neural network and that of the compressed deep neural network, until the output result satisfies the preset condition. This effectively avoids the situation in which correlation between operation units causes the output result to fall short of the desired effect, and ensures the accuracy of the target recognition and target detection results.
Based on the embodiment shown in Fig. 1, an embodiment of the present invention further provides a network model compression method for a deep neural network. As shown in Fig. 4, the method may include the following steps:
S401: obtain an original deep neural network.
S402: analyze the importance of each operation unit in a network layer of the original deep neural network, and determine the operation units whose importance is lower than a preset importance in that layer as operation units to be deleted.
S403: delete the operation units to be deleted in each network layer of the original deep neural network to obtain the deep neural network after network model compression.
S404: obtain the correlation between the operation units of any network layer in the compressed deep neural network.
S405: judge whether the correlation is lower than a preset correlation; if so, execute S406; otherwise, execute S407.
S406: stop adjusting the weights in the operation units.
S407: adjust the weights in the operation units of the network layer using a preset regularization term.
After the operation units to be deleted have been removed, a relatively high correlation may still exist between the operation units of a network layer of the compressed deep neural network. When the correlation is high, a certain redundancy remains between the operation units, degrading the performance of the network model. If the correlation between the operation units is greater than or equal to the preset correlation, the layer still contains considerable redundancy and its structure is not sufficiently simplified; in that case, a preset regularization term, for example an orthogonal regularization term, can be applied to adjust the operation units of the layer until the correlation drops below the preset correlation. If the original deep neural network uses, for example, a conventional L2 regularization term, it can be replaced with the preset regularization term, such as the orthogonal regularization term, so as to reduce the correlation between the operation units.
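One common form of orthogonal regularization term — assumed here, since the embodiment names the term without defining it — penalizes the deviation of the unit-to-unit Gram matrix from the identity, so the penalty is zero exactly when the operation units (rows) are uncorrelated:

```python
import numpy as np

def orth_penalty(w):
    """Orthogonal regularization term ||W_n W_n^T - I||_F^2 on
    row-normalized weights W_n: zero iff the operation units (rows)
    of the layer are mutually uncorrelated."""
    wn = w / np.linalg.norm(w, axis=1, keepdims=True)
    gram = wn @ wn.T
    return np.sum((gram - np.eye(len(w))) ** 2)

# Two highly correlated operation units vs. two orthogonal ones:
correlated = np.array([[1.0, 0.0, 0.1],
                       [0.9, 0.0, 0.2]])
orthogonal = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
print(orth_penalty(correlated) > orth_penalty(orthogonal))  # True
```

Adding this penalty to the training loss (in place of, say, a conventional L2 term) pushes the weight updates toward layers whose remaining operation units carry less redundant information.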
With this embodiment, the importance of each operation unit in each network layer of the acquired original deep neural network is analyzed, the operation units whose importance is lower than a preset importance are determined as operation units to be deleted, and the operation units to be deleted in each network layer are then deleted, yielding the deep neural network after network model compression. Because the importance of an operation unit to be deleted is lower than the preset importance, its influence on the results of target recognition and target detection is relatively small, so deleting the operation units to be deleted does not affect the recognition and detection of targets. In this way, by deleting the operation units to be deleted in each network layer, the network model of the deep neural network is compressed, the computational complexity of target recognition and target detection is reduced, and memory and bandwidth consumption is reduced, thereby improving the efficiency of target recognition and target detection. Moreover, the correlation between the operation units of a network layer in the compressed deep neural network is judged; if it is greater than or equal to the preset correlation, the weights in the operation units of that layer are adjusted using the preset regularization term until the correlation falls below the preset correlation. This effectively reduces the influence of redundancy on result precision and ensures the precision of the target recognition and target detection results.
Based on the embodiments shown in Fig. 3 and Fig. 4, an embodiment of the present invention further provides a network model compression method for a deep neural network that may include all the steps of both embodiments; that is, the weights in the operation units are not adjusted merely with the preset regularization term — the output result is also monitored, and the weights are adjusted when the output result does not satisfy the preset condition, thereby meeting the high-precision and high-accuracy requirements of target recognition and target detection results. This is not elaborated here.
Corresponding to the above method embodiments, an embodiment of the present invention provides a network model compression apparatus for a deep neural network. As shown in Fig. 5, the apparatus may include:
a first obtaining module 510, configured to obtain an original deep neural network;
a first determining module 520, configured to analyze the importance of each operation unit in a network layer of the original deep neural network, and determine the operation units whose importance is lower than a preset importance in that layer as operation units to be deleted;
a deleting module 530, configured to delete the operation units to be deleted in each network layer of the original deep neural network to obtain the deep neural network after network model compression.
Optionally, the first determining module 520 may be specifically configured to:
extract the weight absolute values of the operation units in the network layer of the original deep neural network;
configure the importance of each operation unit according to its weight absolute value in the network layer;
determine, based on the importance of each operation unit, the operation units whose importance is lower than the preset importance as operation units to be deleted.
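The three sub-steps of the first determining module 520 can be sketched as follows; taking the mean of the absolute values as the importance mapping and 0.1 as the preset importance are assumptions of the sketch, since the embodiment leaves both open.

```python
import numpy as np

# Hypothetical 3x4 weight matrix: one row per operation unit.
w = np.array([[ 0.8, -0.6,  0.5, -0.7],
              [ 0.02, 0.01, -0.03, 0.02],
              [-0.4,  0.5, -0.6,  0.3]])

# Sub-step 1: extract the weight absolute values.
abs_w = np.abs(w)
# Sub-step 2: configure each unit's importance from its absolute values.
importance = abs_w.mean(axis=1)
# Sub-step 3: units below the preset importance become units to be deleted.
preset = 0.1
to_delete = np.where(importance < preset)[0]
print(to_delete.tolist())  # [1]: only the second unit is deleted
```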
With this embodiment, the importance of each operation unit in each network layer of the acquired original deep neural network is analyzed, the operation units whose importance is lower than a preset importance are determined as operation units to be deleted, and the operation units to be deleted in each network layer are then deleted, yielding the deep neural network after network model compression. Because the importance of an operation unit to be deleted is lower than the preset importance, its influence on the results of target recognition and target detection is relatively small, so deleting the operation units to be deleted does not affect the recognition and detection of targets. In this way, by deleting the operation units to be deleted in each network layer, the network model of the deep neural network is compressed, the computational complexity of target recognition and target detection is reduced, and memory and bandwidth consumption is reduced, thereby improving the efficiency of target recognition and target detection.
Based on the embodiment shown in Fig. 5, an embodiment of the present invention further provides a network model compression apparatus for a deep neural network. As shown in Fig. 6, the apparatus may include:
a first obtaining module 610, configured to obtain an original deep neural network;
an analysis module 620, configured to analyze a network layer of the original deep neural network using a rank analysis tool to obtain, under the condition that a preset error tolerance is satisfied, a first number of operation units to be deleted in that layer;
a first determining module 630, configured to analyze the importance of each operation unit in the network layer of the original deep neural network to obtain the importance of each operation unit in the layer, select the first number of operation units in ascending order of importance, and take the selected operation units as operation units to be deleted;
a deleting module 640, configured to delete the operation units to be deleted in each network layer of the original deep neural network to obtain the deep neural network after network model compression.
With this embodiment, the importance of each operation unit in each network layer of the acquired original deep neural network is analyzed, the operation units whose importance is lower than the preset importance are determined as operation units to be deleted, and the operation units to be deleted in each network layer are then deleted, yielding the deep neural network after network model compression. The first number of operation units to be deleted is determined by the rank analysis tool, and the preset importance can be set according to the first number and the importance of each operation unit. Because the importance of an operation unit to be deleted is lower than the preset importance, its influence on the results of target recognition and target detection is relatively small, so deleting the operation units to be deleted does not affect the recognition and detection of targets. By deleting the operation units to be deleted in each network layer, the network model is compressed, the computational complexity of target recognition and target detection is reduced, and memory and bandwidth consumption is reduced, improving the efficiency of target recognition and target detection. Moreover, since exactly the first number of lowest-importance operation units is deleted, the deep neural network after deletion satisfies the preset error tolerance condition, which ensures that the error of the target recognition and target detection results stays within a certain range and that the results have high accuracy.
Based on the embodiment shown in Fig. 5, an embodiment of the present invention further provides a network model compression apparatus for a deep neural network. As shown in Fig. 7, the apparatus may include:
a first obtaining module 710, configured to obtain an original deep neural network;
a first determining module 720, configured to analyze the importance of each operation unit in a network layer of the original deep neural network, and determine the operation units whose importance is lower than a preset importance in that layer as operation units to be deleted;
a deleting module 730, configured to delete the operation units to be deleted in each network layer of the original deep neural network to obtain the deep neural network after network model compression;
a second obtaining module 740, configured to obtain an output result of performing an operation using the deep neural network after network model compression;
a first adjusting module 750, configured to, if the output result does not satisfy a preset condition, adjust, through a preset algorithm, the weights in the operation units of each network layer of the compressed deep neural network by using the difference between the output result of the original deep neural network and the output result of the compressed deep neural network, until the output result satisfies the preset condition.
With this embodiment, the importance of each operation unit in each network layer of the acquired original deep neural network is analyzed, the operation units whose importance is lower than the preset importance are determined as operation units to be deleted, and the operation units to be deleted in each network layer are then deleted, yielding the deep neural network after network model compression. Because the importance of an operation unit to be deleted is lower than the preset importance, its influence on the results of target recognition and target detection is relatively small, so deleting the operation units to be deleted does not affect the recognition and detection of targets. By deleting the operation units to be deleted in each network layer, the network model is compressed, the computational complexity of target recognition and target detection is reduced, and memory and bandwidth consumption is reduced, improving the efficiency of target recognition and target detection. Moreover, if the output result obtained with the compressed deep neural network does not satisfy the preset condition, the weights in the operation units are adjusted through the preset algorithm using the difference between the output result of the original deep neural network and that of the compressed deep neural network, until the output result satisfies the preset condition. This effectively avoids the situation in which correlation between operation units causes the output result to fall short of the desired effect, and ensures the accuracy of the target recognition and target detection results.
Based on the embodiment shown in Fig. 5, an embodiment of the present invention further provides a network model compression apparatus for a deep neural network. As shown in Fig. 8, the apparatus may include:
a first obtaining module 810, configured to obtain an original deep neural network;
a first determining module 820, configured to analyze the importance of each operation unit in a network layer of the original deep neural network, and determine the operation units whose importance is lower than a preset importance in that layer as operation units to be deleted;
a deleting module 830, configured to delete the operation units to be deleted in each network layer of the original deep neural network to obtain the deep neural network after network model compression;
a third obtaining module 840, configured to obtain the correlation between the operation units of any network layer in the deep neural network after network model compression;
a judging module 850, configured to judge whether the correlation is lower than a preset correlation;
a second adjusting module 860, configured to, if the judgment result of the judging module 850 is no, adjust the weights in the operation units of the network layer using a preset regularization term, and stop adjusting the weights when the correlation falls below the preset correlation.
With this embodiment, the importance of each operation unit in each network layer of the acquired original deep neural network is analyzed, the operation units whose importance is lower than the preset importance are determined as operation units to be deleted, and the operation units to be deleted in each network layer are then deleted, yielding the deep neural network after network model compression. Because the importance of an operation unit to be deleted is lower than the preset importance, its influence on the results of target recognition and target detection is relatively small, so deleting the operation units to be deleted does not affect the recognition and detection of targets. By deleting the operation units to be deleted in each network layer, the network model is compressed, the computational complexity of target recognition and target detection is reduced, and memory and bandwidth consumption is reduced, improving the efficiency of target recognition and target detection. Moreover, the correlation between the operation units of a network layer in the compressed deep neural network is judged; if it is greater than or equal to the preset correlation, the weights in the operation units of that layer are adjusted using the preset regularization term until the correlation falls below the preset correlation. This effectively reduces the influence of redundancy on result precision and ensures the precision of the target recognition and target detection results.
Based on the embodiments shown in Fig. 7 and Fig. 8, an embodiment of the present invention further provides a network model compression apparatus for a deep neural network that may include all the modules of both embodiments, so as to meet the high-precision and high-accuracy requirements of target recognition and target detection results. This is not elaborated here.
An embodiment of the present invention further provides a computer device. As shown in Fig. 9, the device includes a processor 901 and a memory 902, where:
the memory 902 is configured to store a computer program;
the processor 901 is configured to implement the following steps when executing the program stored in the memory 902:
obtaining an original deep neural network;
analyzing the importance of each operation unit in a network layer of the original deep neural network, and determining the operation units whose importance is lower than a preset importance in that layer as operation units to be deleted;
deleting the operation units to be deleted in each network layer of the original deep neural network to obtain the deep neural network after network model compression.
Optionally, when implementing the step of analyzing the importance of each operation unit in the network layer of the original deep neural network and determining the operation units whose importance is lower than the preset importance in the network layer as operation units to be deleted, the processor 901 may specifically implement:
extracting the weight absolute values of the operation units in the network layer of the original deep neural network;
configuring the importance of each operation unit according to its weight absolute value in the network layer;
determining, based on the importance of each operation unit, the operation units whose importance is lower than the preset importance as operation units to be deleted.
Optionally, the processor 901 may further implement:
analyzing the network layer of the original deep neural network using a rank analysis tool to obtain, under the condition that a preset error tolerance is satisfied, a first number of operation units to be deleted in the network layer.
In this case, when implementing the step of analyzing the importance of each operation unit in the network layer of the original deep neural network and determining the operation units whose importance is lower than the preset importance as operation units to be deleted, the processor 901 may specifically implement:
analyzing the importance of each operation unit in the network layer of the original deep neural network to obtain the importance of each operation unit in the network layer;
selecting the first number of operation units in ascending order of importance among the operation units of the network layer, and taking the selected operation units as operation units to be deleted.
Optionally, the processor 901 can also be realized:
Obtain the output result that operation is carried out using the compressed deep neural network of the network model;
If the output result is unsatisfactory for preset condition, using the original depth neural network output result with Difference between the output result of the compressed deep neural network of network model, by preset algorithm, to the network Weight in deep neural network after model compression in the arithmetic element of each network layer is adjusted, until the output result Meet the preset condition.
Optionally, the processor 901 may further implement:
Obtaining the correlation degree between the arithmetic elements of any network layer in the network-model-compressed deep neural network;
Judging whether the correlation degree is less than a preset correlation degree;
If not, adjusting the weights in the arithmetic elements of the network layer by using a preset regularization term, and stopping the adjustment of the weights when the correlation degree becomes less than the preset correlation degree.
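The patent fixes neither the correlation measure nor the regularization term. One plausible reading of the correlation degree, the mean absolute pairwise correlation between the units' weight vectors, together with the judgment step, can be sketched as follows (all names are illustrative):

```python
import numpy as np

def correlation_degree(layer_weights):
    """Correlation degree of a layer: mean absolute pairwise correlation
    between the weight vectors of its arithmetic elements. High values
    indicate redundant, near-duplicate elements."""
    c = np.corrcoef(layer_weights)           # rows = arithmetic elements
    off_diag = c[~np.eye(len(c), dtype=bool)]
    return float(np.mean(np.abs(off_diag)))

def needs_decorrelation(layer_weights, preset_correlation):
    """The judgment step: True means the weights should keep being
    adjusted with the preset regularization term."""
    return correlation_degree(layer_weights) >= preset_correlation

redundant = np.array([[1.0, 2.0, 3.0, 4.0],
                      [1.1, 2.1, 3.1, 4.1]])   # near-duplicate elements
diverse = np.array([[1.0, -1.0, 1.0, -1.0],
                    [1.0, 1.0, -1.0, -1.0]])   # uncorrelated elements
```

The near-duplicate pair triggers further adjustment; the uncorrelated pair passes the judgment and the adjustment stops.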
The above memory may include a RAM (Random Access Memory) and may also include an NVM (Non-Volatile Memory), for example, at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the above processor.
The above processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In this embodiment, by reading the computer program stored in the memory and running the computer program, the processor of the computer equipment can realize the following: since the importance degree of an arithmetic element to be deleted is lower than the preset importance degree, that is, its influence on the results of target recognition and target detection is relatively small, deleting the arithmetic elements to be deleted does not affect the recognition and detection of targets. In this way, by deleting the arithmetic elements to be deleted of each network layer, the network model of the deep neural network is compressed, the computational complexity of target recognition and target detection is reduced, and the consumption of memory and bandwidth resources is reduced, thereby improving the efficiency of target recognition and target detection.
In addition, corresponding to the network model compression method for a deep neural network provided by the above embodiments, an embodiment of the present invention provides a computer-readable storage medium for storing a computer program; when the computer program is executed by a processor, the steps of the above network model compression method for a deep neural network are implemented.
In this embodiment, the computer-readable storage medium stores an application program that, at runtime, executes the network model compression method for a deep neural network provided by the embodiments of the present invention, and therefore can realize the following: since the importance degree of an arithmetic element to be deleted is lower than the preset importance degree, that is, its influence on the results of target recognition and target detection is relatively small, deleting the arithmetic elements to be deleted does not affect the recognition and detection of targets. In this way, by deleting the arithmetic elements to be deleted of each network layer, the network model of the deep neural network is compressed, achieving the purposes of reducing the computational complexity of target recognition and target detection and reducing the consumption of memory and bandwidth resources, thereby improving the efficiency of target recognition and target detection.
As for the computer equipment and computer-readable storage medium embodiments, since the method content involved is substantially similar to the foregoing method embodiments, the description is relatively simple; for related details, refer to the description of the method embodiments.
It should be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the sentence "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
Each embodiment in this specification is described in a related manner, and the same or similar parts between the embodiments may refer to each other; each embodiment focuses on its differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple; for related details, refer to the description of the method embodiment.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (11)

1. A network model compression method for a deep neural network, characterized in that the method comprises:
obtaining an original deep neural network;
analyzing the importance degree of each arithmetic element in a network layer of the original deep neural network, and determining the arithmetic elements in the network layer whose importance degree is lower than a preset importance degree as arithmetic elements to be deleted;
deleting the arithmetic elements to be deleted of each network layer in the original deep neural network to obtain a network-model-compressed deep neural network.
2. The method according to claim 1, characterized in that the analyzing the importance degree of each arithmetic element in the network layer of the original deep neural network and determining the arithmetic elements in the network layer whose importance degree is lower than the preset importance degree as the arithmetic elements to be deleted comprises:
extracting the weight absolute value of each arithmetic element in the network layer of the original deep neural network;
according to the weight absolute value of each arithmetic element in the network layer, configuring a corresponding importance degree for each arithmetic element;
based on the importance degree of each arithmetic element, determining the arithmetic elements whose importance degree is lower than the preset importance degree as the arithmetic elements to be deleted.
3. The method according to claim 1, characterized in that before the analyzing the importance degree of each arithmetic element in the network layer of the original deep neural network and determining the arithmetic elements in the network layer whose importance degree is lower than the preset importance degree as the arithmetic elements to be deleted, the method further comprises:
using a rank analysis tool, analyzing the network layer of the original deep neural network to obtain, under the condition that a preset error tolerance is satisfied, a first number of arithmetic elements to be deleted in the network layer;
the analyzing the importance degree of each arithmetic element in the network layer of the original deep neural network and determining the arithmetic elements in the network layer whose importance degree is lower than the preset importance degree as the arithmetic elements to be deleted comprises:
analyzing the importance degree of each arithmetic element in the network layer of the original deep neural network to obtain the importance degree of each arithmetic element in the network layer;
selecting the first number of arithmetic elements in ascending order of importance degree, and taking the selected arithmetic elements as the arithmetic elements to be deleted.
4. The method according to claim 1, characterized in that after the deleting the arithmetic elements to be deleted of each network layer in the original deep neural network to obtain the network-model-compressed deep neural network, the method further comprises:
obtaining an output result of performing operations with the network-model-compressed deep neural network;
if the output result does not satisfy a preset condition, adjusting, by a preset algorithm, the weights in the arithmetic elements of each network layer of the network-model-compressed deep neural network according to the difference between the output result of the original deep neural network and the output result of the network-model-compressed deep neural network, until the output result satisfies the preset condition.
5. The method according to claim 1, characterized in that after the deleting the arithmetic elements to be deleted of each network layer in the original deep neural network to obtain the network-model-compressed deep neural network, the method further comprises:
obtaining the correlation degree between the arithmetic elements of any network layer in the network-model-compressed deep neural network;
judging whether the correlation degree is less than a preset correlation degree;
if not, adjusting the weights in the arithmetic elements of the network layer by using a preset regularization term, and stopping the adjustment of the weights when the correlation degree becomes less than the preset correlation degree.
6. A network model compression apparatus for a deep neural network, characterized in that the apparatus comprises:
a first obtaining module, configured to obtain an original deep neural network;
a first determining module, configured to analyze the importance degree of each arithmetic element in a network layer of the original deep neural network, and determine the arithmetic elements in the network layer whose importance degree is lower than a preset importance degree as arithmetic elements to be deleted;
a deleting module, configured to delete the arithmetic elements to be deleted of each network layer in the original deep neural network to obtain a network-model-compressed deep neural network.
7. The apparatus according to claim 6, characterized in that the first determining module is specifically configured to:
extract the weight absolute value of each arithmetic element in the network layer of the original deep neural network;
according to the weight absolute value of each arithmetic element in the network layer, configure a corresponding importance degree for each arithmetic element;
based on the importance degree of each arithmetic element, determine the arithmetic elements whose importance degree is lower than the preset importance degree as the arithmetic elements to be deleted.
8. The apparatus according to claim 6, characterized in that the apparatus further comprises:
an analysis module, configured to analyze, using a rank analysis tool, the network layer of the original deep neural network to obtain, under the condition that a preset error tolerance is satisfied, a first number of arithmetic elements to be deleted in the network layer;
the first determining module is specifically configured to:
analyze the importance degree of each arithmetic element in the network layer of the original deep neural network to obtain the importance degree of each arithmetic element in the network layer;
select the first number of arithmetic elements in ascending order of importance degree, and take the selected arithmetic elements as the arithmetic elements to be deleted.
9. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a second obtaining module, configured to obtain an output result of performing operations with the network-model-compressed deep neural network;
a first adjusting module, configured to, if the output result does not satisfy a preset condition, adjust, by a preset algorithm, the weights in the arithmetic elements of each network layer of the network-model-compressed deep neural network according to the difference between the output result of the original deep neural network and the output result of the network-model-compressed deep neural network, until the output result satisfies the preset condition.
10. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a third obtaining module, configured to obtain the correlation degree between the arithmetic elements of any network layer in the network-model-compressed deep neural network;
a judging module, configured to judge whether the correlation degree is less than a preset correlation degree;
a second adjusting module, configured to, if the judging result of the judging module is no, adjust the weights in the arithmetic elements of the network layer by using a preset regularization term, and stop the adjustment of the weights in the arithmetic elements when the correlation degree becomes less than the preset correlation degree.
11. A computer equipment, characterized in that it comprises a processor and a memory, wherein:
the memory is configured to store a computer program;
the processor is configured to, when executing the program stored on the memory, implement the method steps of any one of claims 1-5.
CN201711092273.3A 2017-11-08 2017-11-08 Network model compression method and device of deep neural network and computer equipment Active CN109754077B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711092273.3A CN109754077B (en) 2017-11-08 2017-11-08 Network model compression method and device of deep neural network and computer equipment
PCT/CN2018/114357 WO2019091401A1 (en) 2017-11-08 2018-11-07 Network model compression method and apparatus for deep neural network, and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711092273.3A CN109754077B (en) 2017-11-08 2017-11-08 Network model compression method and device of deep neural network and computer equipment

Publications (2)

Publication Number Publication Date
CN109754077A true CN109754077A (en) 2019-05-14
CN109754077B CN109754077B (en) 2022-05-06

Family

ID=66402063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711092273.3A Active CN109754077B (en) 2017-11-08 2017-11-08 Network model compression method and device of deep neural network and computer equipment

Country Status (2)

Country Link
CN (1) CN109754077B (en)
WO (1) WO2019091401A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418086B (en) * 2021-12-02 2023-02-28 北京百度网讯科技有限公司 Method and device for compressing neural network model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6269351B1 (en) * 1999-03-31 2001-07-31 Dryken Technologies, Inc. Method and system for training an artificial neural network
CN1945602A (en) * 2006-07-07 2007-04-11 华中科技大学 Characteristic selecting method based on artificial nerve network
CN103778467A (en) * 2014-01-16 2014-05-07 天津大学 Power system transient state stability estimation inputted characteristic quantity selection method
CN106650928A (en) * 2016-10-11 2017-05-10 广州视源电子科技股份有限公司 Method and device for optimizing neural network
CN106779068A (en) * 2016-12-05 2017-05-31 北京深鉴智能科技有限公司 The method and apparatus for adjusting artificial neural network
CN107248144A (en) * 2017-04-27 2017-10-13 东南大学 A kind of image de-noising method based on compression-type convolutional neural networks

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106297778A (en) * 2015-05-21 2017-01-04 中国科学院声学研究所 The neutral net acoustic model method of cutting out based on singular value decomposition of data-driven
CN106127297B (en) * 2016-06-02 2019-07-12 中国科学院自动化研究所 The acceleration of depth convolutional neural networks based on tensor resolution and compression method
CN106203376B (en) * 2016-07-19 2020-04-10 北京旷视科技有限公司 Face key point positioning method and device
CN106355210B (en) * 2016-09-14 2019-03-19 华北电力大学(保定) Insulator Infrared Image feature representation method based on depth neuron response modes
CN111860826A (en) * 2016-11-17 2020-10-30 北京图森智途科技有限公司 Image data processing method and device of low-computing-capacity processing equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HIROTAKA INOUE: "Self-Organizing Neural Grove: Efficient Multiple Classifier System with Pruned Self-Generating Neural Trees", International Conference on Artificial Neural Networks *
SONG HAN, ET AL: "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", arXiv preprint arXiv:1510.00149v5 *
SONG HAN, ET AL: "Learning both Weights and Connections for Efficient Neural Networks", arXiv preprint arXiv:1506.02626v3 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598731A (en) * 2019-07-31 2019-12-20 浙江大学 Efficient image classification method based on structured pruning
CN110598731B (en) * 2019-07-31 2021-08-20 浙江大学 Efficient image classification method based on structured pruning
CN110650370A (en) * 2019-10-18 2020-01-03 北京达佳互联信息技术有限公司 Video coding parameter determination method and device, electronic equipment and storage medium
CN110650370B (en) * 2019-10-18 2021-09-24 北京达佳互联信息技术有限公司 Video coding parameter determination method and device, electronic equipment and storage medium
CN114692816A (en) * 2020-12-31 2022-07-01 华为技术有限公司 Processing method and equipment of neural network model
CN114692816B (en) * 2020-12-31 2023-08-25 华为技术有限公司 Processing method and equipment of neural network model

Also Published As

Publication number Publication date
WO2019091401A1 (en) 2019-05-16
CN109754077B (en) 2022-05-06

Similar Documents

Publication Publication Date Title
US11176418B2 (en) Model test methods and apparatuses
CN109754077A (en) Network model compression method, device and the computer equipment of deep neural network
CN106982359B (en) A kind of binocular video monitoring method, system and computer readable storage medium
CN107358157A (en) A kind of human face in-vivo detection method, device and electronic equipment
CN109376615A (en) For promoting the method, apparatus and storage medium of deep learning neural network forecast performance
CN112801057B (en) Image processing method, image processing device, computer equipment and storage medium
CN108985159A (en) Human-eye model training method, eye recognition method, apparatus, equipment and medium
CN111008640A (en) Image recognition model training and image recognition method, device, terminal and medium
CN109543826A (en) A kind of activation amount quantization method and device based on deep neural network
CN105303179A (en) Fingerprint identification method and fingerprint identification device
CN108986075A (en) A kind of judgment method and device of preferred image
CN109583561A (en) A kind of the activation amount quantization method and device of deep neural network
CN113095370A (en) Image recognition method and device, electronic equipment and storage medium
CN109919296A (en) A kind of deep neural network training method, device and computer equipment
CN112434556A (en) Pet nose print recognition method and device, computer equipment and storage medium
CN111931179A (en) Cloud malicious program detection system and method based on deep learning
CN110070106A (en) Smog detection method, device and electronic equipment
CN110287767A (en) Can attack protection biopsy method, device, computer equipment and storage medium
CN108875519A (en) Method for checking object, device and system and storage medium
CN111291773A (en) Feature identification method and device
CN108875500A (en) Pedestrian recognition methods, device, system and storage medium again
CN113642360A (en) Behavior timing method and device, electronic equipment and storage medium
CN108596094B (en) Character style detection system, method, terminal and medium
CN110458600A (en) Portrait model training method, device, computer equipment and storage medium
CN108875536A (en) Pedestrian's analysis method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant