CN108229679A - Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium - Google Patents

Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium

Info

Publication number
CN108229679A
Authority
CN
China
Prior art keywords
convolution kernel
neural network
convolutional neural network
pruning
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711183838.9A
Other languages
Chinese (zh)
Inventor
杨成熙
孙文秀
庞家昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201711183838.9A priority Critical patent/CN108229679A/en
Publication of CN108229679A publication Critical patent/CN108229679A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Abstract

The embodiment of the invention discloses a convolutional neural network de-redundancy (redundancy removal) method and device, an electronic device and a storage medium. The method includes: for an initial convolutional neural network, determining the similarity between convolution kernels; pruning the convolutional layers of the initial neural network according to the similarity between the convolution kernels; and training the pruned convolutional neural network to obtain a target convolutional neural network. The embodiment of the present invention can improve the running speed of convolutional neural networks.

Description

Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium
Technical field
The present invention relates to the field of deep learning technology, and in particular to a convolutional neural network de-redundancy method and device, an electronic device, and a storage medium.
Background technology
A convolutional neural network (Convolutional Neural Network, CNN) is a type of feedforward neural network whose artificial neurons respond to surrounding units within part of their coverage area (their receptive field), which makes it particularly suitable for large-scale image processing.
Invention content
Embodiments of the present invention provide a technical solution for removing redundancy from convolutional neural networks.
According to one aspect of the embodiments of the present invention, a convolutional neural network de-redundancy method is provided, including: for an initial convolutional neural network, determining the similarity between convolution kernels; pruning the convolutional layers of the initial neural network according to the similarity between the convolution kernels; and training the pruned convolutional neural network to obtain a target convolutional neural network.
In an optional mode, the method further includes: training to obtain the initial convolutional neural network. Training the initial convolutional neural network includes: determining a network structure that meets a preset accuracy standard; and training the network on data samples to obtain an initial convolutional neural network that conforms to the network structure.
In an optional mode, pruning the convolutional layers of the initial neural network according to the similarity between the convolution kernels includes: comparing the similarity between the convolution kernels of each convolutional layer with a preset pruning similarity threshold; and eliminating the convolution kernels whose similarity is higher than the pruning similarity threshold.
In an optional mode, calculating, for the initial convolutional neural network, the similarity between the convolution kernels includes: for the initial convolutional neural network, calculating the Euclidean distance or cosine distance between the convolution kernels; and determining the similarity between the convolution kernels from those Euclidean distances or cosine distances.
In an optional mode, eliminating the convolution kernels whose similarity is higher than the pruning similarity threshold includes: eliminating the convolution kernels whose Euclidean distance is smaller than a preset pruning Euclidean-distance threshold, or eliminating the convolution kernels whose cosine distance is smaller than a preset pruning cosine-distance threshold.
In an optional mode, calculating, for the initial convolutional neural network, the Euclidean distance or cosine distance between the convolution kernels includes: for each convolutional layer in the initial convolutional neural network, determining the number of output channels of the feature map according to the convolution kernel size of the layer; forming M vectors from the parameters of the output channels; and calculating the Euclidean distance between every two of the M vectors as the Euclidean distance between the convolution kernels, or calculating the cosine distance between every two of the M vectors as the cosine distance between the convolution kernels.
In an optional mode, pruning the convolutional layers of the convolutional neural network according to the similarity between the convolution kernels includes: obtaining a first matrix from the Euclidean distances between every two of the M vectors; comparing the Euclidean-distance values in the first matrix with a preset pruning Euclidean-distance threshold; and, in the first matrix, setting the positions greater than the pruning Euclidean-distance threshold to 0 and the positions smaller than the pruning Euclidean-distance threshold to 1 to obtain a second matrix. Alternatively: obtaining the first matrix from the cosine distances between every two of the M vectors; comparing the cosine-distance values in the first matrix with a preset pruning cosine-distance threshold; and, in the first matrix, setting the positions greater than the pruning cosine-distance threshold to 0 and the positions smaller than the pruning cosine-distance threshold to 1 to obtain the second matrix.
In an optional mode, training the pruned convolutional neural network to obtain the target convolutional neural network includes: building a connection graph from the values of the elements of the second matrix; computing the total number of connected components in the connection graph with a connected-component algorithm; taking that total as the number of output channels of the corresponding convolutional layer and averaging the vectors within each connected component as the new parameters of that layer; and training with the new parameters to obtain the target convolutional neural network.
In an optional mode, training with the new parameters to obtain the target convolutional neural network includes: testing, in real time, the speed and accuracy of the convolutional neural network currently obtained by training; comparing the tested speed and/or accuracy with a target speed threshold and/or a target accuracy threshold; and, if the target speed and/or target accuracy requirement is met, determining that training is complete and the target convolutional neural network has been obtained.
According to another aspect of the embodiments of the present invention, a convolutional neural network de-redundancy device is provided, including: a similarity determining unit, configured to determine, for an initial convolutional neural network, the similarity between convolution kernels; a pruning unit, configured to prune the convolutional layers of the initial neural network according to the similarity between the convolution kernels; and a target training unit, configured to train the pruned convolutional neural network to obtain a target convolutional neural network.
In an optional mode, the device further includes: an initial training unit, configured to train and obtain the initial convolutional neural network. The initial training unit is specifically configured to: determine a network structure that meets a preset accuracy standard, and train the network on data samples to obtain an initial convolutional neural network that conforms to the network structure.
In an optional mode, the pruning unit is specifically configured to: compare the similarity between the convolution kernels of each convolutional layer with a preset pruning similarity threshold, and eliminate the convolution kernels whose similarity is higher than the pruning similarity threshold.
In an optional mode, the similarity determining unit includes: a distance computing subunit, configured to calculate, for the initial convolutional neural network, the Euclidean distance or cosine distance between the convolution kernels; and a similarity determining subunit, configured to determine the similarity between the convolution kernels from the Euclidean distances or cosine distances between them.
In an optional mode, the pruning unit is specifically configured to: eliminate the convolution kernels whose Euclidean distance is smaller than a preset pruning Euclidean-distance threshold, or eliminate the convolution kernels whose cosine distance is smaller than a preset pruning cosine-distance threshold.
In an optional mode, the distance computing subunit is specifically configured to: for each convolutional layer in the initial convolutional neural network, determine the number of output channels of the feature map according to the convolution kernel size of the layer; form M vectors from the parameters of the output channels; and calculate the Euclidean distance between every two of the M vectors as the Euclidean distance between the convolution kernels, or calculate the cosine distance between every two of the M vectors as the cosine distance between the convolution kernels.
In an optional mode, the pruning unit includes: a first-matrix computing subunit, configured to obtain a first matrix from the Euclidean distances between every two of the M vectors; and a second-matrix computing subunit, configured to compare the Euclidean-distance values in the first matrix with a preset pruning Euclidean-distance threshold and, in the first matrix, set the positions greater than the pruning Euclidean-distance threshold to 0 and the positions smaller than it to 1, obtaining a second matrix. Alternatively, the first-matrix computing subunit is configured to obtain the first matrix from the cosine distances between every two of the M vectors, and the second-matrix computing subunit is configured to compare the cosine-distance values in the first matrix with a preset pruning cosine-distance threshold and, in the first matrix, set the positions greater than the pruning cosine-distance threshold to 0 and the positions smaller than it to 1, obtaining the second matrix.
In an optional mode, the target training unit includes: a connection-graph building subunit, configured to build a connection graph from the values of the elements of the second matrix and compute the total number of connected components in the connection graph with a connected-component algorithm; a new-parameter determining subunit, configured to take that total as the number of output channels of the corresponding convolutional layer and average the vectors within each connected component as the new parameters of that layer; and a retraining subunit, configured to train with the new parameters to obtain the target convolutional neural network.
In an optional mode, the retraining subunit is specifically configured to: test, in real time, the speed and accuracy of the convolutional neural network currently obtained by training; compare the tested speed and/or accuracy with a target speed threshold and/or a target accuracy threshold; and, if the target speed and/or target accuracy requirement is met, determine that training is complete and the target convolutional neural network has been obtained.
According to another aspect of the embodiments of the present invention, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the steps of any of the above convolutional neural network de-redundancy methods are implemented.
According to another aspect of the embodiments of the present invention, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of any of the above convolutional neural network de-redundancy methods are implemented.
In the convolutional neural network de-redundancy method provided by the above embodiments of the present invention, the similarity of the convolution kernels in a convolutional neural network is analyzed and the redundant kernels with high similarity are pruned away. Because what is removed is the redundant kernels whose similarity is high, the network size can be reduced and the running speed of the network improved without affecting the accuracy of the overall convolutional neural network.
The technical solutions of the present invention are described in further detail below through the drawings and embodiments.
Description of the drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
The present invention can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of one embodiment of the convolutional neural network de-redundancy method of the present invention.
Fig. 2 is a flowchart of another embodiment of the convolutional neural network de-redundancy method of the present invention.
Fig. 3 is a schematic diagram of the principle of the convolutional neural network de-redundancy method of the present invention.
Fig. 4 is a structural block diagram of one embodiment of the convolutional neural network de-redundancy device of the present invention.
Fig. 5 is a structural block diagram of another embodiment of the convolutional neural network de-redundancy device of the present invention.
Fig. 6 is a structural block diagram of one embodiment of the electronic device of the present invention.
Specific embodiment
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention. It should also be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present invention or its application or use. Techniques, methods, and devices known to a person of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such techniques, methods, and devices should be considered part of the specification. It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be discussed further in subsequent drawings.
Convolutional neural networks are generally computationally heavy. How to reduce the amount of computation of a model while keeping its performance almost unchanged is a major issue for applying convolutional neural networks in practice.
Simply changing the number of channels of the feature map output by each convolutional layer, i.e. reducing the number of convolution kernels in each layer, can reduce the computation of the whole network. But which convolutional layers to reduce, by how much, and how to reduce them without greatly degrading model performance is the key problem in pruning. Many existing methods, however, reduce the number of channels blindly, without analyzing each convolutional layer sufficiently to decide how the reduction should be done.
Fig. 1 is a flowchart of one embodiment of the convolutional neural network de-redundancy method of the present invention. As shown in Fig. 1, the method of this embodiment includes S101-S103.
S101: For an initial convolutional neural network, determine the similarity between convolution kernels.
The initial convolutional neural network is the convolutional neural network before pruning. For example, a network structure that meets a preset accuracy standard is first determined; the network is then trained on data samples using a training platform to obtain an initial convolutional neural network that conforms to that structure. When the initial convolutional neural network is trained, the network structure is often designed without considering the amount of computation, so the initial convolutional neural network obtained by training is accurate but not fast enough.
The similarity between convolution kernels can be understood as the correlation of the parameter values of the kernels. A convolution kernel holds the weights used during convolution and is represented by a matrix whose size matches the image region it is applied to, with odd numbers of rows and columns; it is a weight matrix. The similarity between convolution kernels can therefore be represented by the correlation of the parameter values of their weight matrices.
S102: Prune the convolutional layers of the initial convolutional neural network according to the similarity between the convolution kernels.
A pruning similarity threshold can be preset, and whether the convolution kernels of a convolutional layer are pruned is decided by comparing the similarity between the kernels with this threshold. For example, the similarity between the convolution kernels is compared with the preset pruning similarity threshold; the convolution kernels whose similarity is higher than the pruning similarity threshold are eliminated, and the convolution kernels whose similarity is lower than the pruning similarity threshold are retained.
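As a concrete illustration only (the function name, array shapes, and threshold value below are assumptions rather than text from the patent), a minimal numpy sketch of this compare-and-eliminate step for a single layer could look as follows; the patent's detailed procedure, based on a pairwise distance matrix and connected components, is described in the Fig. 2 embodiment below.

    import numpy as np

    def prune_by_similarity(weights, threshold):
        """Keep one representative of each group of mutually similar kernels.

        weights:   array of shape (M, i, k, k), one convolution kernel per
                   output channel (the layout is an illustrative assumption).
        threshold: preset pruning similarity threshold (cosine similarity here).
        Returns the indices of the output channels that are retained.
        """
        M = weights.shape[0]
        vecs = weights.reshape(M, -1).astype(np.float64)
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12
        kept = []
        for m in range(M):
            # Eliminate kernel m if it is too similar to a kernel already kept.
            if all(np.dot(vecs[m], vecs[j]) <= threshold for j in kept):
                kept.append(m)
        return kept

    # Toy layer with 8 output channels, 4 input channels, 3x3 kernels
    rng = np.random.default_rng(0)
    print(prune_by_similarity(rng.standard_normal((8, 4, 3, 3)), threshold=0.9))

Here the cosine similarity of the flattened weight matrices stands in for the similarity measure; the embodiment below derives the same decision from Euclidean or cosine distances.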
S103: Train the pruned convolutional neural network to obtain the target convolutional neural network.
After the convolution kernels of the redundant convolutional layers are eliminated according to kernel similarity, a portion of the kernels has been deleted, so the number of convolution kernels is reduced. The convolutional neural network is then trained again accordingly, yielding a smaller target convolutional neural network. Because its scale is reduced, the target convolutional neural network can guarantee processing efficiency, and because what has been eliminated is redundant convolution kernels, it can maintain a high accuracy rate.
It can be seen that, in the convolutional neural network de-redundancy method provided by the embodiment of the present invention, the similarity of the convolution kernels in the convolutional neural network is analyzed and the redundant kernels with high similarity are pruned away. Because what is removed is the kernels with high similarity, i.e. the redundancy in the network, the network size can be reduced and the running speed of the network improved without affecting the accuracy of the overall convolutional neural network.
Fig. 2 is a flowchart of another embodiment of the convolutional neural network de-redundancy method of the present invention. This embodiment specifically illustrates how the similarity between convolution kernels is determined from the Euclidean distance or cosine distance between them.
S201: For the initial convolutional neural network, calculate the Euclidean distance or cosine distance between the convolution kernels, and determine the similarity between the convolution kernels from those Euclidean distances or cosine distances.
The initial convolutional neural network is the convolutional neural network before pruning. For example, a network structure that meets a preset accuracy standard is first determined; the network is then trained on data samples using a training platform to obtain an initial convolutional neural network that conforms to that structure. When the initial convolutional neural network is trained, the network structure is often designed without considering the amount of computation, so the initial convolutional neural network obtained by training is accurate but not fast enough.
In this embodiment, the similarity between convolution kernels is determined by calculating the Euclidean distance or cosine distance between them. It can be understood that the larger the Euclidean distance between two convolution kernels, the lower their similarity; similarly, the larger the cosine distance between two convolution kernels, the lower their similarity.
The Euclidean distance (Euclidean metric) is a commonly used distance definition: it is the actual distance between two points in m-dimensional space, or the natural length of a vector (i.e. the distance from the point to the origin). In two and three dimensions, the Euclidean distance is simply the actual distance between the two points. The cosine distance, related to cosine similarity, uses the cosine of the angle between two vectors in a vector space as a measure of the difference between the two individuals. Compared with the Euclidean distance, the cosine distance focuses more on the difference between the two vectors in direction.
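Written as code (a small illustration with assumed function names, not part of the patent text), the two measures for flattened kernel vectors are:

    import numpy as np

    def euclidean_distance(a, b):
        """Straight-line (L2) distance between two flattened kernel vectors."""
        a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
        return float(np.linalg.norm(a - b))

    def cosine_distance(a, b):
        """1 minus the cosine of the angle between the vectors; it grows as
        the two kernels point in increasingly different directions."""
        a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return float(1.0 - cos)

    # Either distance being small means the two kernels are highly similar.
    print(euclidean_distance([1.0, 0.0], [0.0, 1.0]),   # about 1.414
          cosine_distance([1.0, 0.0], [0.0, 1.0]))      # 1.0 (orthogonal)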
In one implementation, the Euclidean distance or cosine distance between the convolution kernels is calculated as follows:
S2011: For each convolutional layer in the initial convolutional neural network, determine the number of output channels M of the feature map according to the convolution kernel size of the layer;
S2012: Form M vectors from the parameters of the output channels;
S2013: Calculate the Euclidean distance between every two of the M vectors as the Euclidean distance between the convolution kernels, or calculate the cosine distance between every two of the M vectors as the cosine distance between the convolution kernels.
For example, for each convolutional layer in the initial neural network, the N parameters of each output channel form one of M corresponding vectors, where N = (k*k*i+1); i is the number of channels of the feature map input to the convolutional layer, M is the number of channels of the feature map output by the layer, k is the size of the layer's convolution kernels, and the "+1" in N = (k*k*i+1) refers to the layer's additional bias parameter. The pairwise Euclidean or cosine distances between these M vectors are calculated to obtain a first matrix of size M*M (for example, matrix A).
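A minimal numpy sketch of this step follows (the function name `build_distance_matrix` and the array layout are assumptions; the patent does not prescribe an implementation): each output channel contributes one vector of its k*k*i weights plus its bias, and the pairwise distances between these M vectors form the M*M matrix A.

    import numpy as np

    def build_distance_matrix(weights, bias, use_cosine=False):
        """First matrix A for one convolutional layer.

        weights: shape (M, i, k, k), M output channels, i input channels,
                 k x k kernels (illustrative layout).
        bias:    shape (M,), the "+1" parameter of each output channel.
        Each channel becomes a vector of length N = k*k*i + 1; A[x, y] is the
        Euclidean (or cosine) distance between the vectors of channels x and y.
        """
        M = weights.shape[0]
        vectors = np.concatenate([weights.reshape(M, -1), bias.reshape(M, 1)], axis=1)
        A = np.zeros((M, M))
        for x in range(M):
            for y in range(M):
                if use_cosine:
                    denom = np.linalg.norm(vectors[x]) * np.linalg.norm(vectors[y]) + 1e-12
                    A[x, y] = 1.0 - np.dot(vectors[x], vectors[y]) / denom
                else:
                    A[x, y] = np.linalg.norm(vectors[x] - vectors[y])
        return A, vectors

    # Toy layer: M=6, i=4, k=3, so N = 3*3*4 + 1 = 37
    rng = np.random.default_rng(1)
    A, vectors = build_distance_matrix(rng.standard_normal((6, 4, 3, 3)),
                                       rng.standard_normal(6))
    print(A.shape)   # (6, 6)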
S202: Prune the convolutional layers of the initial convolutional neural network according to the Euclidean distances or cosine distances between the convolution kernels.
As mentioned above, the larger the Euclidean distance or cosine distance between convolution kernels, the smaller the corresponding similarity; conversely, the smaller the Euclidean distance or cosine distance, the greater the corresponding similarity. Since de-redundancy is the process of removing the convolution kernels of high-similarity convolutional layers, it is equally the process of removing the convolution kernels in convolutional layers whose Euclidean or cosine distances are small.
In one implementation, the convolutional layers of the initial convolutional neural network are pruned as follows:
S2021: Obtain a first matrix from the Euclidean distances (cosine distances) between every two of the channel vectors calculated above;
S2022: Compare the Euclidean-distance values (cosine-distance values) in the first matrix with the preset pruning Euclidean-distance threshold (pruning cosine-distance threshold); in the first matrix, set the positions greater than the pruning Euclidean-distance threshold (pruning cosine-distance threshold) to 0 and the positions smaller than it to 1, obtaining a second matrix.
For example: analyze the values in matrix A and set an appropriate distance threshold; set the positions corresponding to values greater than this threshold to 0 and the positions with values smaller than this threshold to 1, obtaining a new second matrix (matrix B).
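Expressed as a short sketch (the threshold value and helper name are assumptions for illustration), the second matrix B is simply a thresholded copy of A:

    import numpy as np

    def threshold_to_adjacency(A, distance_threshold):
        """Second matrix B: entries of A above the pruning distance threshold
        become 0 (the two kernels are not considered redundant); entries below
        it become 1 (the two kernels are treated as duplicates of each other)."""
        return (A < distance_threshold).astype(int)

    A = np.array([[0.0, 0.3, 2.1],
                  [0.3, 0.0, 1.8],
                  [2.1, 1.8, 0.0]])
    print(threshold_to_adjacency(A, 0.5))
    # [[1 1 0]
    #  [1 1 0]
    #  [0 0 1]]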
S203: Train the pruned convolutional neural network to obtain the target convolutional neural network.
In one implementation, the target convolutional neural network is obtained through training as follows:
S2031: Build a connection graph from the values of the elements of the second matrix;
S2032: Compute the total number of connected components in the connection graph with a connected-component algorithm;
S2033: Take the total number of connected components as the number of output channels of the corresponding convolutional layer, and average the vectors within each connected component as the new parameters of that layer;
S2034: Train with the new parameters to obtain the target convolutional neural network.
As described above, in the process of obtaining the second matrix, the weights greater than the pruning Euclidean-distance threshold (cosine-distance threshold) are set to 0, i.e. the edge weight between the two nodes is 0, which means the two nodes are not connected; conversely, the weights smaller than the pruning Euclidean-distance threshold (pruning cosine-distance threshold) are set to 1, which means the two nodes are connected. With this setting, all nodes whose distance is below the threshold end up in the same connected component, i.e. all nodes in a connected component are similar to one another, so only one convolution kernel needs to be retained per connected component, thereby reducing redundancy.
For example, matrix B can be regarded as the adjacency matrix of a graph, and a connection graph can be built from this adjacency matrix. Since the element w in column x and row y of the matrix represents the weight w of the edge between node x and node y, a complete connection graph can be established from the values of all the elements.
Using a connected-component algorithm (including but not limited to breadth-first search and depth-first search), the total number x of connected components in this connection graph is obtained. This x is set as the number of output channels of the corresponding convolutional layer; the vectors corresponding to the nodes in each connected component are averaged and, as the new parameters of that layer, put back into the initial convolutional neural network. This smaller network with the new parameters is then optimized through training to obtain the target convolutional neural network.
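The sketch below (an illustration with an assumed function name; breadth-first search is used here, though depth-first search would serve equally well) treats B as an adjacency matrix, collects the connected components, and averages the parameter vectors inside each component to produce the new, smaller set of channel parameters:

    import numpy as np
    from collections import deque

    def merge_redundant_channels(B, vectors):
        """B: M x M 0/1 adjacency matrix (the second matrix).
        vectors: M x N parameter vectors of the layer's output channels.
        Returns an x-by-N array, one averaged vector per connected component;
        x becomes the layer's new number of output channels."""
        M = B.shape[0]
        visited = [False] * M
        new_vectors = []
        for start in range(M):
            if visited[start]:
                continue
            component, queue = [], deque([start])   # BFS over one component
            visited[start] = True
            while queue:
                node = queue.popleft()
                component.append(node)
                for neighbour in range(M):
                    if B[node, neighbour] == 1 and not visited[neighbour]:
                        visited[neighbour] = True
                        queue.append(neighbour)
            # Every channel in the component is similar to the others,
            # so they collapse into a single averaged channel.
            new_vectors.append(vectors[component].mean(axis=0))
        return np.stack(new_vectors)

    # Toy example: channels {0, 1} and {2, 3, 4} are mutually redundant
    B = np.array([[1, 1, 0, 0, 0],
                  [1, 1, 0, 0, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1]])
    vectors = np.arange(5 * 3, dtype=float).reshape(5, 3)
    print(merge_redundant_channels(B, vectors))   # 2 rows: the two surviving channels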
The retraining process is, for example: reuse the same training platform and the same data to train the network; during training, the parameters corresponding to the layers of the original network structure that were not changed are placed at the corresponding positions of the smaller network for training. The above training operations are repeated to reach the balance point between model size and model accuracy, for example by determining, according to the targeted model size, running speed, and model capability, whether the requirements are met.
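One way to picture that hand-over is the sketch below, written under the assumption that parameters live in plain name-to-array dictionaries; the actual training platform and parameter format are not specified by the patent.

    def seed_smaller_network(old_params, pruned_params):
        """Initial parameters for retraining the smaller network.

        old_params:    dict name -> array, parameters of the original network.
        pruned_params: dict name -> array, averaged parameters for the layers
                       whose output channels were reduced.
        Layers that were not pruned keep their trained values; pruned layers
        start from the averaged kernel vectors computed above.
        (The names and the dict-based format are illustrative assumptions.)
        """
        return {name: pruned_params.get(name, value)
                for name, value in old_params.items()}

    # e.g. init = seed_smaller_network(old_params, {"conv3": new_conv3_params})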
In an optional mode, the process of obtaining the target convolutional neural network by training with the new parameters is: test, in real time, the speed and accuracy of the convolutional neural network currently obtained by training; and compare the tested speed and/or accuracy with a target speed threshold and/or a target accuracy threshold. If the target speed and/or target accuracy requirement is met, it is determined that training is complete and the target convolutional neural network has been obtained.
Specifically, there can be multiple implementations, for example the following.
If only the running speed of the convolutional neural network is of concern, the tested speed is compared with the target speed threshold; if the minimum target speed threshold has been reached, it is confirmed that the convolutional neural network currently obtained by training meets the requirements of the target convolutional neural network, and the training process ends. If only the accuracy of the convolutional neural network is of concern, the tested accuracy is compared with the target accuracy threshold; if the minimum target accuracy threshold has been reached, it is confirmed that the convolutional neural network currently obtained by training meets the requirements of the target convolutional neural network, and the training process ends. If both the speed and the accuracy of the convolutional neural network are of concern, the tested speed and accuracy are compared with the target speed threshold and the target accuracy threshold respectively; if the minimum target speed threshold and the minimum target accuracy threshold are both reached, it is confirmed that the convolutional neural network currently obtained by training meets the requirements of the target convolutional neural network, and the training process ends.
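A tiny helper can capture this stopping rule (the metric units and threshold values are assumptions for illustration):

    def target_reached(speed=None, accuracy=None,
                       target_speed=None, target_accuracy=None):
        """True once every monitored metric has reached its minimum target;
        a metric whose target is None is simply not monitored."""
        speed_ok = target_speed is None or (speed is not None and speed >= target_speed)
        accuracy_ok = target_accuracy is None or (accuracy is not None and accuracy >= target_accuracy)
        return speed_ok and accuracy_ok

    # Monitor both metrics: stop retraining once, say, 30 fps and 90% accuracy are reached
    print(target_reached(speed=35, accuracy=0.92, target_speed=30, target_accuracy=0.9))  # True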
Fig. 3 is a schematic diagram of how the convolutional neural network de-redundancy method of the present invention is realized. Fig. 3 shows the pruning of the convolution kernels of one convolutional layer. Suppose the convolution kernels of the layer include k1, k2, k3, k4, ..., kn; after pruning, kernels of the layer such as k1, k3, and k4 are eliminated, while kernels such as k2 are retained. By determining the similarity of the convolution kernels and pruning the kernels with high similarity, the redundant convolution kernels are effectively removed; the network obtained by retraining with the remaining convolution kernels is smaller in scale and can improve processing efficiency.
In the convolutional neural network de-redundancy method provided by the embodiment of the present invention, convolution kernels with high similarity are removed purposefully, so compared with the approach of using a large model to teach a small model, less video memory is occupied; compared with blindly setting the number of channels of the convolutional layers of a small model, the number of channels is set more scientifically, which ensures the accuracy of the model. For example, in computation-heavy large-model applications (such as binocular depth-of-field applications), the method of the present invention can be used to analyze the redundant convolutional layers in the network and cut off the convolution kernels of these redundant layers, thereby reducing the number of channels of the network, further reducing the network size and computation, and improving the running speed of the network.
Fig. 4 is a structural block diagram of one embodiment of the convolutional neural network de-redundancy device of the present invention. The device of this embodiment can be used to implement the above method embodiments of the present invention. As shown in Fig. 4, the device of this embodiment includes:
a similarity determining unit 401, configured to determine, for the initial convolutional neural network, the similarity between convolution kernels;
a pruning unit 402, configured to prune the convolutional layers of the initial neural network according to the similarity between the convolution kernels;
a target training unit 403, configured to train the pruned convolutional neural network to obtain the target convolutional neural network.
In an optional mode, the device further includes: an initial training unit 400, configured to train and obtain the initial convolutional neural network. The initial training unit is specifically configured to: determine a network structure that meets a preset accuracy standard, and train the network on data samples to obtain an initial convolutional neural network that conforms to the network structure.
In an optional mode, the pruning unit 402 is specifically configured to: compare the similarity between the convolution kernels of each convolutional layer with a preset pruning similarity threshold, and eliminate the convolution kernels whose similarity is higher than the pruning similarity threshold.
Fig. 5 is a structural block diagram of another embodiment of the convolutional neural network de-redundancy device of the present invention. The device of this embodiment can be used to implement the above method embodiments of the present invention. As shown in Fig. 5, the device of this embodiment includes:
a similarity determining unit 501, configured to determine, for the initial convolutional neural network, the similarity between convolution kernels;
a pruning unit 502, configured to prune the convolutional layers of the initial convolutional neural network according to the similarity between the convolution kernels;
a target training unit 503, configured to train the pruned convolutional neural network to obtain the target convolutional neural network.
In an optional mode, the device further includes: an initial training unit 500, configured to train and obtain the initial convolutional neural network. The initial training unit 500 is specifically configured to: determine a network structure that meets a preset accuracy standard, and train the network on data samples to obtain an initial convolutional neural network that conforms to the network structure.
In an optional mode, the similarity determining unit 501 includes:
a distance computing subunit 5011, configured to calculate, for the initial convolutional neural network, the Euclidean distance or cosine distance between the convolution kernels;
a similarity determining subunit 5012, configured to determine the similarity between the convolution kernels from the Euclidean distances or cosine distances between them.
In an optional mode, the pruning unit 502 is specifically configured to: eliminate the convolution kernels whose Euclidean distance is smaller than a preset pruning Euclidean-distance threshold, or eliminate the convolution kernels whose cosine distance is smaller than a preset pruning cosine-distance threshold.
In an optional mode, the distance computing subunit 5011 is specifically configured to: for each convolutional layer in the initial convolutional neural network, determine the number of output channels of the feature map according to the convolution kernel size of the layer; form M vectors from the parameters of the output channels; and calculate the Euclidean distance between every two of the M vectors as the Euclidean distance between the convolution kernels, or calculate the cosine distance between every two of the M vectors as the cosine distance between the convolution kernels.
In an optional mode, the pruning unit 502 includes:
a first-matrix computing subunit 5021, configured to obtain a first matrix from the Euclidean distances between every two of the calculated M vectors;
a second-matrix computing subunit 5022, configured to compare the Euclidean-distance values in the first matrix with a preset pruning Euclidean-distance threshold and, in the first matrix, set the positions greater than the pruning Euclidean-distance threshold to 0 and the positions smaller than it to 1, obtaining a second matrix; alternatively,
the first-matrix computing subunit is configured to obtain the first matrix from the cosine distances between every two of the M vectors, and
the second-matrix computing subunit is configured to compare the cosine-distance values in the first matrix with a preset pruning cosine-distance threshold and, in the first matrix, set the positions greater than the pruning cosine-distance threshold to 0 and the positions smaller than it to 1, obtaining the second matrix.
In an optional mode, the target training unit 503 includes:
a connection-graph building subunit 5031, configured to build a connection graph from the values of the elements of the second matrix, and to compute the total number of connected components in the connection graph with a connected-component algorithm;
a new-parameter determining subunit 5032, configured to take the total number of connected components as the number of output channels of the corresponding convolutional layer, and to average the vectors within each connected component as the new parameters of that layer;
a retraining subunit 5033, configured to train with the new parameters to obtain the target convolutional neural network.
In an optional mode, the retraining subunit 5033 is specifically configured to: test, in real time, the speed and accuracy of the convolutional neural network currently obtained by training; compare the tested speed and/or accuracy with a target speed threshold and/or a target accuracy threshold; and, if the target speed and/or target accuracy requirement is met, determine that training is complete and the target convolutional neural network has been obtained.
The embodiment of the present invention further provides an electronic device, which can be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Fig. 6 shows a structural block diagram of an electronic device 600 suitable for implementing a terminal device or a server of the embodiments of the present application. As shown in Fig. 6, the computer system 600 includes one or more processors, a communication part, and the like. The one or more processors are, for example, one or more central processing units (CPU) 601 and/or one or more graphics processors (GPU) 613. The processor can perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 602 or executable instructions loaded from a storage section 608 into a random access memory (RAM) 603. The communication part 612 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card.
The processor can communicate with the read-only memory 602 and/or the random access memory 603 to execute the executable instructions, is connected to the communication part 612 through a bus 604, and communicates with other target devices through the communication part 612, so as to complete the operations corresponding to any of the methods provided in the embodiments of the present application, for example: training to obtain an initial convolutional neural network; for the initial convolutional neural network, determining the similarity between convolution kernels; pruning the convolution kernels according to the similarity between them; and training, according to the pruned convolution kernels, to obtain a target convolutional neural network.
In addition, the RAM 603 can also store various programs and data required for the operation of the device. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through the bus 604. When the RAM 603 is present, the ROM 602 is an optional module. The RAM 603 stores executable instructions, or executable instructions are written into the ROM 602 at runtime, and the executable instructions cause the processor 601 to perform the operations corresponding to the above method. An input/output (I/O) interface 605 is also connected to the bus 604. The communication part 612 may be integrated, or may be provided with multiple sub-modules (for example, multiple IB network cards) linked on the bus.
The I/O interface 605 is connected to the following components: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode-ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
It should be noted that the architecture shown in Fig. 6 is only an optional implementation. In concrete practice, the number and types of the components in Fig. 6 can be selected, deleted, added, or replaced according to actual needs. Different functional components can also be arranged separately or integrated: for example, the GPU and the CPU can be arranged separately, or the GPU can be integrated on the CPU; the communication part can be arranged separately, or it can be integrated on the CPU or the GPU; and so on. These alternative embodiments all fall within the protection scope disclosed by the present invention.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for executing the method shown in the flowchart, and the program code may include instructions corresponding to the method steps provided in the embodiments of the present application, for example: receiving a picture or video of a certificate to be detected; processing the picture or video of the certificate to be detected to obtain a certificate image to be detected; performing feature extraction on the certificate image to be detected to obtain feature information of multiple categories; and performing certificate forgery recognition according to the feature information of the multiple categories to obtain a recognition result of the certificate. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the method of the present application are performed.
A person of ordinary skill in the art can understand that all or part of the steps for implementing the above method embodiments can be completed by hardware related to program instructions. The foregoing program can be stored in a computer-readable storage medium; when the program is executed, it performs the steps including those of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The methods and apparatuses of the present invention may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is merely for illustration, and the steps of the method of the present invention are not limited to the order specifically described above unless otherwise specially stated. In addition, in some embodiments, the present invention can also be embodied as programs recorded in a recording medium, these programs including machine-readable instructions for implementing the method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention. The description of the present invention is given for the sake of example and description and is not exhaustive, nor does it limit the present invention to the disclosed form. Many modifications and variations are obvious to a person of ordinary skill in the art. The embodiments are selected and described in order to better illustrate the principles and practical applications of the present invention and to enable a person of ordinary skill in the art to understand the present invention and thereby design various embodiments with various modifications suited to particular uses.

Claims (10)

  1. A convolutional neural network de-redundancy method, characterized by including:
    determining, for an initial convolutional neural network, the similarity between convolution kernels;
    pruning the convolutional layers of the initial neural network according to the similarity between the convolution kernels;
    training the pruned convolutional neural network to obtain a target convolutional neural network.
  2. The method according to claim 1, characterized by further including: training to obtain the initial convolutional neural network;
    wherein training to obtain the initial convolutional neural network includes: determining a network structure that meets a preset accuracy standard; and training the network on data samples to obtain an initial convolutional neural network that conforms to the network structure.
  3. The method according to claim 1, characterized in that pruning the convolutional layers of the initial neural network according to the similarity between the convolution kernels includes:
    comparing the similarity between the convolution kernels of each convolutional layer with a preset pruning similarity threshold;
    eliminating the convolution kernels whose similarity is higher than the pruning similarity threshold.
  4. The method according to claim 3, characterized in that calculating, for the initial convolutional neural network, the similarity between the convolution kernels includes:
    for the initial convolutional neural network, calculating the Euclidean distance or cosine distance between the convolution kernels;
    determining the similarity between the convolution kernels from the Euclidean distances or cosine distances between them.
  5. The method according to claim 4, characterized in that eliminating the convolution kernels whose similarity is higher than the pruning similarity threshold includes:
    eliminating the convolution kernels whose Euclidean distance is smaller than a preset pruning Euclidean-distance threshold, or eliminating the convolution kernels whose cosine distance is smaller than a preset pruning cosine-distance threshold.
  6. The method according to claim 4, characterized in that calculating, for the initial convolutional neural network, the Euclidean distance or cosine distance between the convolution kernels includes:
    for each convolutional layer in the initial convolutional neural network, determining the number of output channels of the feature map according to the convolution kernel size of the layer;
    forming M vectors from the parameters of the output channels;
    calculating the Euclidean distance between every two of the M vectors as the Euclidean distance between the convolution kernels, or calculating the cosine distance between every two of the M vectors as the cosine distance between the convolution kernels.
  7. The method according to claim 6, characterized in that pruning the convolutional layers of the convolutional neural network according to the similarity between the convolution kernels includes:
    obtaining a first matrix from the Euclidean distances between every two of the M vectors; comparing the Euclidean-distance values in the first matrix with a preset pruning Euclidean-distance threshold; and, in the first matrix, setting the positions greater than the pruning Euclidean-distance threshold to 0 and the positions smaller than the pruning Euclidean-distance threshold to 1, obtaining a second matrix;
    alternatively, obtaining the first matrix from the cosine distances between every two of the M vectors; comparing the cosine-distance values in the first matrix with a preset pruning cosine-distance threshold; and, in the first matrix, setting the positions greater than the pruning cosine-distance threshold to 0 and the positions smaller than the pruning cosine-distance threshold to 1, obtaining the second matrix.
  8. A convolutional neural network de-redundancy device, characterized by including:
    a similarity determining unit, configured to determine, for an initial convolutional neural network, the similarity between convolution kernels;
    a pruning unit, configured to prune the convolutional layers of the initial neural network according to the similarity between the convolution kernels;
    a target training unit, configured to train the pruned convolutional neural network to obtain a target convolutional neural network.
  9. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method according to any one of claims 1-7 are implemented.
  10. An electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when the processor executes the program, the steps of the method according to any one of claims 1-7 are implemented.
CN201711183838.9A 2017-11-23 2017-11-23 Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium Pending CN108229679A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711183838.9A CN108229679A (en) 2017-11-23 2017-11-23 Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711183838.9A CN108229679A (en) 2017-11-23 2017-11-23 Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN108229679A true CN108229679A (en) 2018-06-29

Family

ID=62652717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711183838.9A Pending CN108229679A (en) 2017-11-23 2017-11-23 Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108229679A (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308444A (en) * 2018-07-16 2019-02-05 重庆大学 A kind of abnormal behaviour recognition methods under indoor environment
WO2020042658A1 (en) * 2018-08-31 2020-03-05 华为技术有限公司 Data processing method, device, apparatus, and system
CN109522949A (en) * 2018-11-07 2019-03-26 北京交通大学 Model of Target Recognition method for building up and device
CN109522949B (en) * 2018-11-07 2021-01-26 北京交通大学 Target recognition model establishing method and device
CN109598340A (en) * 2018-11-15 2019-04-09 北京知道创宇信息技术有限公司 Method of cutting out, device and the storage medium of convolutional neural networks
CN109671020A (en) * 2018-12-17 2019-04-23 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer storage medium
CN109671020B (en) * 2018-12-17 2023-10-24 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer storage medium
CN109686440A (en) * 2018-12-20 2019-04-26 深圳市新产业眼科新技术有限公司 A kind of on-line intelligence diagnosis cloud platform and its operation method and readable storage medium storing program for executing
CN109686440B (en) * 2018-12-20 2023-12-05 深圳市新产业眼科新技术有限公司 Online intelligent diagnosis cloud platform, operation method thereof and readable storage medium
CN109948717B (en) * 2019-03-26 2023-08-18 江南大学 Self-growth training method for generating countermeasure network
CN109948717A (en) * 2019-03-26 2019-06-28 江南大学 A kind of growth training method certainly generating confrontation network
CN110188865A (en) * 2019-05-21 2019-08-30 深圳市商汤科技有限公司 Information processing method and device, electronic equipment and storage medium
CN110826713A (en) * 2019-10-25 2020-02-21 广州思德医疗科技有限公司 Method and device for acquiring special convolution kernel
CN110826713B (en) * 2019-10-25 2022-06-10 广州思德医疗科技有限公司 Method and device for acquiring special convolution kernel
CN111079833A (en) * 2019-12-16 2020-04-28 腾讯科技(深圳)有限公司 Image recognition method, image recognition device and computer-readable storage medium
CN111079833B (en) * 2019-12-16 2022-05-06 腾讯医疗健康(深圳)有限公司 Image recognition method, image recognition device and computer-readable storage medium
CN111126501B (en) * 2019-12-26 2022-09-16 厦门市美亚柏科信息股份有限公司 Image identification method, terminal equipment and storage medium
CN111126501A (en) * 2019-12-26 2020-05-08 厦门市美亚柏科信息股份有限公司 Image identification method, terminal equipment and storage medium
CN113128660A (en) * 2019-12-31 2021-07-16 深圳云天励飞技术有限公司 Deep learning model compression method and related equipment
WO2021164737A1 (en) * 2020-02-20 2021-08-26 华为技术有限公司 Neural network compression method, data processing method, and related apparatuses
CN111382839B (en) * 2020-02-23 2024-05-07 华为技术有限公司 Method and device for pruning neural network
CN111507203A (en) * 2020-03-27 2020-08-07 北京百度网讯科技有限公司 Method for constructing variable lane detection model, electronic device, and storage medium
CN111507203B (en) * 2020-03-27 2023-09-26 北京百度网讯科技有限公司 Construction method of variable lane detection model, electronic equipment and storage medium
CN112241789A (en) * 2020-10-16 2021-01-19 广州云从凯风科技有限公司 Structured pruning method, device, medium and equipment for lightweight neural network
CN113240085A (en) * 2021-05-12 2021-08-10 平安科技(深圳)有限公司 Model pruning method, device, equipment and storage medium
CN113240085B (en) * 2021-05-12 2023-12-22 平安科技(深圳)有限公司 Model pruning method, device, equipment and storage medium
CN113313694A (en) * 2021-06-05 2021-08-27 西北工业大学 Surface defect rapid detection method based on light-weight convolutional neural network
CN113554104A (en) * 2021-07-28 2021-10-26 哈尔滨工程大学 Image classification method based on deep learning model
CN113762505B (en) * 2021-08-13 2023-12-01 中国电子科技集团公司第三十八研究所 Method for clustering pruning according to L2 norms of channels of convolutional neural network
CN113762505A (en) * 2021-08-13 2021-12-07 中国电子科技集团公司第三十八研究所 Clustering pruning method of convolutional neural network according to norm of channel L2
CN114677545A (en) * 2022-03-29 2022-06-28 电子科技大学 Lightweight image classification method based on similarity pruning and efficient module
CN115035912A (en) * 2022-06-08 2022-09-09 哈尔滨工程大学 Automatic underwater acoustic signal sample labeling method based on MOC model
CN115035912B (en) * 2022-06-08 2024-04-26 哈尔滨工程大学 Automatic underwater sound signal sample labeling method based on MOC model

Similar Documents

Publication Publication Date Title
CN108229679A (en) Convolutional neural networks de-redundancy method and device, electronic equipment and storage medium
CN109922032B (en) Method, device, equipment and storage medium for determining risk of logging in account
CN108509915A (en) The generation method and device of human face recognition model
CN109447156B (en) Method and apparatus for generating a model
CN108197652B (en) Method and apparatus for generating information
CN108229419A (en) For clustering the method and apparatus of image
CN109829432B (en) Method and apparatus for generating information
CN108229479A (en) The training method and device of semantic segmentation model, electronic equipment, storage medium
CN108229489A (en) Crucial point prediction, network training, image processing method, device and electronic equipment
CN107679466B (en) Information output method and device
CN110766080B (en) Method, device and equipment for determining labeled sample and storage medium
CN108830235A (en) Method and apparatus for generating information
CN111553488B (en) Risk recognition model training method and system for user behaviors
CN108805091A (en) Method and apparatus for generating model
CN111414953B (en) Point cloud classification method and device
CN108280477A (en) Method and apparatus for clustering image
CN108229591A (en) Neural network adaptive training method and apparatus, equipment, program and storage medium
CN108280455A (en) Human body critical point detection method and apparatus, electronic equipment, program and medium
CN109376757A (en) A kind of multi-tag classification method and system
CN110245488A (en) Cipher Strength detection method, device, terminal and computer readable storage medium
CN108205802A (en) Deep neural network model training, image processing method and device and equipment
CN107169769A (en) The brush amount recognition methods of application program, device
CN109033148A (en) One kind is towards polytypic unbalanced data preprocess method, device and equipment
CN108229494A (en) network training method, processing method, device, storage medium and electronic equipment
CN108154153A (en) Scene analysis method and system, electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180629