CN107491787A - Local binarization CNN processing method, device, storage medium and processor - Google Patents

Local binarization CNN processing method, device, storage medium and processor

Info

Publication number
CN107491787A
Authority
CN
China
Prior art keywords
convolutional neural network
grouping
target
fully connected layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710720466.2A
Other languages
Chinese (zh)
Inventor
王志鹏
周文明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Xi Yue Information Technology Co Ltd
Original Assignee
Zhuhai Xi Yue Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xi Yue Information Technology Co Ltd
Priority to CN201710720466.2A
Publication of CN107491787A
Legal status: Pending


Classifications

    • G06F18/214 — Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/045 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N3/048 — Computing arrangements based on biological models; neural networks; activation functions
    • G06V10/40 — Image or video recognition or understanding; extraction of image or video features
    • G06V10/467 — Image or video recognition or understanding; descriptors for shape, contour or point-related descriptors; encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The invention discloses a local binarization CNN processing method, device, storage medium and processor. The method includes: training a first convolutional neural network on a preset data set to obtain a second convolutional neural network; replacing all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain a third convolutional neural network; replacing the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer; initializing the fourth convolutional neural network to obtain a fifth convolutional neural network; and training the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network. The invention solves the technical problem of low operational efficiency of local binarization convolutional neural networks in the prior art.

Description

Local binarization CNN processing method, device, storage medium and processor
Technical field
The present invention relates to the field of convolutional neural networks, and in particular to a local binarization CNN processing method, device, storage medium and processor.
Background
With the spread of deep learning, more and more convolutional neural network technology is entering the deployment stage. Through deep, high-dimensional model architectures, convolutional neural networks have demonstrated superior performance on tasks such as image recognition, object detection, and scene classification. However, in on-device applications on terminal platforms, convolutional neural networks face huge challenges: they have many parameters and complex computation, and are hard to adapt to embedded terminal platforms with limited storage and computing resources.
In recent years, low-precision binarized convolutional neural networks have received a great deal of research. The paper "Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or -1" proposes converting convolution kernel weights from double-precision floating point into the values +1 or -1, turning the original convolution operations into additions and subtractions and reducing the storage and computation required for the parameters. However, a binarized network loses a large amount of weight information and its feature extraction ability is poor, which brings a large performance loss and makes it unable to handle complex image classification tasks. The paper "Local binary convolutional neural networks" proposes a local binarization convolutional neural network: by randomly generating binarized sparse convolution kernels and combining them with trainable 1x1 convolutions, it realizes the convolution, nonlinear, and weighted-combination operations of a convolution stage and reduces the performance loss. However, that network contains dense 1x1 convolution computations, which occupy the vast majority of the computing resources, and increasing the number of network layers easily causes vanishing gradients during training, making the network hard to converge. In summary, binarized convolutional neural networks in the prior art suffer from performance degradation, limited efficiency, and high training difficulty; local binarization convolutional neural networks in the prior art therefore have the technical problem of low operational efficiency.
No effective solution to the above problems has yet been proposed.
Summary of the invention
The embodiments of the present invention provide a local binarization CNN processing method, device, storage medium and processor, so as to at least solve the technical problem of low operational efficiency of local binarization convolutional neural networks in the prior art.
According to one aspect of the embodiments of the present invention, a local binarization CNN processing method is provided. The method includes: training a first convolutional neural network on a preset data set to obtain a second convolutional neural network; replacing all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain a third convolutional neural network; replacing the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer; initializing the fourth convolutional neural network to obtain a fifth convolutional neural network; and training the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, where the target convolutional neural network is the fifth convolutional neural network after it has reached a convergence state.
Further, the local binarization convolution unit includes a binarized convolutional layer and a 1x1 grouped convolutional layer, and replacing all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain the third convolutional neural network includes: setting a first group number for the 1x1 grouped convolutional layer in the local binarization convolution unit; expanding the output feature maps of the binarized convolutional layer into a two-dimensional topology according to the first group number to obtain first target groups; and connecting the convolution kernels of the 1x1 grouped convolutional layer to the output feature maps of the corresponding first target groups to obtain the third convolutional neural network.
Further, replacing the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain the fourth convolutional neural network includes: setting a second group number for the grouped fully connected layer in the third convolutional neural network; expanding the input neurons of the grouped fully connected layer into a two-dimensional topology according to the second group number to obtain second target groups; and connecting the output neurons of the grouped fully connected layer to the input neurons of the corresponding second target groups to obtain the fourth convolutional neural network.
Further, training the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain the target convolutional neural network includes: inputting the pictures corresponding to the training labels in the preset data set into the second convolutional neural network to obtain the output values of the second convolutional neural network; softening the output values to obtain softened output values; computing a weighted sum of the softened output values and the training labels to obtain target training labels; and training the fifth convolutional neural network with the target training labels using a preset stochastic gradient descent method to obtain the target convolutional neural network.
According to another aspect of the embodiments of the present invention, a local binarization CNN processing device is also provided. The device includes: a first training unit configured to train a first convolutional neural network on a preset data set to obtain a second convolutional neural network; a first replacement unit configured to replace all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain a third convolutional neural network; a second replacement unit configured to replace the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer; a processing unit configured to initialize the fourth convolutional neural network to obtain a fifth convolutional neural network; and a second training unit configured to train the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, where the target convolutional neural network is the fifth convolutional neural network after it has reached a convergence state.
Further, the local binarization convolution unit includes a binarized convolutional layer and a 1x1 grouped convolutional layer, and the first replacement unit includes: a first setting subunit configured to set a first group number for the 1x1 grouped convolutional layer in the local binarization convolution unit; a first processing subunit configured to expand the output feature maps of the binarized convolutional layer into a two-dimensional topology according to the first group number to obtain first target groups; and a second processing subunit configured to connect the convolution kernels of the 1x1 grouped convolutional layer to the output feature maps of the corresponding first target groups to obtain the third convolutional neural network.
Further, the second replacement unit includes: a second setting subunit configured to set a second group number for the grouped fully connected layer in the third convolutional neural network; a third processing subunit configured to expand the input neurons of the grouped fully connected layer into a two-dimensional topology according to the second group number to obtain second target groups; and a fourth processing subunit configured to connect the output neurons of the grouped fully connected layer to the input neurons of the corresponding second target groups to obtain the fourth convolutional neural network.
Further, the second training unit includes: an input subunit configured to input the pictures corresponding to the training labels in the preset data set into the second convolutional neural network to obtain the output values of the second convolutional neural network; a fifth processing subunit configured to soften the output values to obtain softened output values; a computation subunit configured to compute a weighted sum of the softened output values and the training labels to obtain target training labels; and a training subunit configured to train the fifth convolutional neural network with the target training labels using a preset stochastic gradient descent method to obtain the target convolutional neural network.
According to another aspect of the embodiments of the present invention, a storage medium is also provided. The storage medium includes a stored program, and when the program runs, the device on which the storage medium resides is controlled to perform the above local binarization CNN processing method.
According to another aspect of the embodiments of the present invention, a processor is also provided. The processor is configured to run a program, and the above local binarization CNN processing method is performed when the program runs.
In the embodiments of the present invention, a first convolutional neural network is trained on a preset data set to obtain a second convolutional neural network; all convolutional layer units in the first convolutional neural network are replaced with local binarization convolution units to obtain a third convolutional neural network; the target fully connected layers in the third convolutional neural network are replaced with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer; the fourth convolutional neural network is initialized to obtain a fifth convolutional neural network; and the fifth convolutional neural network is trained on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, which is the fifth convolutional neural network after it has reached a convergence state. The embodiments of the present invention thereby improve the operational efficiency of local binarization convolutional neural networks, reduce their performance degradation during operation, and lower their training difficulty, thus solving the technical problem of low operational efficiency of local binarization convolutional neural networks in the prior art.
Brief description of the drawings
The accompanying drawings described here are provided for a further understanding of the present invention and form a part of this application. The schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1(a) is a schematic flowchart of an optional local binarization CNN processing method according to an embodiment of the present invention;
Fig. 1(b) is a schematic structural diagram of the local binarization convolution unit in an optional local binarization CNN processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another optional local binarization CNN processing method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of yet another optional local binarization CNN processing method according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of yet another optional local binarization CNN processing method according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an optional local binarization CNN processing device according to an embodiment of the present invention.
Detailed description of the embodiments
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", and so on in the description, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described here. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
Embodiment 1
According to an embodiment of the present invention, an embodiment of a local binarization CNN processing method is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
Fig. 1(a) is a schematic flowchart of an optional local binarization CNN processing method according to an embodiment of the present invention. As shown in Fig. 1(a), the method includes the following steps:
Step S102: train a first convolutional neural network on a preset data set to obtain a second convolutional neural network;
Step S104: replace all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain a third convolutional neural network;
Step S106: replace the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer;
Step S108: initialize the fourth convolutional neural network to obtain a fifth convolutional neural network;
Step S110: train the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, where the target convolutional neural network is the fifth convolutional neural network after it has reached a convergence state.
In the embodiments of the present invention, a first convolutional neural network is trained on a preset data set to obtain a second convolutional neural network; all convolutional layer units in the first convolutional neural network are replaced with local binarization convolution units to obtain a third convolutional neural network; the target fully connected layers in the third convolutional neural network are replaced with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer; the fourth convolutional neural network is initialized to obtain a fifth convolutional neural network; and the fifth convolutional neural network is trained on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, which is the fifth convolutional neural network after it has reached a convergence state. The embodiments of the present invention thereby improve the operational efficiency of local binarization convolutional neural networks, reduce their performance degradation during operation, and lower their training difficulty, thus solving the technical problem of low operational efficiency of local binarization convolutional neural networks in the prior art.
Optionally, a convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons can respond to surrounding units, and it can perform large-scale image processing. A convolutional neural network may include convolutional layers, pooling layers, nonlinear layers, and fully connected layers.
Optionally, when step S102 is performed, the first convolutional neural network may be trained on the preset data set using a stochastic gradient descent method, so as to obtain the second convolutional neural network.
Optionally, Fig. 1(b) is a schematic structural diagram of the local binarization convolution unit in an optional local binarization CNN processing method according to an embodiment of the present invention. As shown in Fig. 1(b), the local binarization convolution unit includes: a binarized convolutional layer with a kernel size of 3x3x128 and a stride of 1; a sigmoid activation layer; a 1x1 grouped convolutional layer with a kernel size of 1x1x128 and a stride of 1; a residual branch; and a superposition unit. The 1x1 grouped convolutional layer uses a topological connection scheme.
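To make this structure concrete, the following is a minimal PyTorch sketch of such a unit. This is an illustrative sketch, not the patent's reference implementation: the framework choice, the module name, and the sparsity value 0.8 used for the fixed binary kernel are assumptions; the channel count, kernel size, grouping, and residual branch follow the description above.

```python
import torch
import torch.nn as nn

class LocalBinaryConvUnit(nn.Module):
    """Sketch of the local binarization convolution unit of Fig. 1(b):
    fixed sparse binary 3x3 conv -> sigmoid -> grouped 1x1 conv -> residual add."""

    def __init__(self, channels=128, groups=2, sparsity=0.8):
        super().__init__()
        # Binarized convolutional layer: weights in {-1, 0, +1}, generated once and frozen.
        self.binary_conv = nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False)
        w = torch.sign(torch.randn_like(self.binary_conv.weight))  # +1 or -1
        w[torch.rand_like(w) < sparsity] = 0.0                     # zero out ~sparsity of the weights
        self.binary_conv.weight.data = w
        self.binary_conv.weight.requires_grad = False              # not updated during training
        # Trainable 1x1 grouped convolution (topological connection scheme).
        self.pointwise = nn.Conv2d(channels, channels, 1, stride=1, groups=groups, bias=False)

    def forward(self, x):
        out = self.pointwise(torch.sigmoid(self.binary_conv(x)))
        return out + x  # residual branch feeding the superposition unit
```

Because the binary kernel is frozen, only the grouped 1x1 convolution contributes learnable parameters, which matches technical effect 6 below.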
Optionally, compared with existing convolutional neural networks, the local binarization CNN processing method in the embodiments of the present invention has the following significant technical effects:
Technical effect 1: replacing traditional convolutional layers with local binarization convolution units to build a local binarization convolutional neural network gives higher parameter and computational efficiency than a traditional convolutional neural network, while avoiding the excessive performance degradation of fully binarized convolutional neural networks.
Technical effect 2: the binarized convolutional layer in the local binarization convolution unit is obtained by random generation and is sparse, which greatly reduces the parameter count and computation of the network.
Technical effect 3: the 1x1 convolution in the local binarization convolution unit uses a grouped form, which reduces redundant connections between channels and improves operational efficiency.
Technical effect 4: a residual branch is introduced into the local binarization convolution unit, which avoids the excessive performance attenuation caused by the weakened feature extraction ability of the network, improves network accuracy, and benefits the back-propagation of gradients during training.
Technical effect 5: the fully connected layers use a grouped connection scheme, which reduces redundant connections between neurons and improves operational efficiency.
Technical effect 6: the binarized convolutional layer in the local binarization convolution unit is not updated during training, which reduces the number of learnable parameters and improves training efficiency.
Technical effect 7: the local binarization convolutional neural network is trained using a knowledge distillation method, which is conducive to fast convergence of the network.
Optionally, Fig. 2 is a schematic flowchart of another optional local binarization CNN processing method according to an embodiment of the present invention. As shown in Fig. 2, the local binarization convolution unit includes a binarized convolutional layer and a 1x1 grouped convolutional layer, and step S104, replacing all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain the third convolutional neural network, includes:
Step S202: set a first group number for the 1x1 grouped convolutional layer in the local binarization convolution unit;
Step S204: expand the output feature maps of the binarized convolutional layer into a two-dimensional topology according to the first group number to obtain first target groups;
Step S206: connect the convolution kernels of the 1x1 grouped convolutional layer to the output feature maps of the corresponding first target groups to obtain the third convolutional neural network.
Optionally, the 1x1 grouped convolutional layer uses a topological connection scheme, in which:
In step S202, the group number of the 1x1 grouped convolutional layer is set; specifically, the group number is 2.
In step S204, the output feature maps of the binarized convolutional layer are expanded into a two-dimensional topology and grouped; specifically, the 128 output feature map channels are expanded into a 16x8 arrangement, where every 8x8 block of feature maps forms one group.
In step S206, the convolution kernels of the 1x1 grouped convolutional layer are connected to the output feature maps of the corresponding first target groups. Specifically, the convolution kernels of the first group of the 1x1 grouped convolutional layer are fully connected to the first group of 8x8 output feature maps and are not connected to the other group; the convolution kernels of the second group are fully connected to the second group of 8x8 output feature maps and are not connected to the other group.
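As an illustration of this connectivity, the sketch below (same assumptions as above: PyTorch, 128 channels, 2 groups) verifies that a grouped 1x1 convolution is exactly two independent 1x1 convolutions, each fully connected to its own group of 64 feature maps and not connected to the other group:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 128, 32, 32)                 # 128 output feature maps of the binarized conv
grouped = nn.Conv2d(128, 128, 1, groups=2, bias=False)

# Equivalent explicit form: each group of 64 maps gets its own 1x1 convolution,
# with no connections across groups.
conv1 = nn.Conv2d(64, 64, 1, bias=False)
conv2 = nn.Conv2d(64, 64, 1, bias=False)
conv1.weight.data = grouped.weight.data[:64]    # kernels of the first group
conv2.weight.data = grouped.weight.data[64:]    # kernels of the second group
y_explicit = torch.cat([conv1(x[:, :64]), conv2(x[:, 64:])], dim=1)

assert torch.allclose(grouped(x), y_explicit, atol=1e-6)
```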
Optionally, Fig. 3 is a schematic flowchart of yet another optional local binarization CNN processing method according to an embodiment of the present invention. As shown in Fig. 3, step S106, replacing the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain the fourth convolutional neural network, includes:
Step S302: set a second group number for the grouped fully connected layer in the third convolutional neural network;
Step S304: expand the input neurons of the grouped fully connected layer into a two-dimensional topology according to the second group number to obtain second target groups;
Step S306: connect the output neurons of the grouped fully connected layer to the input neurons of the corresponding second target groups to obtain the fourth convolutional neural network.
Optionally, the grouped fully connected layer in the third convolutional neural network uses a topological connection scheme, in which:
In step S302, the group number of the grouped fully connected layer is set; specifically, the group number is 16.
In step S304, the input neurons of the grouped fully connected layer are expanded into a two-dimensional topology and grouped; specifically, the 4096 input neurons are expanded into a 128x32 arrangement, where every 16x16 block of input neurons forms one group.
In step S306, the output neurons of the grouped fully connected layer are connected to the input neurons of the corresponding groups. Specifically, the first group of output neurons of the grouped fully connected layer is fully connected to the first group of 16x16 input neurons and is not connected to the other neurons, and likewise for the remaining groups.
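A grouped fully connected layer with the figures quoted here (4096 input neurons, 16 groups, i.e. 256 input neurons per group) can be sketched as follows. The per-group output size is an assumption, since this paragraph fixes only the input grouping:

```python
import torch
import torch.nn as nn

class GroupedLinear(nn.Module):
    """Sketch of a grouped fully connected layer: the inputs are split into
    `groups` blocks, and each block of output neurons is fully connected only
    to its own block of input neurons (topological connection scheme)."""

    def __init__(self, in_features=4096, out_features=2048, groups=16):
        super().__init__()
        assert in_features % groups == 0 and out_features % groups == 0
        self.groups = groups
        self.fcs = nn.ModuleList(
            nn.Linear(in_features // groups, out_features // groups)
            for _ in range(groups)
        )

    def forward(self, x):
        chunks = x.chunk(self.groups, dim=1)  # 16 groups of 256 input neurons each
        return torch.cat([fc(c) for fc, c in zip(self.fcs, chunks)], dim=1)
```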
Optionally, Fig. 4 is a schematic flowchart of yet another optional local binarization CNN processing method according to an embodiment of the present invention. As shown in Fig. 4, step S110, training the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain the target convolutional neural network, includes:
Step S402: input the pictures corresponding to the training labels in the preset data set into the second convolutional neural network to obtain the output values of the second convolutional neural network;
Step S404: soften the output values to obtain softened output values;
Step S406: compute a weighted sum of the softened output values and the training labels to obtain target training labels;
Step S408: train the fifth convolutional neural network with the target training labels using a preset stochastic gradient descent method to obtain the target convolutional neural network.
Optionally, the fifth convolutional neural network may be trained using a stochastic gradient descent method, and the training may be based on knowledge distillation. Specifically:
In step S402, the pictures in the labeled data set are input into the second convolutional neural network to obtain the output of the second convolutional neural network.
In step S404, the output values of the second convolutional neural network are softened. Optionally, the softening is carried out according to the following equation, where z_i and z_j are the output values of the second convolutional neural network, T is the softening factor, and q_i is the softened output value:

q_i = exp(z_i / T) / Σ_j exp(z_j / T)
In step S406, a weighted sum of the softened output values and the picture labels is computed.
In step S408, the weighted sum is used as the true label (i.e. the target training label), and the fifth convolutional neural network is trained using a stochastic gradient descent method: the weights of the 1x1 grouped convolutional layers and the grouped fully connected layers are updated while the weights of the binarized convolutional layers are kept unchanged, until convergence. The fifth convolutional neural network that has reached the convergence state after training is the target convolutional neural network.
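A minimal sketch of one such training step, assuming the softmax-with-temperature softening given in step S404 above and a mixing weight alpha for the weighted sum (the patent does not specify the weighting coefficients):

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, images, labels, T=4.0, alpha=0.5):
    """Soften the teacher's outputs, mix them with the one-hot labels to form
    the target training label, and fit the student to that target."""
    with torch.no_grad():
        soft = F.softmax(teacher(images) / T, dim=1)   # q_i = exp(z_i/T) / sum_j exp(z_j/T)
    one_hot = F.one_hot(labels, soft.size(1)).float()
    target = alpha * soft + (1 - alpha) * one_hot      # weighted sum -> target training label
    log_p = F.log_softmax(student(images), dim=1)
    loss = -(target * log_p).sum(dim=1).mean()         # cross-entropy against the soft target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # frozen binarized conv weights (requires_grad=False) stay unchanged
    return loss.item()
```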
Optionally, before step S102 is performed, that is, before the first convolutional neural network is trained on the preset data set, the method may further include:
Step S10: create the first convolutional neural network.
Optionally, the first convolutional neural network may include an input layer, convolutional layers, nonlinear activation layers, pooling layers, fully connected layers, and an output layer. Specifically, the first convolutional neural network may include: an input layer; first to fourth convolutional layers, each with a kernel size of 3x3x64 and a stride of 1; a first max pooling layer with a pooling size of 2x2 and a stride of 2; fifth to eighth convolutional layers, each with a kernel size of 3x3x128 and a stride of 1; a second max pooling layer with a pooling size of 2x2 and a stride of 2; ninth to fourteenth convolutional layers, each with a kernel size of 3x3x256 and a stride of 1; a third max pooling layer with a pooling size of 2x2 and a stride of 2; fifteenth to eighteenth convolutional layers, each with a kernel size of 3x3x512 and a stride of 1; a first fully connected layer with 4096 neurons; a second fully connected layer with 2048 neurons; and an output layer.
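This VGG-style stack can be written compactly from a configuration list. The sketch below assumes PyTorch, an RGB input of size 32x32, a ReLU after every convolutional and fully connected layer, and a placeholder number of output classes; none of these are fixed by the paragraph above:

```python
import torch.nn as nn

# Layer plan from the description above: (out_channels, number of 3x3 convs),
# with 'M' marking a 2x2 max pooling layer of stride 2.
CFG = [(64, 4), 'M', (128, 4), 'M', (256, 6), 'M', (512, 4)]

def make_first_cnn(in_channels=3, input_size=32, num_classes=1000):
    layers, c = [], in_channels
    for item in CFG:
        if item == 'M':
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            out_c, reps = item
            for _ in range(reps):
                layers += [nn.Conv2d(c, out_c, kernel_size=3, stride=1, padding=1),
                           nn.ReLU(inplace=True)]
                c = out_c
    spatial = input_size // 8  # three stride-2 poolings halve the resolution three times
    return nn.Sequential(
        *layers,
        nn.Flatten(),
        nn.Linear(512 * spatial * spatial, 4096), nn.ReLU(inplace=True),  # first FC layer
        nn.Linear(4096, 2048), nn.ReLU(inplace=True),                     # second FC layer
        nn.Linear(2048, num_classes),                                     # output layer
    )
```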
Optionally, step S108, initializing the fourth convolutional neural network to obtain the fifth convolutional neural network, may include:
Step S20: set the sparsity of the binarized convolutional layers in the fourth convolutional neural network, randomly initialize the layers of the fourth convolutional neural network other than the binarized convolutional layers, and randomly generate the binarized convolution kernels in the fourth convolutional neural network.
Specifically, in step S20, initializing the fourth convolutional neural network includes setting the sparsity of the binarized convolutional layers, randomly generating the binarized convolution kernels, and randomly initializing the other layers. The binarized convolution kernels are randomly generated according to a Bernoulli distribution; their weights take the values +1, -1, or 0, and the proportion of zero values is equal to the sparsity. Optionally, the sparsity is 0.8.
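A sketch of this kernel generation, reading "Bernoulli distributed" as: each weight is 0 with probability equal to the sparsity, and otherwise +1 or -1 with equal probability (this exact parameterization is an assumption):

```python
import torch

def random_binary_kernel(shape, sparsity=0.8):
    """Generate a fixed sparse binarized convolution kernel with values in {-1, 0, +1};
    the fraction of zeros equals `sparsity`."""
    nonzero_mask = torch.bernoulli(torch.full(shape, 1.0 - sparsity))  # 1 with prob 1 - sparsity
    signs = torch.randint(0, 2, shape).float() * 2 - 1                 # +1 or -1, equiprobable
    return nonzero_mask * signs

kernel = random_binary_kernel((128, 128, 3, 3))
print((kernel == 0).float().mean())  # approximately 0.8
```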
Optionally, in the embodiments of the present invention, local binarization convolution is used in place of traditional convolution operations, which reduces the parameter count and computation of the model. By randomly generating sparse binarized convolution kernels and grouping the 1x1 convolutional layers and fully connected layers, computation is further reduced; by adding a residual branch to the local binarization convolution unit, network performance is improved and gradient back-propagation is helped; and by training with knowledge distillation, fast fitting of the network is helped. A local binarization convolutional neural network model is thereby realized, solving the technical problems in existing binarized convolutional neural network technology. The local binarization convolutional neural network obtained by the present invention is applicable to high-technology fields with constrained computation and storage, such as mobile terminals, embedded devices, and robots, and has considerable economic and practical value.
In the embodiments of the present invention, a first convolutional neural network is trained on a preset data set to obtain a second convolutional neural network; all convolutional layer units in the first convolutional neural network are replaced with local binarization convolution units to obtain a third convolutional neural network; the target fully connected layers in the third convolutional neural network are replaced with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer; the fourth convolutional neural network is initialized to obtain a fifth convolutional neural network; and the fifth convolutional neural network is trained on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, which is the fifth convolutional neural network after it has reached a convergence state. The embodiments of the present invention thereby improve the operational efficiency of local binarization convolutional neural networks, reduce their performance degradation during operation, and lower their training difficulty, thus solving the technical problem of low operational efficiency of local binarization convolutional neural networks in the prior art.
Embodiment 2
According to another aspect of the embodiments of the present invention, a local binarization CNN processing device is also provided. As shown in Fig. 5, the device includes: a first training unit 501, a first replacement unit 503, a second replacement unit 505, a processing unit 507, and a second training unit 509.
The first training unit 501 is configured to train a first convolutional neural network on a preset data set to obtain a second convolutional neural network. The first replacement unit 503 is configured to replace all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain a third convolutional neural network. The second replacement unit 505 is configured to replace the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer. The processing unit 507 is configured to initialize the fourth convolutional neural network to obtain a fifth convolutional neural network. The second training unit 509 is configured to train the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, where the target convolutional neural network is the fifth convolutional neural network after it has reached a convergence state.
Optionally, the local binarization convolution unit includes a binarized convolutional layer and a 1x1 grouped convolutional layer, and the first replacement unit 503 includes: a first setting subunit configured to set a first group number for the 1x1 grouped convolutional layer in the local binarization convolution unit; a first processing subunit configured to expand the output feature maps of the binarized convolutional layer into a two-dimensional topology according to the first group number to obtain first target groups; and a second processing subunit configured to connect the convolution kernels of the 1x1 grouped convolutional layer to the output feature maps of the corresponding first target groups to obtain the third convolutional neural network.
Optionally, the second replacement unit 505 includes: a second setting subunit configured to set a second group number for the grouped fully connected layer in the third convolutional neural network; a third processing subunit configured to expand the input neurons of the grouped fully connected layer into a two-dimensional topology according to the second group number to obtain second target groups; and a fourth processing subunit configured to connect the output neurons of the grouped fully connected layer to the input neurons of the corresponding second target groups to obtain the fourth convolutional neural network.
Optionally, the second training unit includes: an input subunit configured to input the pictures corresponding to the training labels in the preset data set into the second convolutional neural network to obtain the output values of the second convolutional neural network; a fifth processing subunit configured to soften the output values to obtain softened output values; a computation subunit configured to compute a weighted sum of the softened output values and the training labels to obtain target training labels; and a training subunit configured to train the fifth convolutional neural network with the target training labels using a preset stochastic gradient descent method to obtain the target convolutional neural network.
Optionally, the device may further include: a creation subunit configured to create the first convolutional neural network.
Optionally, the processing unit includes: a sixth processing subunit configured to set the sparsity of the binarized convolutional layers in the fourth convolutional neural network, randomly initialize the layers of the fourth convolutional neural network other than the binarized convolutional layers, and randomly generate the binarized convolution kernels in the fourth convolutional neural network.
According to another aspect of the embodiments of the present invention, a storage medium is also provided. The storage medium includes a stored program, and when the program runs, the device on which the storage medium resides is controlled to perform the above local binarization CNN processing method.
According to another aspect of the embodiments of the present invention, a processor is also provided. The processor is configured to run a program, and the program performs the above local binarization CNN processing method when it runs.
In the embodiments of the present invention, a first convolutional neural network is trained on a preset data set to obtain a second convolutional neural network; all convolutional layer units in the first convolutional neural network are replaced with local binarization convolution units to obtain a third convolutional neural network; the target fully connected layers in the third convolutional neural network are replaced with grouped fully connected layers to obtain a fourth convolutional neural network, where the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer; the fourth convolutional neural network is initialized to obtain a fifth convolutional neural network; and the fifth convolutional neural network is trained on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, which is the fifth convolutional neural network after it has reached a convergence state. The embodiments of the present invention thereby improve the operational efficiency of local binarization convolutional neural networks, reduce their performance degradation during operation, and lower their training difficulty, thus solving the technical problem of low operational efficiency of local binarization convolutional neural networks in the prior art.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For parts that are not described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are only schematic. For example, the division of the units may be a division of logical functions, and there may be other ways of dividing them in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above is only the preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the scope of protection of the present invention.

Claims (10)

  1. A local binarization CNN processing method, characterized by comprising:
    training a first convolutional neural network on a preset data set to obtain a second convolutional neural network;
    replacing all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain a third convolutional neural network;
    replacing the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain a fourth convolutional neural network, wherein the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer;
    initializing the fourth convolutional neural network to obtain a fifth convolutional neural network; and
    training the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, wherein the target convolutional neural network is the fifth convolutional neural network after it has reached a convergence state.
  2. The method according to claim 1, characterized in that the local binarization convolution unit comprises a binarized convolutional layer and a 1x1 grouped convolutional layer, and replacing all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain the third convolutional neural network comprises:
    setting a first group number for the 1x1 grouped convolutional layer in the local binarization convolution unit;
    expanding the output feature maps of the binarized convolutional layer into a two-dimensional topology according to the first group number to obtain first target groups; and
    connecting the convolution kernels of the 1x1 grouped convolutional layer to the output feature maps of the corresponding first target groups to obtain the third convolutional neural network.
  3. The method according to claim 1, characterized in that replacing the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain the fourth convolutional neural network comprises:
    setting a second group number for the grouped fully connected layer in the third convolutional neural network;
    expanding the input neurons of the grouped fully connected layer into a two-dimensional topology according to the second group number to obtain second target groups; and
    connecting the output neurons of the grouped fully connected layer to the input neurons of the corresponding second target groups to obtain the fourth convolutional neural network.
  4. The method according to claim 1, characterized in that training the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain the target convolutional neural network comprises:
    inputting the pictures corresponding to the training labels in the preset data set into the second convolutional neural network to obtain the output values of the second convolutional neural network;
    softening the output values to obtain softened output values;
    computing a weighted sum of the softened output values and the training labels to obtain target training labels; and
    training the fifth convolutional neural network with the target training labels using a preset stochastic gradient descent method to obtain the target convolutional neural network.
  5. A local binarization CNN processing device, characterized by comprising:
    a first training unit configured to train a first convolutional neural network on a preset data set to obtain a second convolutional neural network;
    a first replacement unit configured to replace all convolutional layer units in the first convolutional neural network with local binarization convolution units to obtain a third convolutional neural network;
    a second replacement unit configured to replace the target fully connected layers in the third convolutional neural network with grouped fully connected layers to obtain a fourth convolutional neural network, wherein the target fully connected layers are all the fully connected layers in the third convolutional neural network except the bottom classification layer;
    a processing unit configured to initialize the fourth convolutional neural network to obtain a fifth convolutional neural network; and
    a second training unit configured to train the fifth convolutional neural network on the preset data set based on the second convolutional neural network to obtain a target convolutional neural network, wherein the target convolutional neural network is the fifth convolutional neural network after it has reached a convergence state.
  6. The device according to claim 5, characterized in that the local binarization convolution unit comprises a binarized convolutional layer and a 1x1 grouped convolutional layer, and the first replacement unit comprises:
    a first setting subunit configured to set a first group number for the 1x1 grouped convolutional layer in the local binarization convolution unit;
    a first processing subunit configured to expand the output feature maps of the binarized convolutional layer into a two-dimensional topology according to the first group number to obtain first target groups; and
    a second processing subunit configured to connect the convolution kernels of the 1x1 grouped convolutional layer to the output feature maps of the corresponding first target groups to obtain the third convolutional neural network.
  7. The device according to claim 5, characterized in that the second replacement unit comprises:
    a second setting subunit configured to set a second group number for the grouped fully connected layer in the third convolutional neural network;
    a third processing subunit configured to expand the input neurons of the grouped fully connected layer into a two-dimensional topology according to the second group number to obtain second target groups; and
    a fourth processing subunit configured to connect the output neurons of the grouped fully connected layer to the input neurons of the corresponding second target groups to obtain the fourth convolutional neural network.
  8. The device according to claim 5, characterized in that the second training unit comprises:
    an input subunit configured to input the pictures corresponding to the training labels in the preset data set into the second convolutional neural network to obtain the output values of the second convolutional neural network;
    a fifth processing subunit configured to soften the output values to obtain softened output values;
    a computation subunit configured to compute a weighted sum of the softened output values and the training labels to obtain target training labels; and
    a training subunit configured to train the fifth convolutional neural network with the target training labels using a preset stochastic gradient descent method to obtain the target convolutional neural network.
  9. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, the device on which the storage medium resides is controlled to perform the local binarization CNN processing method according to any one of claims 1 to 4.
  10. A kind of 10. processor, it is characterised in that the processor is used for operation program, wherein, right of execution when described program is run Profit requires the 1 local binarization CNN into claim 4 described in any one processing method.
CN201710720466.2A 2017-08-21 2017-08-21 Local binarization CNN processing method, device, storage medium and processor Pending CN107491787A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710720466.2A CN107491787A (en) 2017-08-21 2017-08-21 Local binarization CNN processing method, device, storage medium and processor

Publications (1)

Publication Number Publication Date
CN107491787A 2017-12-19

Family

ID=60646327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710720466.2A Pending CN107491787A (en) 2017-08-21 2017-08-21 Local binarization CNN processing method, device, storage medium and processor

Country Status (1)

Country Link
CN (1) CN107491787A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060159120A1 (en) * 2005-01-17 2006-07-20 Joonsuk Kim Method and system for rate selection algorithm to maximize throughput in closed loop multiple input multiple output (MIMO) wireless local area network (WLAN) system
CN107480640A (en) * 2017-08-16 2017-12-15 Shanghai Hefu Artificial Intelligence Technology (Group) Co., Ltd. Face alignment method based on binary convolutional neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Felix Juefei-Xu et al.: "Local Binary Convolutional Neural Networks", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
Leibin Ni et al.: "An energy-efficient and high-throughput bitwise CNN on sneak-path-free digital ReRAM crossbar", 2017 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED) *
Mohammad Rastegari et al.: "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks", Computer Vision - ECCV 2016 *
Yiwen Guo et al.: "Network Sketching: Exploiting Binary Structure in Deep CNNs", The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
Wang Dawei et al.: "Face recognition based on LBP and convolutional neural networks", Journal of Tianjin University of Technology *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111247527B (en) * 2017-12-20 2023-08-22 Huawei Technologies Co., Ltd. Method and device for determining characteristic images in convolutional neural network model
CN111247527A (en) * 2017-12-20 2020-06-05 Huawei Technologies Co., Ltd. Method and device for determining characteristic image in convolutional neural network model
FR3076935A1 (en) * 2018-01-15 2019-07-19 Idemia Identity & Security France Methods for learning parameters of a convolutional neural network, and classifying an input datum
US11574180B2 (en) 2018-01-15 2023-02-07 Idemia Identity & Security France Methods for learning parameters of a convolutional neural network, and classifying an input datum
EP3511870A1 (en) * 2018-01-15 2019-07-17 Idemia Identity & Security France Methods for learning of parameters of a convolutional neural network, and classification of input data
CN110245748B (en) * 2018-03-09 2021-07-13 Xilinx Electronics Technology (Beijing) Co., Ltd. Convolutional neural network implementation method, device, hardware accelerator and storage medium
CN110245748A (en) * 2018-03-09 2019-09-17 Beijing DeePhi Intelligent Technology Co., Ltd. Convolutional neural network implementation method, device, hardware accelerator and storage medium
CN108765506B (en) * 2018-05-21 2021-01-29 Shanghai Jiao Tong University Layer-by-layer network binarization-based compression method
CN108765506A (en) * 2018-05-21 2018-11-06 Shanghai Jiao Tong University Compression method based on layer-by-layer network binarization
CN109117939A (en) * 2018-06-11 2019-01-01 Northwest University Neural network and method for deploying a neural network at the mobile sensing edge
CN109117940A (en) * 2018-06-19 2019-01-01 Tencent Technology (Shenzhen) Co., Ltd. Forward acceleration method, apparatus and system for a convolutional neural network
CN109117940B (en) * 2018-06-19 2020-12-15 Tencent Technology (Shenzhen) Co., Ltd. Target detection method, device, terminal and storage medium based on convolutional neural network
CN109002890A (en) * 2018-07-11 2018-12-14 Beihang University Modeling method and device for a convolutional neural network model
US20210150313A1 (en) * 2019-11-15 2021-05-20 Samsung Electronics Co., Ltd. Electronic device and method for inference of binary and ternary neural networks
CN113298102A (en) * 2020-02-23 2021-08-24 Momenta (Suzhou) Technology Co., Ltd. Training method and device for target classification model
CN113298102B (en) * 2020-02-23 2022-06-24 Momenta (Suzhou) Technology Co., Ltd. Training method and device for target classification model
CN113098805A (en) * 2021-04-01 2021-07-09 Tsinghua University Efficient MIMO channel feedback method and device based on a binarized neural network

Similar Documents

Publication Publication Date Title
CN107491787A (en) Local binarization CNN processing method, device, storage medium and processor
CN107977704A (en) Weighted data storage method and neural network processor based on the method
CN107316079A (en) Processing method, device, storage medium and processor for a terminal convolutional neural network
CN107729872A (en) Facial expression recognition method and device based on deep learning
US20190005380A1 (en) Classifying features using a neurosynaptic system
CN106485324A (en) Convolutional neural network optimization method
CN110209825A (en) Fast network representation learning algorithm based on a broad learning system
CN106980854A (en) License plate number recognition method, device, storage medium and processor
CN107506722A (en) Facial emotion recognition method based on a deep sparse convolutional neural network
CN108090433A (en) Face recognition method and device, storage medium and processor
CN107909206A (en) PM2.5 forecasting method based on a deep-structure recurrent neural network
CN108009638A (en) Training method for a neural network model, electronic device and storage medium
CN107808150A (en) Human action recognition method for video, device, storage medium and processor
CN106488313A (en) TV station logo recognition method and system
CN106570522A (en) Object recognition model establishment method and object recognition method
CN106600049A (en) Path generation method and apparatus thereof
CN108537747A (en) Image inpainting method based on a convolutional neural network with symmetric parallel connections
CN107654406A (en) Fan air supply control device, fan air supply control method and device
CN110222607A (en) Method, apparatus and system for face key point detection
CN112381179A (en) Heterogeneous graph classification method based on double-layer attention mechanism
CN108197594A (en) Method and apparatus for determining pupil position
CN107463932A (en) Method for extracting image features using a binary bottleneck neural network
CN107862380A (en) Artificial neural network computing circuit
CN112860856B (en) Intelligent solving method and system for arithmetic word problems
CN111199255A (en) Small target detection network model and detection method based on the Darknet53 network

Legal Events

PB01: Publication (application publication date: 2017-12-19)
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication