CN110288030A - Image-recognizing method, device and equipment based on lightweight network model - Google Patents
Image recognition method, device and equipment based on a lightweight network model
- Publication number
- CN110288030A (application CN201910566189.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- node
- network model
- characteristic
- neural networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image recognition method, device and equipment based on a lightweight network model. The image recognition method comprises the following steps: S1, obtaining a target image to be identified; S2, inputting the target image into a trained lightweight network model; S3, classifying the target image with the trained lightweight network model. The process of obtaining the lightweight network model comprises the following steps: S21, constructing a variant convolutional neural network without a fully connected layer; S22, classifying images with a softmax classifier and updating the weights of the convolutional layers accordingly; S23, extracting the image features again with the weight-updated variant convolutional neural network, and standardizing the features; S24, generating feature nodes and enhancement nodes from the standardized features according to the construction method of the broad network, determining the final numbers of feature nodes and enhancement nodes, and constructing the lightweight network model.
Description
Technical field
The present invention relates to the technical field of pattern recognition, and in particular to an image recognition method, device and equipment based on a lightweight network model, and a readable storage medium.
Background art
When deep neural networks are applied to the field of image recognition, they involve a large number of hyperparameters and complex structures. This complexity makes it extremely difficult to analyze deep structures theoretically, and most work involves tuning parameters or stacking more layers to obtain better precision. Therefore, although deep neural networks achieve high precision, their computation and training times are long. The broad network (BLS) proposed in the article "Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture" is designed based on the idea of RVFLNN. Compared with a "deep" structure, the "width" structure is very concise because there is no coupling between layers, and the training process of BLS reduces the dependence on computing and storage resources. Because there are no multi-layer connections, BLS does not need to update weights by gradient descent; it finds the required connection weights through the ridge-regression pseudoinverse of a matrix. When the accuracy of the network does not meet the requirement, the accuracy is improved by increasing the "width" of the network, and the network is quickly rebuilt with an incremental learning algorithm without retraining, so its computation speed is significantly better than that of deep learning. Although the speed improvement of BLS is obvious, when it is applied to the field of image classification and recognition, its classification accuracy is not high enough.
Summary of the invention
The object of the present invention is to overcome the above deficiencies of the prior art and to provide an image recognition method, device, equipment and readable storage medium based on a lightweight network model, so as to achieve a balance among model size, efficiency, resources and precision in image classification and thereby realize fast and accurate image classification, solving the problems that deep networks depend on expensive hardware configurations and are costly in computation and training time, while the precision of the broad network is not high.
In order to achieve the above object of the invention, the present invention provides the following technical solutions:
An image recognition method based on a lightweight network model, comprising the following steps:
S1, obtaining a target image to be identified;
S2, inputting the target image into a trained lightweight network model;
S3, classifying the target image with the trained lightweight network model.
The process of obtaining the lightweight network model comprises the following steps:
S21, constructing, according to the construction of convolutional neural networks, a variant convolutional neural network without a fully connected layer; the variant convolutional neural network comprises one or more network layers, each network layer comprising a convolutional layer and a pooling layer;
S22, passing the labeled images to be classified through the variant convolutional neural network to obtain their feature images, inputting these features into a softmax classifier, classifying the images with the softmax classifier, and updating the weights of the convolutional layers through a loss function according to the classification results and the true labels of the images;
S23, extracting the image features again with the weight-updated variant convolutional neural network, and standardizing the features;
S24, generating feature nodes and enhancement nodes from the standardized features according to the construction method of the broad network, determining the final numbers of feature nodes and enhancement nodes, and constructing the lightweight network model.
Preferably, the convolution operation of the convolutional layers in the step S21 is as follows:
Assume that {X_i | i = 1, 2, ..., n} is the raw image data set, where X_i is the i-th image; l denotes the network layer index, the convolution filter of the l-th layer has size k_l * k_l and depth d_l, and the moving step of the convolution filter is s_l.
The series of convolution operations performed on the i-th image X_i is expressed as:
U_i^(l) = W^(l) ⊗ C_i^(l-1) + b^(l);
where l and l-1 denote the convolutional layer indices, l being the current layer and l-1 the previous layer; U_i^(l) denotes the image output by the convolutional layer of the l-th layer, C_i^(l-1) denotes the image output after the i-th image is processed by the pooling layer of the (l-1)-th network layer, and C_i^(0) is the original image X_i; W is the weight of the convolutional layer and b is the bias of the convolutional layer, both randomly generated, i.e. W^(l) is the weight matrix of the l-th convolutional layer and b^(l) is the bias matrix of the l-th convolutional layer; ⊗ denotes the convolution operation.
Preferably, the pooling operation of the pooling layers in the step S21 is as follows:
The max-pooling mode of operation is used, and the step of each pooling layer is t_l; therefore, the single-channel pooling operation on a feature image after a convolution operation is expanded as:
C_{gh} = max_{1<=p<=t_l, 1<=q<=t_l} U_{(g-1)t_l+p, (h-1)t_l+q};
where C is the feature image output after the pooling operation.
Preferably, the detailed process of the step S22 is as follows:
The feature image data of the training samples output by the multi-layer convolution and pooling operations is expressed as C' = {U'_1, U'_2, ..., U'_n}, where C' is the set of feature images output after the multi-layer convolution and pooling operations, n is the number of images, the vector U'_i is the feature image of the i-th image extracted by the variant CNN, and i ∈ [1, n]. The images are to be divided into K classes, so the feature images output by the multi-layer convolution and pooling operations are fully connected to K nodes, expressed as:
Y_y = W_Y C' + b_Y, y ∈ [1, K];
where Y_y is the output of the y-th node, W_Y is the weight of the node operation, and b_Y is the bias of the node operation.
After the full connection to K nodes, the operation results are classified with the softmax classifier. The softmax algorithm is:
S_y = e^{a_y} / (Σ_{k=1}^{K} e^{a_k});
where S_y denotes the probability that the image belongs to the y-th class, a_y is the value of the y-th class, a_k is the value of the k-th class, and k ∈ [1, K]; a_y is the output Y_y of the y-th fully connected node.
Taking the cross entropy L as the loss function, the cross entropy L is expressed as:
L = -Σ_{y=1}^{K} ŷ_y log S_y,
where ŷ_y is the true label of the y-th class of the marked image.
Based on the cross-entropy loss function, the weights of the variant convolutional neural network are updated with the Adam algorithm.
Preferably, the detailed process of the step S23 is as follows:
The image features are extracted again with the weight-updated variant convolutional neural network, and the features are standardized. The detailed standardization process is:
U''_i = (U'_i - μ_i) / σ;
where U'_i denotes the feature image of the i-th image extracted by the trained, weight-updated variant convolutional neural network, μ_i is the mean of the output feature image of the i-th image, σ is the standard deviation of the output feature image of the i-th image, and U''_i is the feature image of the i-th image after standardization.
The standardized feature set after standardizing the feature images of all images is expressed as:
C'' = {U''_1, U''_2, ..., U''_n}.
Preferably, the detailed process of the step S24 is as follows:
The feature nodes of the broad network are expressed as:
Z_r = φ(C'' W_{e_r} + β_{e_r}), r = 1, 2, ..., e;
where Z_r denotes a feature node of the broad network; φ is an arbitrary function; C'' is the set of feature images after standardization; W_{e_r} is the random weight coefficient of the feature node with appropriate dimensions; β_{e_r} is the bias of the feature node; and e is the number of feature nodes of the broad network.
Define Z^E = [Z_1, Z_2, ..., Z_e];
then the enhancement nodes are expressed as:
H_j = ζ(Z^E W_{h_j} + β_{h_j}), j = 1, 2, ..., f;
Define H^F = [H_1, H_2, ..., H_f];
where H_j denotes an enhancement node of the broad network; ζ is an activation function; W_{h_j} is the random weight coefficient of the enhancement node with appropriate dimensions; β_{h_j} is the bias of the enhancement node; and f is the number of enhancement nodes of the broad network.
The lightweight network structure as a whole is expressed as:
Y = [Z^E | H^F] W.
An image recognition device based on a lightweight network model, comprising a target image acquisition module, a target image input module, a classification and recognition module, and a target model acquisition module.
The target image acquisition module is used for obtaining a target image to be identified.
The target image input module is used for inputting the target image into the target lightweight network model; the lightweight network model is constructed from the variant convolutional neural network without a fully connected layer and the broad network.
The classification and recognition module is used for classifying the target image with the target lightweight network model to obtain a recognition result.
The target model acquisition module comprises:
a model construction unit, for constructing the variant convolutional neural network without a fully connected layer, obtaining the feature nodes and enhancement nodes of the broad network based on the construction steps of the broad network, and constructing the lightweight network model;
a loss function insertion unit, for inserting the loss function into the variant convolutional neural network;
a training unit, for training the variant convolutional neural network with the loss function in combination with the softmax classifier, updating the parameters, and obtaining the target model.
An image recognition equipment based on a lightweight network model, comprising:
a memory, for storing a computer program;
a processor, for implementing the steps of the above image recognition method based on a lightweight network model when executing the computer program.
A readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above image recognition method based on a lightweight network model.
Compared with the prior art, the beneficial effects of the present invention are: image features are extracted by the variant convolutional neural network and input into the network structure built according to the construction steps of the broad network to obtain the lightweight network model. The model reduces the dependence of image recognition on computer storage resources, effectively shortens the training time, improves image classification accuracy, and achieves a balance among model size, efficiency, resources and precision.
Brief description of the drawings:
Fig. 1 is the lightweight network structure diagram of exemplary embodiment 1 of the present invention;
Fig. 2 is the flow chart of the image recognition method based on the lightweight network model of exemplary embodiment 1 of the present invention;
Fig. 3 is the detailed flow chart of step S2 of the image recognition method based on the lightweight network model of exemplary embodiment 1 of the present invention;
Fig. 4 is the feature effect diagram of the MNIST data set extracted by the image recognition method based on the lightweight network model in exemplary embodiment 2 of the present invention;
Fig. 5 is the schematic structural diagram of the image recognition device based on the lightweight network model in exemplary embodiment 3 of the present invention;
Fig. 6 is the schematic structural diagram of the image recognition equipment based on the lightweight network model in exemplary embodiment 4 of the present invention;
Fig. 7 is the detailed schematic structural diagram of the image recognition equipment based on the lightweight network model in exemplary embodiment 4 of the present invention.
Specific embodiments
The present invention is described in further detail below with reference to test examples and specific embodiments. This should not be understood as limiting the scope of the above subject matter of the present invention to the following embodiments; all technologies realized based on the content of the present invention belong to the scope of the present invention.
Embodiment 1
As shown in Fig. 1 to Fig. 3, this embodiment provides an image recognition method based on a lightweight network model, specifically comprising the following steps:
S1, obtaining a target image to be identified;
S2, inputting the target image into the trained lightweight network model;
S3, classifying the target image with the trained lightweight network model.
The process of obtaining the lightweight network model comprises the following steps:
S21, constructing, according to the construction of convolutional neural networks, a variant convolutional neural network (CNN) without a fully connected layer; the variant convolutional neural network comprises one or more network layers, each network layer comprising a convolutional layer and a pooling layer;
S22, inputting the labeled images to be classified; obtaining the features of the images through the variant convolutional neural network, inputting the features into the softmax classifier, classifying the images with the softmax classifier, and updating the weights of the convolutional layers through the loss function according to the classification results and the true labels of the images;
S23, extracting the image features again with the weight-updated variant convolutional neural network, and standardizing the features;
S24, generating a certain number of feature nodes and enhancement nodes from the standardized features according to the construction method of the broad network, determining the final numbers of feature nodes and enhancement nodes, and constructing the lightweight network model.
The above steps use a convolutional neural network without a fully connected layer for image feature extraction; the weight sharing of the convolution operation and the pooling operation reduce the order of magnitude of the network parameters and enhance the features of the input data of the broad network BLS. By combining the variant convolutional neural network and the broad network, a lightweight network structure is constructed, realizing fast and accurate image classification.
LeNet, AlexNet, VggNet, ResNet, etc. are common network structure models in the field of convolutional neural networks; their constructions differ to some extent, and they are adapted to data sets of different sizes. This embodiment selects a suitable number of convolutional and pooling layers, the size and moving step of the convolution filters, and the pooling mode of operation according to the size of the image data set to be classified, so as to construct the variant convolutional neural network without a fully connected layer. The pooling operation has two modes: taking the maximum value and taking the average value.
The convolution operation of the convolutional layers in step S21 is as follows:
Assume that {X_i | i = 1, 2, ..., n} is the raw image data set, where X_i is the i-th image; l denotes the network layer index, the convolution filter of the l-th layer has size k_l * k_l and depth d_l, and the moving step of the convolution filter is s_l.
The series of convolution operations performed on the i-th image X_i is expressed as:
U_i^(l) = W^(l) ⊗ C_i^(l-1) + b^(l);
where l and l-1 denote the convolutional layer indices, l being the current layer and l-1 the previous layer; U_i^(l) denotes the image output by the convolutional layer of the l-th network layer, C_i^(l-1) denotes the image output after the i-th image is processed by the pooling layer of the (l-1)-th network layer, and C_i^(0) is the original image X_i; W is the weight of the convolutional layer and b is the bias of the convolutional layer, both randomly generated, i.e. W^(l) is the weight matrix of the l-th convolutional layer and b^(l) is the bias matrix of the l-th convolutional layer; ⊗ denotes the convolution operation.
The single-channel convolution operation on one image is expanded as:
U_{gh} = Σ_{p=1}^{c} Σ_{q=1}^{c} W_{pq} X_{(g-1)s+p, (h-1)s+q} + b;
where W_{pq} is the weight coefficient in the weight matrix, p ∈ (1, c), q ∈ (1, c), and the size of the convolution kernel is c*c; X_{gh} is the pixel value of the image input to the convolutional layer, g ∈ (1, m), h ∈ (1, m), and the image dimension is m*m; b is the bias of the convolutional layer; s is the moving step of the convolution filter. Since images are usually resized, enhanced and denoised before recognition, the image input to the network is generally square, so the image here is taken to be square with resolution m*m; however, the images of the present application may have other shapes, and various replacements, modifications and improvements made by those skilled in the art without departing from the principle and scope of the present invention shall be included within the protection scope of the present invention.
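As an illustration, the single-channel convolution expansion above can be sketched in Python with NumPy. This is a minimal sketch with toy sizes and without the zero-padding discussed below; the function name and test values are ours, not from the patent:

```python
import numpy as np

def conv2d_single(X, W, b, s=1):
    """Single-channel convolution of an m*m image X with a c*c filter W,
    stride s and scalar bias b, following the expansion
    U[g,h] = sum_pq W[p,q] * X[(g-1)s+p, (h-1)s+q] + b (0-based here)."""
    c = W.shape[0]
    m = X.shape[0]
    out = (m - c) // s + 1          # output size without padding
    U = np.zeros((out, out))
    for g in range(out):
        for h in range(out):
            U[g, h] = np.sum(W * X[g*s:g*s+c, h*s:h*s+c]) + b
    return U

X = np.arange(16, dtype=float).reshape(4, 4)   # toy 4*4 "image"
W = np.ones((2, 2))                            # toy 2*2 filter
U = conv2d_single(X, W, b=0.0, s=1)
print(U.shape)   # (3, 3)
```

Each output pixel is the sum of the filter weights multiplied element-wise with the covered window, plus the bias, exactly as in the expansion above.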
After the convolution operation, the height new_height and width new_width of the output image feature matrix are m/s. For convenience of matrix calculation, the matrix of the image input to the convolution operation needs to be expanded, with the supplemented pixels set to 0. The number of pixels the input image matrix needs to be expanded in height is:
pad_needed_height = (new_height - 1) * s + c - m;
then the number of pixels pad_top to be expanded above the input matrix and the number of pixels pad_bottom to be expanded below it are calculated as follows:
pad_top = pad_needed_height / 2, pad_bottom = pad_needed_height - pad_top;
the number of pixels pad_left to be expanded on the left of the input matrix and the number of pixels pad_right to be expanded on the right are calculated as follows:
pad_left = pad_top, pad_right = pad_bottom.
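The padding sizes above can be checked with a short sketch. We assume new_height = ceil(m/s), as in the usual "SAME" convolution that keeps the output at m/s; the helper name is illustrative:

```python
import math

def same_padding(m, c, s):
    """Zero-padding sizes for an m*m image, c*c filter, stride s,
    following pad_needed = (new_height - 1)*s + c - m, split between
    top/bottom (and left/right analogously, as in the text)."""
    new_height = math.ceil(m / s)                  # output size m/s
    pad_needed = max((new_height - 1) * s + c - m, 0)
    pad_top = pad_needed // 2
    pad_bottom = pad_needed - pad_top
    # the text sets pad_left = pad_top and pad_right = pad_bottom
    return pad_top, pad_bottom, pad_top, pad_bottom

print(same_padding(28, 5, 1))   # (2, 2, 2, 2)
```

For a 28*28 MNIST image and a 5*5 filter of step 1 (the configuration of embodiment 2), two rows/columns of zeros are added on each side, keeping the output at 28*28.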
The pooling operation of the pooling layers in step S21 is as follows:
The pooling operation of this embodiment uses the max-pooling mode; the step of the pooling layer of the l-th network layer is t_l, and the moving step of the convolution filter of the l-th network layer is s_l. Therefore, the single-channel pooling operation on the feature image after a convolution operation is expanded as:
C_{gh} = max_{1<=p<=t_l, 1<=q<=t_l} U_{(g-1)t_l+p, (h-1)t_l+q};
where C is the feature image output after the pooling operation.
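A minimal sketch of the single-channel max-pooling expansion above (toy sizes; the function name is ours):

```python
import numpy as np

def max_pool(U, t):
    """Max pooling over one channel with window and step t:
    C[g,h] = max of the t*t block of U starting at (g*t, h*t)."""
    m = U.shape[0]
    out = m // t
    C = np.zeros((out, out))
    for g in range(out):
        for h in range(out):
            C[g, h] = U[g*t:(g+1)*t, h*t:(h+1)*t].max()
    return C

U = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(U, 2))   # the four 2*2 block maxima: 5, 7, 13, 15
```

With step t = 2 the spatial size is halved, which is why the 28*28 feature maps of embodiment 2 shrink to 14*14 and then 7*7.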
The detailed process of step S22 is as follows.
The feature image data of the training samples output by the multi-layer convolution and pooling operations is expressed as C' = {U'_1, U'_2, ..., U'_n}, where C' is the set of feature images output after the multi-layer convolution and pooling operations, n is the number of images, the vector U'_i is the set of pixel values of the feature image of the i-th image extracted by the variant CNN, and i ∈ [1, n]. The images are to be divided into K classes, so the feature images output by the multi-layer convolution and pooling operations are fully connected to K nodes, expressed as:
Y_y = W_Y C' + b_Y, y ∈ [1, K],
where Y_y is the output of the y-th node, W_Y is the weight of the node operation, and b_Y is the bias of the node operation.
After the full connection to K nodes, the operation results are classified with the softmax classifier. The softmax algorithm is:
S_y = e^{a_y} / (Σ_{k=1}^{K} e^{a_k});
where S_y denotes the probability that the image belongs to the y-th class, a_y is the value of the y-th class, a_k is the value of the k-th class, and k ∈ [1, K]; a_y is the output Y_y of the y-th fully connected node.
The loss function of this embodiment takes the cross entropy L, expressed as:
L = -Σ_{y=1}^{K} ŷ_y log S_y,
where ŷ_y is the true label of the y-th class of the marked image.
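The softmax probabilities and cross-entropy loss described above can be sketched for a single image; the logits a below are stand-ins for the K fully connected outputs Y_y:

```python
import numpy as np

def softmax(a):
    """S_y = e^{a_y} / sum_k e^{a_k}; shifted by max(a) for numerical
    stability (the shift does not change the result)."""
    e = np.exp(a - a.max())
    return e / e.sum()

def cross_entropy(S, y):
    """L = -log S_y for one image whose true class index is y
    (the one-hot label picks out a single term of the sum)."""
    return -np.log(S[y])

a = np.array([2.0, 1.0, 0.1])     # stand-ins for K = 3 node outputs
S = softmax(a)
print(round(S.sum(), 6))          # 1.0
```

The probabilities sum to 1, and the loss is small when the probability assigned to the true class is large, which is what drives the weight update.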
Based on the operation value of the loss function, the weights of the variant convolutional neural network are updated with the Adam algorithm. During the construction of the variant convolutional neural network, the weight and bias parameters are generated randomly, so the features extracted after the convolution and pooling operations are not guaranteed to be good. To solve this problem, the network weights are updated with the Adam algorithm. The Adam algorithm designs independent adaptive learning rates for different parameters through the first-order and second-order moment estimates of the stochastic gradient, which is advantageous in non-convex optimization problems. Compared with other existing optimization algorithms (such as gradient descent, Adadelta and Adagrad), Adam optimizes better, and the image classification accuracy of the network optimized with the Adam algorithm is high.
The Adam algorithm for updating the weights of the variant convolutional neural network is:
z_{λ-1} = β_1 z_{λ-2} + (1 - β_1) f'(θ_{λ-1}),
v_{λ-1} = β_2 v_{λ-2} + (1 - β_2) f'(θ_{λ-1})^2,
ẑ_{λ-1} = z_{λ-1} / (1 - β_1^λ), v̂_{λ-1} = v_{λ-1} / (1 - β_2^λ),
θ_λ = θ_{λ-1} - α ẑ_{λ-1} / (sqrt(v̂_{λ-1}) + ε);
where λ denotes the number of iterations; α is the hyperparameter learning rate; β_1, β_2 are the exponential decay rates of the hyperparameter moment estimates, used to control the decay rate of the moving averages; ε is the smoothing term; θ is the variable being optimized; z and v are vectors initialized to zero, so the moment estimates are biased toward zero; bias correction is done on them, and the bias-corrected ẑ_{λ-1}, v̂_{λ-1} offset these biases; θ_λ denotes the θ vector at the λ-th iteration.
In this embodiment, the parameters β_1 and β_2 take the values 0.9 and 0.999 respectively, the smoothing term ε takes the value 10^-8, and the learning rate α is fine-tuned during network training. Small-batch samples are chosen for each weight update, to reduce the number of iterations of network training as far as possible.
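The Adam recursions above can be sketched on a toy scalar problem rather than the full network; the step function and toy objective are ours, with the β_1, β_2 and ε values of this embodiment:

```python
import numpy as np

def adam_step(theta, grad, z, v, lam, alpha=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam iteration lam (1-based): first/second moment updates
    with decay rates beta1, beta2, bias correction, and the step."""
    z = beta1 * z + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    z_hat = z / (1 - beta1 ** lam)     # bias-corrected first moment
    v_hat = v / (1 - beta2 ** lam)     # bias-corrected second moment
    theta = theta - alpha * z_hat / (np.sqrt(v_hat) + eps)
    return theta, z, v

# toy objective: minimize f(theta) = theta^2, so f'(theta) = 2*theta
theta, z, v = 5.0, 0.0, 0.0
for lam in range(1, 2001):
    theta, z, v = adam_step(theta, 2 * theta, z, v, lam, alpha=0.05)
print(abs(theta))
```

The iterate is driven toward the minimizer at 0; in the patent the gradient f'(θ) would instead be the cross-entropy gradient with respect to the convolutional weights and biases.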
The parameters are updated with the gradient descent algorithm: the loss function is minimized and the corresponding model parameter values are solved iteratively, step by step. The specific calculation process is as follows:
the derivative of the loss function with respect to the convolutional layer weight W is:
∂L/∂W_{pq}^(l) = Σ_g Σ_h (∂L/∂U_{gh}^(l)) (∂U_{gh}^(l)/∂W_{pq}^(l));
the derivative of the loss function with respect to the convolutional layer bias b is:
∂L/∂b^(l) = Σ_g Σ_h ∂L/∂U_{gh}^(l);
where ∂U_{gh}^(l)/∂W_{pq}^(l) is calculated as follows:
∂U_{gh}^(l)/∂W_{pq}^(l) = X_{(g-1)s+p, (h-1)s+q}^(l);
W_{pq}^(l) is the weight coefficient in row p, column q of the weight matrix of the l-th layer, X_{gh}^(l) is the pixel value in row g, column h of the image input to the convolutional layer of the l-th layer, b^(l) is the bias of the l-th layer, and U_{gh}^(l) is the value of the pixel in row g, column h of the image processed by the convolutional layer of the l-th layer.
According to the construction of convolutional neural networks, the variant convolutional neural network without a fully connected layer is constructed. The variant convolutional neural network comprises convolutional layers and pooling layers; the weight sharing of the convolution operation and the pooling operation reduce the order of magnitude of the network parameters, so that image features are better extracted. The images are then classified with the softmax classifier, and the weights of the variant convolutional neural network are updated by gradient descent with the Adam algorithm based on the operation value of the cross-entropy loss function. The feature extraction of the weight-updated variant convolutional neural network is more accurate.
In step S23, the image features are extracted again with the weight-updated variant convolutional neural network, and the features are standardized. The detailed standardization process is:
U''_i = (U'_i - μ_i) / σ;
where U'_i denotes the set of pixel values of the feature image of the i-th image extracted by the trained, weight-updated variant CNN, μ_i is the mean of the output feature image of the i-th image, σ is the standard deviation of the output feature image of the i-th image, and U''_i is the set of pixel values of the feature image of the i-th image after standardization.
The standardized feature set after standardizing the feature images of all images is expressed as:
C'' = {U''_1, U''_2, ..., U''_n}.
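The per-image standardization above can be sketched as follows (a toy 2*2 feature image; we divide by the standard deviation, consistently with the formula):

```python
import numpy as np

def standardize(U):
    """U'' = (U' - mu) / sigma, where mu and sigma are the mean and
    standard deviation of this image's own feature map."""
    mu = U.mean()
    sigma = U.std()
    return (U - mu) / sigma

U = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy feature image
Us = standardize(U)
print(Us.mean(), Us.std())
```

After standardization each feature image has zero mean and unit standard deviation, which centers the data before it is fed to the broad-network stage.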
In step S24, a certain number of feature nodes and enhancement nodes are generated from the standardized features according to the construction method of the broad network, the final numbers of feature nodes and enhancement nodes are determined with the grid search method, and the lightweight network model is constructed. The feature nodes of the broad network are used to extract the features of the images, while the enhancement nodes increase the nonlinearity of the whole network and are used for classification. The detailed construction process is as follows:
The feature nodes of the broad network are expressed as:
Z_r = φ(C'' W_{e_r} + β_{e_r}), r = 1, 2, ..., e;
where Z_r denotes a feature node of the broad network; φ is an arbitrary function; C'' is the set of feature images after standardization; W_{e_r} is the random weight coefficient of the feature node with appropriate dimensions; β_{e_r} is the bias of the feature node; and e is the number of feature nodes of the broad network.
Define Z^E = [Z_1, Z_2, ..., Z_e];
then the enhancement nodes are expressed as:
H_j = ζ(Z^E W_{h_j} + β_{h_j}), j = 1, 2, ..., f;
Define H^F = [H_1, H_2, ..., H_f];
where H_j denotes an enhancement node of the broad network; ζ is an activation function; W_{h_j} is the random weight coefficient of the enhancement node with appropriate dimensions; β_{h_j} is the bias of the enhancement node; and f is the number of enhancement nodes of the broad network.
The lightweight network structure as a whole is expressed as:
Y = [Z^E | H^F] W;
where W is the connection weight from the feature and enhancement nodes to the output, found through the ridge-regression pseudoinverse.
Through the above steps, image features are extracted by the variant convolutional neural network (CNN) and input into the lightweight network model built according to the construction steps of the broad network, and image recognition is performed rapidly. The image recognition method based on the lightweight network model described in this embodiment reduces the dependence on computer storage resources, effectively shortens the training time, improves image classification accuracy, and better balances the three requirements of efficiency, resources and precision.
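The broad-network stage of step S24 can be sketched end to end: random-weight feature nodes, tanh enhancement nodes, and output weights found in closed form by ridge regression (the pseudoinverse solution mentioned in the background). Node counts, φ = identity, ζ = tanh and the toy data are illustrative choices of ours, not the patent's tuned configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

class BLS:
    """Minimal broad-network sketch: Z = phi(C W_e + beta_e),
    H = zeta(Z W_h + beta_h), output weights by ridge regression
    W = (A^T A + lam I)^-1 A^T Y with A = [Z | H]."""

    def __init__(self, d, e=10, f=40, lam=1e-3):
        self.We = rng.standard_normal((d, e))   # random feature weights
        self.be = rng.standard_normal(e)
        self.Wh = rng.standard_normal((e, f))   # random enhancement weights
        self.bh = rng.standard_normal(f)
        self.lam = lam

    def _nodes(self, C):
        ZE = C @ self.We + self.be              # feature nodes, phi = identity
        HF = np.tanh(ZE @ self.Wh + self.bh)    # enhancement nodes, zeta = tanh
        return np.hstack([ZE, HF])

    def fit(self, C, Y):
        A = self._nodes(C)
        self.Wout = np.linalg.solve(
            A.T @ A + self.lam * np.eye(A.shape[1]), A.T @ Y)

    def predict(self, C):
        return self._nodes(C) @ self.Wout

# toy two-class problem on 5-dimensional "features"
X = rng.standard_normal((200, 5))
Y = np.eye(2)[(X[:, 0] > 0).astype(int)]       # one-hot labels
net = BLS(d=5)
net.fit(X, Y)
acc = (net.predict(X).argmax(1) == Y.argmax(1)).mean()
print(acc)
```

Only the output weights are trained, and in closed form, which is why the broad-network stage needs no gradient descent and trains quickly; in the patent the input C would be the standardized CNN features rather than random toy data.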
Embodiment 2
The image recognition method based on the lightweight network model described in embodiment 1 can be widely used in the field of image recognition. This embodiment tests and trains the network with the MNIST data set; the detailed process is as follows:
A variant convolutional neural network (CNN) is constructed to extract image features. Taking the LeNet network structure as a reference, the variant CNN is built with a first convolutional layer, a first pooling layer, a second convolutional layer and a second pooling layer connected in order. The convolution filters of the first and second convolutional layers are both of size 5*5 with moving step 1; the depth of the filters of the first convolutional layer is 32, and the depth of the filters of the second convolutional layer is 64. The first and second pooling layers both use max pooling, and the moving step of the pooling layer filters is 2. The result of the last pooling layer is fully connected to a certain number of nodes, constructing the variant CNN without a fully connected layer; classification is then done with the softmax classifier, with the cross entropy as the loss function.
The 60000 images of size 28*28*1 in the MNIST data set are input as the training samples of the variant CNN, denoted {X_i | i = 1, 2, ..., 60000}. The images are processed by the convolution and pooling operations according to the detailed process described in embodiment 1: X_i is expanded by the calculation process of one filter channel of the first convolutional layer, and the result of that channel then passes through the first pooling layer filter; in this way, the final output result is obtained for any image of the data set. The results of the last pooling layer of all training sample images are fully connected to 10 nodes, giving the corresponding outputs. The variant CNN without a fully connected layer is thus constructed; classification is then done with the softmax classifier, with the cross entropy as the loss function, and the network weights are updated with the Adam algorithm.
Each weight update draws 100 samples, and 1000 iterations are performed; the number of iterations here can be suitably increased or decreased according to the need to improve precision and the time cost. Suppose that after 1000 iterations the network weights are W' and the biases are b'. The features of the images are then extracted again using the weight-updated variant convolutional neural network, forming the feature set of all images.
The features extracted by the variant CNN are more principled than simple projection, direction or center-of-gravity features. The fitting capability of the overall model can be controlled through the sizes of the convolution, the pooling and the finally output feature vector: the dimension of the feature vector can be reduced when over-fitting occurs, and the output dimension of the convolutional layer can be increased when under-fitting occurs, which is more flexible than other feature extraction methods.
The image feature set extracted by the weight-updated variant convolutional neural network is standardized. Image standardization centers the data by removing the mean; according to convex optimization theory and knowledge of data probability distributions, centered data better conforms to the distribution of the data and more readily generalizes after training. The image feature of each image is standardized, and the standardized feature set of all images is obtained.
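A minimal sketch of the per-image standardization just described, assuming the mean and deviation are computed over each feature image (the `eps` guard against zero deviation is an added assumption):

```python
import numpy as np

def standardize(u, eps=1e-8):
    # Per-image zero-mean normalization: subtract the mean, divide by the
    # spread, so the centered data follows the distribution more closely.
    mu = u.mean()
    sigma = u.std()
    return (u - mu) / (sigma + eps)

feat = np.array([[1.0, 2.0],
                 [3.0, 4.0]])   # hypothetical 2*2 feature image
z = standardize(feat)
```

Applying this to every extracted feature image yields the standardized feature set used as input to the broad network.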
The image features extracted by the trained variant CNN are used as input to construct the lightweight network structure. According to the construction method of the broad learning system (BLS), the standardized features generate a certain number of feature nodes and enhancement nodes. The number of feature nodes, determined by grid search, is 10, and the number of enhancement nodes is 11000. The image features generate the feature nodes, and Z^E = [Z_1, Z_2, ..., Z_10] is defined; the enhancement nodes are then expressed in terms of Z^E, and the lightweight network structure is denoted by their combination.
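The generation of feature and enhancement nodes can be sketched as follows. The mapping φ is taken here as the identity and the activation ζ as tanh, and the small node counts and dimensions are placeholders rather than the 10/11000 used in the experiment; all of these choices are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_nodes(C, n_feature=3, n_enhance=5, node_dim=4):
    """Random-weight feature nodes Z_r and enhancement nodes H_j of a BLS."""
    n, d = C.shape
    # Feature nodes: Z_r = phi(C W + beta) with phi = identity (assumed)
    Z = [C @ rng.standard_normal((d, node_dim)) + rng.standard_normal(node_dim)
         for _ in range(n_feature)]
    ZE = np.hstack(Z)                      # Z^E = [Z_1, ..., Z_e]
    # Enhancement nodes: H_j = zeta(Z^E W + beta) with zeta = tanh (assumed)
    H = [np.tanh(ZE @ rng.standard_normal((ZE.shape[1], node_dim))
                 + rng.standard_normal(node_dim)) for _ in range(n_enhance)]
    HF = np.hstack(H)                      # H^F = [H_1, ..., H_f]
    return np.hstack([ZE, HF])             # input to the output layer

A = bls_nodes(rng.standard_normal((10, 8)))  # 10 hypothetical feature vectors
```

The concatenated matrix [Z^E | H^F] is then mapped to the class outputs by the trained output weights of the lightweight network.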
The trained model is retained and used to classify the MNIST test set images.
Fig. 4 shows the feature effect maps extracted by the variant CNN on the MNIST data set, specifically the feature maps of the handwritten digit 7 obtained after the first convolutional layer, the first pooling layer, the second convolutional layer and the second pooling layer, respectively.
The image recognition method based on a lightweight network model provided by this embodiment better balances the three requirements of efficiency, resources and precision. The detailed experimental results are shown in Table 1, which lists the performance of the deep neural network (LeNet5), the broad neural network (BLS) and the lightweight network when performing image recognition on the MNIST data set. The results cover precision, training time and testing time.
Table 1

Network model | Precision (%) | Training time (s) | Testing time (s)
---|---|---|---
Deep neural network (LeNet5) | 98.96 | 598.21 | 4.92
Broad neural network (BLS) | 98.85 | 142.91 | 3.67
Lightweight network | 99.27 | 359.22 | 4.11
The experimental data in Table 1 show that the image recognition method based on a lightweight network model provided by this embodiment better balances the three requirements of efficiency, resources and precision, and achieves a better image recognition effect.
Embodiment 3
Corresponding to the above method embodiment, this embodiment further provides an image recognition device based on a lightweight network model; the device described below and the image recognition method based on a lightweight network model described above may be referred to in correspondence with each other.
Referring to Fig. 5, the device comprises the following modules: a target image obtaining module 101, a target image input module 102, a classification and identification module 103, and a target model obtaining module 104;
wherein the target image obtaining module 101 is configured to obtain a target image to be identified;
the target image input module 102 is configured to input the target image into a target lightweight network model, the lightweight network model being constructed from a variant convolutional neural network with the fully connected layer removed and a broad network;
the classification and identification module 103 is configured to classify the target image using the target lightweight network model to obtain a recognition result;
the target model obtaining module 104 comprises:
a model construction unit, configured to construct the variant convolutional neural network without a fully connected layer, obtain the broad network feature nodes and enhancement nodes based on the construction steps of the broad network, and construct the lightweight network model;
a loss function insertion unit, configured to insert a loss function into the variant convolutional neural network;
a training unit, configured to train the variant convolutional neural network using the loss function in combination with the softmax classifier, update the parameters, and obtain the target model.
With the device provided by this embodiment of the present invention, the target image to be identified is obtained and input to the classification and identification module to obtain the recognition result.
Specifically, the target image to be identified is obtained and input into the lightweight network model. The lightweight network model is obtained based on a variant convolutional neural network without a fully connected layer and a broad network. That is, the lightweight network model reduces the order of magnitude of the network parameters through the weight sharing of the convolution operation and the pooling operation, and enhances the features of the broad network (BLS) input data. The classification and identification module then classifies the target image, yielding the recognition result of the target image. Since the lightweight network model used by the classification and identification module is obtained based on the variant convolutional neural network without a fully connected layer and the broad network, the dependence on computer storage resources can be reduced, the training time can be effectively shortened, the image classification precision can be improved, and the three requirements of efficiency, resources and precision are better balanced.
In a specific embodiment of the present invention, the loss function insertion unit is specifically configured to insert a cross-entropy loss function into the variant convolutional neural network.
Embodiment 4
Corresponding to the above method embodiment, this embodiment further provides an image recognition apparatus based on a lightweight network model; the apparatus described below and the image recognition method based on a lightweight network model described above may be referred to in correspondence with each other.
Referring to Fig. 6, the image recognition apparatus based on a lightweight network model comprises:
a memory D1 for storing a computer program;
a processor D2 which, when executing the computer program, implements the steps of the image recognition method based on a lightweight network model of the above method embodiment.
Specifically, Fig. 7 is a schematic diagram of the specific structure of the image recognition apparatus based on a lightweight network model provided in this embodiment. The apparatus may vary considerably due to differences in configuration or performance, and may include one or more processors (central processing units, CPU) 322 (for example, one or more processors), a memory 332, and one or more storage media 330 (for example, one or more mass storage devices) storing application programs 342 or data 344. The memory 332 and the storage medium 330 may be transient or persistent storage. The program stored in the storage medium 330 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the data processing apparatus. Further, the central processing unit 322 may be configured to communicate with the storage medium 330 and execute, on the image recognition apparatus 301 based on the lightweight network model, the series of instruction operations in the storage medium 330.
The image recognition apparatus 301 based on a lightweight network model may also include one or more power supplies 326, one or more wired or wireless network interfaces 350, one or more input/output interfaces 358, and/or one or more operating systems 341, for example Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps of the image recognition method based on a lightweight network model described above may be implemented by the structure of the image recognition apparatus based on a lightweight network model.
Embodiment 5
Corresponding to the above method embodiment, this embodiment further provides a readable storage medium; the readable storage medium described below and the image recognition method based on a lightweight network model described above may be referred to in correspondence with each other.
A computer program is stored on the readable storage medium, and when executed by a processor, the computer program implements the steps of the image recognition method based on a lightweight network model of the above method embodiment.
The readable storage medium may specifically be any readable storage medium capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk.
The above is only a detailed description of specific embodiments of the present invention, and is not a limitation of the present invention. Various replacements, modifications and improvements made by those skilled in the relevant technical field without departing from the principle and scope of the present invention shall all be included in the protection scope of the present invention.
Claims (9)
1. An image recognition method based on a lightweight network model, characterized by comprising the following steps:
S1, obtaining a target image to be identified;
S2, inputting the target image into a trained lightweight network model;
S3, classifying the target image using the trained lightweight network model;
wherein the process of obtaining the lightweight network model comprises the following steps:
S21, constructing a variant convolutional neural network without a fully connected layer according to the construction manner of convolutional neural networks, the variant convolutional neural network comprising one or more network layers, each network layer comprising a convolutional layer and a pooling layer;
S22, obtaining feature images of labeled images to be classified through the variant convolutional neural network, inputting the features into a softmax classifier, classifying the images with the softmax classifier, and updating the weights of the convolutional layers through a loss function according to the classification results and the true values of the labeled images;
S23, extracting the features of the images again using the weight-updated variant convolutional neural network, and standardizing the features;
S24, generating feature nodes and enhancement nodes from the standardized features according to the construction method of the broad network, determining the final numbers of feature nodes and enhancement nodes, and constructing the lightweight network model.
2. The image recognition method based on a lightweight network model according to claim 1, characterized in that the convolution operation of the convolutional layer in step S21 is as follows:
suppose {X_i} is the raw image data set, X_i is the i-th image, i = 1, 2, ..., n; l denotes the network layer number, the size of the convolution filter of the l-th layer is k_l*k_l with depth d_l, and the moving step length of the convolution filter is s_l;
the series of convolution operations performed on the i-th image X_i is then defined layer by layer, where l and l-1 denote the convolutional layer numbers, l being the current layer and l-1 the previous layer: the l-th convolutional layer outputs the image obtained by convolving its filter with the image output after the pooling layer of the (l-1)-th network layer, which for the first layer is the original image X_i; W is the weight of the convolutional layer and b is its bias, both randomly generated, i.e. W^(l) is the weight matrix of the l-th convolutional layer and b^(l) is the bias matrix of the l-th convolutional layer.
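As an illustrative sketch only (not part of the claim language), the per-channel convolution just described can be written in NumPy. 'Valid' convolution without padding is an assumption, and the example image and filter are placeholders:

```python
import numpy as np

def conv2d_single_channel(P, W, b, s=1):
    """Single-channel 'valid' convolution with stride s: slide the k*k
    filter W over the input P, take the weighted sum, and add the bias b."""
    kh, kw = W.shape
    H = (P.shape[0] - kh) // s + 1
    Wd = (P.shape[1] - kw) // s + 1
    C = np.empty((H, Wd))
    for r in range(H):
        for c in range(Wd):
            patch = P[r*s:r*s+kh, c*s:c*s+kw]
            C[r, c] = np.sum(patch * W) + b
    return C

img = np.arange(16, dtype=float).reshape(4, 4)  # hypothetical 4*4 input
k = np.ones((2, 2))                             # hypothetical 2*2 filter
out = conv2d_single_channel(img, k, b=0.0)      # 3*3 output
```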
3. The image recognition method based on a lightweight network model according to claim 1, characterized in that the pooling operation of the pooling layer in step S21 is as follows:
the pooling mode of taking the maximum value is adopted, and the step length of each pooling layer is t_l; the pooling operation on one channel of the feature image obtained after a convolution operation is expanded accordingly, where C is the feature image output after the pooling operation.
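As an illustrative sketch only (not part of the claim language), max pooling with step length t can be written as follows; the pooling window size is assumed equal to the step length, which the claim leaves unspecified:

```python
import numpy as np

def max_pool(C, t=2):
    """Max pooling over non-overlapping t*t windows with stride t."""
    H, W = C.shape[0] // t, C.shape[1] // t
    out = np.empty((H, W))
    for r in range(H):
        for c in range(W):
            # Take the maximum value within each t*t window
            out[r, c] = C[r*t:(r+1)*t, c*t:(c+1)*t].max()
    return out

feat = np.array([[1., 2., 5., 3.],
                 [4., 0., 1., 2.],
                 [7., 8., 0., 1.],
                 [2., 3., 4., 6.]])   # hypothetical 4*4 feature map
pooled = max_pool(feat)               # 2*2 pooled output
```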
4. The image recognition method based on a lightweight network model according to claim 1, characterized in that the detailed process of step S22 is as follows:
the feature image data of the training samples output by the multilayer convolution and pooling operations is expressed as C' = [U'_1, ..., U'_n], where C' is the set of feature images output after the multilayer convolution and pooling operations, n is the number of images, and the vector U'_i is the feature image of the i-th image extracted by the variant CNN, i ∈ [1, n]; the images are to be divided into K classes, and the feature images output by the multilayer convolution and pooling operations are fully connected to K nodes, expressed as:
Y_y = W_Y C' + b_Y, y ∈ [1, K];
where Y_y is the output of the y-th node, W_Y is the weight of the node operation, and b_Y is the bias of the node operation;
after the full connection to K nodes, the operation results are classified with the softmax classifier, whose algorithm is:
S_y = e^{a_y} / Σ_{k=1}^{K} e^{a_k};
where S_y denotes the probability that the image is divided into the y-th class, a_y is the value of the y-th class, a_k is the value of the k-th class, k ∈ [1, K]; a_y is the output Y_y of the y-th fully connected node;
the cross entropy L is taken as the loss function, and based on the cross-entropy loss function the weights of the variant convolutional neural network are updated by the Adam algorithm.
5. The image recognition method based on a lightweight network model according to claim 1, characterized in that the detailed process of step S23 is as follows:
the features of the images are extracted again using the weight-updated variant convolutional neural network and standardized; the detailed standardization process is:
U''_i = (U'_i - μ_i) / σ_i;
where U'_i is the feature image of the i-th image extracted by the trained, weight-updated variant convolutional neural network, μ_i is the mean of the output feature image of the i-th image, σ_i is the variance of the output feature image of the i-th image, and U''_i is the feature image of the i-th image after standardization;
the standardized feature set after standardizing the feature images of all images is expressed as C''.
6. The image recognition method based on a lightweight network model according to claim 1, characterized in that the detailed process of step S24 is as follows:
the feature nodes of the broad network are generated from the standardized features, where Z_r denotes a feature node of the broad network; φ is an arbitrary function; C'' is the feature image set after standardization; the feature nodes have random weight coefficients of appropriate dimension and corresponding biases; and e is the number of feature nodes of the broad network;
define Z^E = [Z_1, Z_2, ..., Z_e];
the enhancement nodes are then generated from Z^E;
define H^F = [H_1, H_2, ..., H_f];
where H_j denotes an enhancement node of the broad network; ζ is the activation function; the enhancement nodes have random weight coefficients of appropriate dimension and corresponding biases; and f is the number of enhancement nodes of the broad network;
the lightweight network structure is generally denoted by the combination of Z^E and H^F.
7. An image recognition device based on a lightweight network model, characterized by comprising a target image obtaining module, a target image input module, a classification and identification module, and a target model obtaining module;
wherein the target image obtaining module is configured to obtain a target image to be identified;
the target image input module is configured to input the target image into a target lightweight network model, the lightweight network model being constructed from a variant convolutional neural network with the fully connected layer removed and a broad network;
the classification and identification module is configured to classify the target image using the target lightweight network model to obtain a recognition result;
the target model obtaining module comprises:
a model construction unit, configured to construct the variant convolutional neural network without a fully connected layer, obtain the broad network feature nodes and enhancement nodes based on the construction steps of the broad network, and construct the lightweight network model;
a loss function insertion unit, configured to insert a loss function into the variant convolutional neural network;
a training unit, configured to train the variant convolutional neural network using the loss function in combination with the softmax classifier, update the parameters, and obtain the target model.
8. An image recognition apparatus based on a lightweight network model, characterized by comprising:
a memory for storing a computer program;
a processor which, when executing the computer program, implements the steps of the image recognition method based on a lightweight network model according to any one of claims 1 to 6.
9. A readable storage medium, characterized in that a computer program is stored on the readable storage medium, and when executed by a processor, the computer program implements the steps of the image recognition method based on a lightweight network model according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910566189.3A CN110288030B (en) | 2019-06-27 | 2019-06-27 | Image identification method, device and equipment based on lightweight network model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110288030A true CN110288030A (en) | 2019-09-27 |
CN110288030B CN110288030B (en) | 2023-04-07 |
Family
ID=68007719
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110837890A (en) * | 2019-10-22 | 2020-02-25 | 西安交通大学 | Weight value fixed-point quantization method for lightweight convolutional neural network |
CN110909796A (en) * | 2019-11-22 | 2020-03-24 | 浪潮电子信息产业股份有限公司 | Image classification method and related device |
CN110929652A (en) * | 2019-11-26 | 2020-03-27 | 天津大学 | Handwritten Chinese character recognition method based on LeNet-5 network model |
CN110956202A (en) * | 2019-11-13 | 2020-04-03 | 重庆大学 | Image training method, system, medium and intelligent device based on distributed learning |
CN111476138A (en) * | 2020-03-31 | 2020-07-31 | 万翼科技有限公司 | Construction method and identification method of building drawing component identification model and related equipment |
CN111614358A (en) * | 2020-04-30 | 2020-09-01 | 北京的卢深视科技有限公司 | Method, system, device and storage medium for feature extraction based on sub-channel quantization |
CN112070100A (en) * | 2020-09-11 | 2020-12-11 | 深圳力维智联技术有限公司 | Image feature recognition method and device based on deep learning model and storage medium |
CN112733585A (en) * | 2019-10-29 | 2021-04-30 | 杭州海康威视数字技术股份有限公司 | Image recognition method |
CN112861896A (en) * | 2019-11-27 | 2021-05-28 | 北京沃东天骏信息技术有限公司 | Image identification method and device |
CN112906829A (en) * | 2021-04-13 | 2021-06-04 | 成都四方伟业软件股份有限公司 | Digital recognition model construction method and device based on Mnist data set |
CN113205177A (en) * | 2021-04-25 | 2021-08-03 | 广西大学 | Electric power terminal identification method based on incremental collaborative attention mobile convolution |
CN113642592A (en) * | 2020-04-27 | 2021-11-12 | 武汉Tcl集团工业研究院有限公司 | Training method of training model, scene recognition method and computer equipment |
CN114363477A (en) * | 2021-12-30 | 2022-04-15 | 上海网达软件股份有限公司 | Method and system for video self-adaptive sharpening based on sliding window weight regression |
CN114444622A (en) * | 2022-04-11 | 2022-05-06 | 中国科学院微电子研究所 | Fruit detection system and method based on neural network model |
CN114781650A (en) * | 2022-04-28 | 2022-07-22 | 北京百度网讯科技有限公司 | Data processing method, device, equipment and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106529578A (en) * | 2016-10-20 | 2017-03-22 | 中山大学 | Vehicle brand model fine identification method and system based on depth learning |
CN107657233A (en) * | 2017-09-28 | 2018-02-02 | 东华大学 | Static sign language real-time identification method based on modified single multi-target detection device |
CN108304821A (en) * | 2018-02-14 | 2018-07-20 | 广东欧珀移动通信有限公司 | Image-recognizing method and device, image acquiring method and equipment, computer equipment and non-volatile computer readable storage medium storing program for executing |
CN108470320A (en) * | 2018-02-24 | 2018-08-31 | 中山大学 | A kind of image stylizing method and system based on CNN |
CN108564555A (en) * | 2018-05-11 | 2018-09-21 | 中北大学 | A kind of digital image noise reduction method based on NSST and CNN |
CN108717680A (en) * | 2018-03-22 | 2018-10-30 | 北京交通大学 | Spatial domain picture steganalysis method based on complete dense connection network |
CN109086806A (en) * | 2018-07-16 | 2018-12-25 | 福州大学 | A kind of IOT portable device visual identity accelerated method based on low resolution, compressed image |
CN109492766A (en) * | 2018-11-07 | 2019-03-19 | 西安交通大学 | A kind of width learning method based on minimum P norm |
US20190124045A1 (en) * | 2017-10-24 | 2019-04-25 | Nec Laboratories America, Inc. | Density estimation network for unsupervised anomaly detection |
CN109886971A (en) * | 2019-01-24 | 2019-06-14 | 西安交通大学 | A kind of image partition method and system based on convolutional neural networks |
Non-Patent Citations (3)
Title |
---|
JUNWEI JIN et al.: "Discriminative graph regularized broad learning system for image recognition", Science China (Information Sciences) *
LI CHUANPENG et al.: "Research on image denoising based on deep convolutional neural networks", Computer Engineering *
JIA CHEN et al.: "Multimodal information fusion based on the broad learning method", CAAI Transactions on Intelligent Systems *
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |