CN104102919B - Image classification method capable of effectively preventing convolutional neural network from being overfit - Google Patents
- Publication number
- CN104102919B (application CN201410333924.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- training
- convolutional neural
- neural networks
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to an image classification method that effectively prevents a convolutional neural network from overfitting. The image classification method comprises the following steps: obtaining an image training set and an image test set; training a convolutional neural network model; and classifying the image test set with the trained convolutional neural network model. Training the convolutional neural network model comprises the following steps: preprocessing and sample amplification of the image data in the image training set to form training samples; forward propagation of the training samples to extract image features; computing the classification probability of each sample in a Softmax classifier; computing the training error from the probabilities yi; back-propagating the training error layer by layer, starting from the last layer of the convolutional neural network; and meanwhile updating the network weight matrix W with stochastic gradient descent (SGD). Compared with the prior art, the invention offers high classification accuracy, a fast convergence rate, and high computational efficiency.
Description
Technical field
The present invention relates to the field of image processing, and more particularly to an image classification method that effectively prevents overfitting in convolutional neural networks.
Background technology
With the wide application of multimedia technology and computer networks, a large amount of image data has appeared on the network. To manage these image files effectively and provide users with better services, automatic recognition of image content has become increasingly important.
With the continuous improvement and development of machine learning methods, deep learning algorithms have received growing attention; the convolutional neural network is an important deep learning algorithm and has become a research focus in speech analysis and image recognition. Convolutional neural networks abandon the full layer-to-layer connectivity of neurons in traditional neural networks; their weight-sharing structure makes them more similar to biological neural networks, reducing the complexity of the network model and the number of weights. This advantage is most apparent when images form the network input: an image can be fed directly into the network, avoiding the complex feature extraction and data reconstruction of traditional recognition algorithms. A convolutional network is a multilayer perceptron specially designed to recognize two-dimensional shapes, and its structure is highly invariant to translation, scaling, tilting, and other deformations.
Image classification techniques based on convolutional neural networks can automatically and effectively extract feature information from images, and the extracted features have excellent expressive power, so the technique has achieved satisfactory experimental results on several image classification problems. Even so, the technique currently has the following defects:
First, because the labeled data in image databases is limited, as the scale of a convolutional neural network grows, the number of weights to be trained also grows; this inevitably causes the neural network to overfit, i.e. the classification accuracy during training is far better than the classification accuracy during testing.
Second, to obtain better feature representations and hence better classification accuracy, some researchers increase the network depth and enlarge the network size. However, this greatly increases the computational complexity, which traditional CPU speeds cannot satisfy.
The content of the invention
The purpose of the present invention is to overcome the defects of the above prior art by providing an image classification method that effectively prevents overfitting in convolutional neural networks, with high classification accuracy, a fast convergence rate, and high computational efficiency.
The purpose of the present invention can be achieved through the following technical solutions:
An image classification method that effectively prevents overfitting in convolutional neural networks, the method running on a GPU, comprising:
Step 1: obtain an image training set and an image test set;
Step 2: train the convolutional neural network model, specifically including the following steps:
a) set the structure of the convolutional neural network and the upper limit N of the number of training iterations, and initialize the neural network weight matrix W; the structure includes the number of layers of the convolutional neural network and the number of feature maps in each layer;
b) obtain image data from the image training set, preprocess it, and perform sample amplification to form training samples;
c) perform forward propagation on the training samples to extract image features; the forward propagation includes the computations of the convolutional layers, the non-linear normalization layers, and the mixed pooling layers;
d) compute the classification probability of each sample in the Softmax classifier:

y_i = exp(s_i) / Σ_{j=1..n} exp(s_j)

where s_i denotes the output value of the i-th neuron of the Softmax classifier, s_i = F·η, F is the image feature vector of a training sample, η is the corresponding weight, and n is the number of classes to be distinguished;
e) compute the training error from the probability y_i:

δ_i = y_i − θ_ik

where θ_ik = 1 when i = k, i denotes the i-th class, and k is the class to which the input sample belongs;
f) back-propagate the training error layer by layer, starting from the last layer of the convolutional neural network, while updating the network weight matrix W with stochastic gradient descent (SGD);
g) judge whether model training is complete; if so, save the convolutional neural network model and the Softmax classifier, then execute Step 3; if not, return to step b);
Step 3: classify the image test set using the trained convolutional neural network model.
In step a) of Step 2, the elements of the initial weight matrix W take values in [-0.01, 0.01].
Step b) of Step 2 is specifically:
b1) for an image whose width equals its height, scale it with the cvResize function of OpenCV so that the scaled picture size is N × N;
b2) for an image whose width and height differ, keep the short side S fixed, intercept the middle S consecutive pixels of the long side to form an S × S image, then repeat step b1) to obtain an N × N image;
b3) compute the sum of the pixel values of all images and divide by the number of images to obtain a mean image; subtract the mean image from each image to obtain the input samples;
b4) perform data amplification on the input samples to form the final training samples.
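Steps b1)–b3) can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions: the nearest-neighbour `resize_nn` is only a stand-in for OpenCV's cvResize, and all helper names are hypothetical.

```python
import numpy as np

def center_square_crop(img):
    """Crop the central S x S square, S being the shorter side (step b2)."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    return img[top:top + s, left:left + s]

def resize_nn(img, n):
    """Nearest-neighbour resize of a square image to n x n
    (a simple stand-in for OpenCV's cvResize, step b1)."""
    s = img.shape[0]
    idx = np.arange(n) * s // n
    return img[idx][:, idx]

def preprocess(images, n=32):
    """Scale every image to n x n, then subtract the mean image (step b3)."""
    scaled = np.stack([resize_nn(center_square_crop(im), n) for im in images])
    mean_image = scaled.mean(axis=0)
    return scaled - mean_image

imgs = [np.random.rand(40, 60), np.random.rand(50, 50)]
samples = preprocess(imgs, n=32)
print(samples.shape)  # (2, 32, 32)
```

Subtracting the mean image centres the training data around zero, which generally helps gradient-descent training converge.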
In step c) of Step 2, the computation of the convolutional layer is specifically:

y_k = max{w_k * x, 0}

where x denotes the output of the previous layer, i.e. the input of the current layer, y_k denotes the output of the k-th feature map, w_k denotes the k-th weight matrix connected to the output of the previous layer, and "*" denotes the two-dimensional inner product;

the computation of the non-linear normalization layer is specifically:

y_kij = x_kij / (1 + α Σ x_k'ij²)^β

where x_kij is the output of the k-th feature map of the previous layer when the non-linear normalization layer is computed, the accumulation is performed at the same position (i, j) over the N feature maps adjacent to the k-th feature map, α and β are preset normalization parameters, and y_kij is the newly generated feature map;

the computation of the mixed pooling layer is specifically:

y_kij = λ · max_{(p,q)∈R_ij} x_kpq + (1 − λ) · (1/|R_ij|) Σ_{(p,q)∈R_ij} x_kpq

where λ is a random parameter taking the value 0 or 1, x_kpq is the output of the k-th feature map of the previous layer when the mixed pooling layer is computed, and R_ij is the region to be down-sampled.
In step g), the criterion for judging whether model training is complete is: the upper limit N on the number of training iterations has been reached.
Compared with the prior art, the present invention has the following advantages:
First, the present invention is the first to use the mixed down-sampling (Mixed Pooling) method when down-sampling in a convolutional neural network, which effectively prevents overfitting of the neural network, ultimately improving classification accuracy, and which has good robustness.
Second, the present invention uses a graphics processing unit (GPU, Graphics Processing Units) to accelerate the computation, so that the convolutional neural network can be computed and can converge quickly when processing large amounts of data.
Third, the recognition accuracy of the present invention is better than the mainstream algorithms on the CIFAR-10, CIFAR-100, and SVHN data sets, with higher computational efficiency.
Brief description of the drawings
Fig. 1 is a schematic diagram of the model training process of the present invention;
Fig. 2 is a schematic diagram of the image classification process of the present invention.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment. The embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation method and specific operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
As shown in Figs. 1-2, an image classification method that effectively prevents overfitting in convolutional neural networks first obtains an image set M from an image database and divides it into a training set Mt and a test set My, then builds a convolutional neural network model from the training set Mt, and finally classifies the test set My with the trained convolutional neural network model.
As shown in Fig. 1, training the convolutional neural network model specifically includes the following steps:
In step S101: set the structure of the convolutional neural network and the upper limit N of the number of training iterations, and initialize the neural network weight matrix W, specifically:
1a) preset the layers of the convolutional neural network and the number of feature maps in each layer according to the scale of the problem; the experiment adopts the structure Input1-Conv64-LRN64-Pooling64-Conv64-LRN64-Pooling64-Softmax (Input denotes the input layer, the number following a name denotes the number of feature maps of that layer, Conv denotes a convolutional layer, LRN denotes a non-linear normalization layer, Pooling denotes a down-sampling layer, and Softmax denotes the final classifier layer);
1b) connect the layers and initialize the neural network weight matrix W; the initialization method is, for each element of W, to randomly generate a floating-point number in [-0.01, 0.01] and assign it.
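The initialization of step 1b) amounts to filling W with uniform random floats in [-0.01, 0.01]; a minimal sketch (the NumPy generator and the seed are illustrative choices, not part of the patent):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility (illustrative)

def init_weights(shape, low=-0.01, high=0.01):
    """Initialize a weight tensor with uniform random floats in [-0.01, 0.01]."""
    return rng.uniform(low, high, size=shape)

W = init_weights((64, 5, 5))  # e.g. 64 kernels of size 5 x 5 (assumed shape)
print(W.min() >= -0.01 and W.max() <= 0.01)  # True
```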
In step S102: obtain image data from the training set Mt, and preprocess and amplify the obtained images:
2a) for an image whose width equals its height, scale it directly with the cvResize function of OpenCV so that the scaled picture is N × N pixels; in this embodiment, N = 32;
2b) for an image whose width and height differ, keep the short side S fixed and intercept the middle S pixels of the long side, forming an S × S image; then repeat step 2a) to obtain a 32 × 32 image;
2c) compute the sum of the pixel values at each position over all images and divide by the number of images to obtain a mean image; finally subtract the mean image from each image to obtain the input samples;
2d) perform data amplification on each 32 × 32 input sample: treat the sample as a 32 × 32 matrix and extract the top-left 24 × 24 elements, the bottom-left 24 × 24 elements, the top-right 24 × 24 elements, the bottom-right 24 × 24 elements, and the centre 24 × 24 elements as five new input samples, then horizontally flip these five to form five more new input samples; thus each original 32 × 32 sample yields ten new 24 × 24 samples after data amplification.
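The ten-crop amplification of step 2d) can be sketched as follows (a minimal NumPy illustration; the helper name `ten_crop` is hypothetical):

```python
import numpy as np

def ten_crop(sample, size=24):
    """From one 32x32 sample, take the four corner crops and the centre crop
    (5 views), then horizontally flip each, giving 10 new 24x24 samples."""
    h, w = sample.shape
    centre = ((h - size) // 2, (w - size) // 2)
    offsets = [(0, 0), (0, w - size), (h - size, 0), (h - size, w - size), centre]
    crops = [sample[r:r + size, c:c + size] for r, c in offsets]
    crops += [np.fliplr(cr) for cr in crops]  # horizontally flipped versions
    return np.stack(crops)

aug = ten_crop(np.arange(32 * 32, dtype=float).reshape(32, 32))
print(aug.shape)  # (10, 24, 24)
```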
In step S103: use the new samples obtained in step S102 as the input layer Input1 of the convolutional neural network and perform forward propagation to extract image features; the forward propagation passes through Conv64-LRN64-Pooling64-Conv64-LRN64-Pooling64, i.e. features are extracted in two convolution stages, as follows:
3a) the convolutional layer (Conv) computation:

y_k = max{w_k * x, 0}

where x denotes the output of the previous layer (i.e. the input of this layer): in the first Conv64 layer it is the output value of the Input1 layer, and in the second Conv64 layer it is the output of the first Pooling64 layer; y_k denotes the output of the k-th feature map of the Conv64 layer (i.e. the k-th output component), w_k denotes the k-th weight matrix connected to the output of the previous layer, and "*" denotes the two-dimensional inner product.
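The computation y_k = max{w_k * x, 0} of step 3a) can be illustrated by a plain "valid" two-dimensional inner product followed by ReLU. This is a sketch for a single kernel and a single input channel; real implementations use optimized GPU kernels.

```python
import numpy as np

def conv_relu(x, w):
    """One output feature map y_k = max(w_k * x, 0): the 2-D inner product of
    the kernel with every window of the input ('valid' mode), then ReLU."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            y[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)  # 2-D inner product
    return np.maximum(y, 0.0)                            # ReLU clips negatives

x = np.array([[1., 2., 0.], [0., 1., 3.], [2., 0., 1.]])
w = np.array([[1., -1.], [0., 1.]])
print(conv_relu(x, w))  # [[0. 5.] [0. 0.]]
```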
3b) the non-linear normalization (LRN) computation:

y_kij = x_kij / (1 + α Σ x_k'ij²)^β

where x_kij is the output of the k-th feature map of the Conv64 layer (i.e. the k-th output component of the previous layer), the accumulation is performed at the same position (i, j) over the 5 feature maps adjacent to the k-th feature map, and α = 0.001 and β = 0.75 are the preset normalization parameters; the newly generated feature map of the LRN layer is denoted y_kij.
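A sketch of step 3b). The patent's original formula image is not reproduced in this text, so the code assumes the standard local-response-normalization form y_kij = x_kij / (1 + α Σ x²)^β over the adjacent feature maps:

```python
import numpy as np

def lrn(x, n=5, alpha=0.001, beta=0.75):
    """Local response normalization across feature maps (standard form,
    assumed here).  For each position (i, j), map k is divided by a power
    of the summed squares over the n adjacent maps, clamped at the edges."""
    k_maps = x.shape[0]
    half = n // 2
    y = np.empty_like(x)
    for k in range(k_maps):
        lo, hi = max(0, k - half), min(k_maps, k + half + 1)
        denom = (1.0 + alpha * np.sum(x[lo:hi] ** 2, axis=0)) ** beta
        y[k] = x[k] / denom
    return y

x = np.ones((8, 4, 4))  # 8 feature maps of size 4 x 4
y = lrn(x)
print(y.shape)  # (8, 4, 4)
```

Because the denominator exceeds 1 whenever neighbouring activity is non-zero, each activation is damped relative to its neighbours, which encourages competition between adjacent feature maps.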
3c) the mixed pooling computation:

y_kij = λ · max_{(p,q)∈R_ij} x_kpq + (1 − λ) · (1/|R_ij|) Σ_{(p,q)∈R_ij} x_kpq

where λ is a random parameter taking the value 0 or 1, x_kpq is the output of the k-th feature map of the previous LRN64 layer (i.e. the k-th output component of the previous layer), and R_ij is the region to be down-sampled; the selected down-sampling region is 3 × 3.
The above three computations are performed in sequence until all convolution stages are complete.
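The mixed pooling of step 3c) — the core of the invention — randomly chooses between max- and average-pooling. A minimal single-feature-map sketch; drawing λ per region is one plausible reading (it could equally be drawn per map or per layer), and the seed is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility (illustrative)

def mixed_pool(x, size=3, stride=3):
    """Mixed pooling: for each down-sampling region R_ij, draw lambda in
    {0, 1} at random and output
        y = lambda * max(R_ij) + (1 - lambda) * mean(R_ij),
    i.e. randomly pick max- or average-pooling for that region."""
    h, w = x.shape
    oh, ow = (h - size) // stride + 1, (w - size) // stride + 1
    y = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            region = x[i * stride:i * stride + size,
                       j * stride:j * stride + size]
            lam = rng.integers(0, 2)  # random 0 or 1
            y[i, j] = lam * region.max() + (1 - lam) * region.mean()
    return y

x = np.arange(36, dtype=float).reshape(6, 6)
print(mixed_pool(x).shape)  # (2, 2)
```

The random choice injects noise into the forward pass much like dropout, which is how the method combats overfitting.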
In step S104: use the image feature vector F obtained in step S103 to compute, in the Softmax classifier, the probability y_i that the sample belongs to each class:

y_i = exp(s_i) / Σ_{j=1..n} exp(s_j)

where s_i denotes the output value of the i-th neuron of the Softmax classifier, obtained as the dot product of the image feature vector F with the corresponding weights, i.e. s_i = F·η, η is the corresponding weight, and n is the number of classes to be distinguished. Suppose there are n classes of pictures in total; then the actual output vector is Y = {y_1, y_2, …, y_n}. Suppose a picture belongs to the i-th class; then the i-th element of its desired output vector is 1 and the rest are 0. From the desired and actual outputs, the error vector δ of the sample can be computed; the i-th training error δ_i is:

δ_i = y_i − θ_ik

where θ_ik = 1 when i = k, i denotes the i-th class, and k is the class to which the input sample belongs.
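Step S104's softmax probabilities and error vector can be sketched as follows. The error formula δ_i = y_i − θ_ik is the standard softmax/cross-entropy gradient, assumed here because the patent's formula images are not reproduced in this text:

```python
import numpy as np

def softmax(s):
    """y_i = exp(s_i) / sum_j exp(s_j): class probabilities from the
    Softmax layer outputs s_i = F . eta."""
    e = np.exp(s - s.max())  # shift for numerical stability
    return e / e.sum()

def training_error(y, true_class):
    """Error vector delta_i = y_i - theta_ik, where theta_ik = 1 only when
    i equals the true class k (standard softmax/cross-entropy gradient)."""
    theta = np.zeros_like(y)
    theta[true_class] = 1.0
    return y - theta

y = softmax(np.array([2.0, 1.0, 0.1]))
delta = training_error(y, true_class=0)
print(round(float(y.sum()), 6))  # 1.0
```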
In step S105: back-propagate the training error layer by layer, starting from the last layer of the convolutional neural network; the error is propagated from the Softmax layer forward through the pooling layers, the non-linear normalization layers, and the convolutional layers, while the network weight matrix W is updated with stochastic gradient descent (SGD).
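The SGD weight update of step S105 reduces, per weight matrix, to a step against the gradient. This is illustrative only; the patent does not specify the learning rate or any momentum term.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """One stochastic-gradient-descent update of the weight matrix W:
    W <- W - lr * dE/dW  (lr is an assumed hyperparameter)."""
    return w - lr * grad

w = np.array([[0.5, -0.5]])
g = np.array([[1.0, -1.0]])   # gradient from back-propagation (made up here)
w_new = sgd_step(w, g, lr=0.1)
print(w_new)
```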
In step S106: after back-propagation is complete, judge whether the number of training iterations has reached the upper limit N set in step S101; if it has, stop training and save the model; if not, return to step S102 and continue training.
In step S107: save the trained model and the Softmax classifier.
As shown in Fig. 2, classifying the test set My with the trained convolutional neural network model specifically comprises:
In step S201: extract test samples from the test set My, preprocess and amplify them, and use the resulting data as the network input;
In step S202: perform the convolutional-layer, non-linear-normalization-layer, and mixed-pooling-layer computations in sequence as in step S103, until all convolution stages are complete;
In step S203: obtain the feature vector of the test sample;
In step S204: use the feature vector to compute, in the Softmax classifier, the probability y_i that the sample belongs to each class; find the largest element of {y_1, y_2, …, y_n}; supposing that element is y_j, the final judgment is that the sample belongs to the j-th class.
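Step S204's decision rule is an argmax over the class probabilities; a one-line sketch:

```python
import numpy as np

def classify(y):
    """Final decision (step S204): assign the sample to class j, the index
    of the largest element of {y_1, ..., y_n}."""
    return int(np.argmax(y))

probs = np.array([0.1, 0.7, 0.2])
print(classify(probs))  # 1
```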
To verify the performance of the invention, this embodiment was tested on three public data sets (CIFAR-10, CIFAR-100, SVHN), and the convolutional neural network using mixed down-sampling was compared with convolutional neural network methods using only ordinary down-sampling. Training and testing followed the experimental protocol of each data set. As can be seen from Tables 1, 2, and 3, even when its training error is relatively large, mixed down-sampling still obtains a test error better than the traditional down-sampling methods, which fully demonstrates that mixed down-sampling plays an important role in preventing overfitting of convolutional neural networks. On the three data sets above, the test errors of the invention are 10.80%, 38.07%, and 3.01% respectively, better than the currently announced mainstream algorithms, with higher recognition rates.
The foregoing is only the preferred embodiment of the present invention and is not intended to limit the invention. The present invention also includes technical schemes formed by combining the above technical features.
Table 1: experimental data on the CIFAR-10 data set
Down-sampling method | Training error | Test error |
---|---|---|
Traditional max down-sampling | 3.01% | 11.36% |
Traditional average down-sampling | 4.52% | 13.75% |
Mixed down-sampling (Mixed Pooling) | 6.25% | 10.80% |
Table 2: experimental data on the CIFAR-100 data set
Down-sampling method | Training error | Test error |
---|---|---|
Traditional max down-sampling | 5.42% | 40.09% |
Traditional average down-sampling | 14.61% | 44.01% |
Mixed down-sampling (Mixed Pooling) | 25.71% | 38.07% |
The experimental data of the SVHN data sets of table 3
Claims (5)
1. An image classification method that effectively prevents overfitting in convolutional neural networks, characterised in that the method runs on a GPU and comprises:
Step 1: obtain an image training set and an image test set;
Step 2: train the convolutional neural network model, specifically including the following steps:
a) set the structure of the convolutional neural network and the upper limit N of the number of training iterations, and initialize the neural network weight matrix W; the structure includes the number of layers of the convolutional neural network and the number of feature maps in each layer;
b) obtain image data from the image training set, preprocess it, and perform sample amplification to form training samples;
c) perform forward propagation on the training samples to extract image features; the forward propagation includes the computations of the convolutional layers, the non-linear normalization layers, and the mixed pooling layers;
d) compute the classification probability of each sample in the Softmax classifier:

y_i = exp(s_i) / Σ_{j=1..n} exp(s_j)

where s_i denotes the output value of the i-th neuron of the Softmax classifier, s_i = F·η, F is the image feature vector of a training sample, η is the corresponding weight, and n is the number of classes to be distinguished;
e) compute the training error from the probability y_i:

δ_i = y_i − θ_ik

where θ_ik = 1 when i = k, i denotes the i-th class, and k is the class to which the input sample belongs;
f) back-propagate the training error layer by layer, starting from the last layer of the convolutional neural network, while updating the network weight matrix W with stochastic gradient descent (SGD);
g) judge whether model training is complete; if so, save the convolutional neural network model and the Softmax classifier, then execute Step 3; if not, return to step b);
Step 3: classify the image test set using the trained convolutional neural network model.
2. An image classification method that effectively prevents overfitting in convolutional neural networks according to claim 1, characterised in that in step a) of Step 2, the elements of the initial weight matrix W take values in [-0.01, 0.01].
3. An image classification method that effectively prevents overfitting in convolutional neural networks according to claim 1, characterised in that step b) of Step 2 is specifically:
b1) for an image whose width equals its height, scale it with the cvResize function of OpenCV so that the scaled picture size is N × N;
b2) for an image whose width and height differ, keep the short side S fixed, intercept the middle S consecutive pixels of the long side to form an S × S image, then repeat step b1) to obtain an N × N image;
b3) compute the sum of the pixel values of all images and divide by the number of images to obtain a mean image; subtract the mean image from each image to obtain the input samples;
b4) perform data amplification on the input samples to form the final training samples.
4. An image classification method that effectively prevents overfitting in convolutional neural networks according to claim 1, characterised in that in step c) of Step 2, the computation of the convolutional layer is specifically:

y_k = max{w_k * x, 0}

where x denotes the output of the previous layer, i.e. the input of the current layer, y_k denotes the output of the k-th feature map, w_k denotes the k-th weight matrix connected to the output of the previous layer, and "*" denotes the two-dimensional inner product;
the computation of the non-linear normalization layer is specifically:

y_kij = x_kij / (1 + α Σ x_k'ij²)^β

where x_kij is the output of the k-th feature map of the previous layer when the non-linear normalization layer is computed, the accumulation is performed at the same position (i, j) over the N feature maps adjacent to the k-th feature map, α and β are preset normalization parameters, and y_kij is the newly generated feature map;
the computation of the mixed pooling layer is specifically:

y_kij = λ · max_{(p,q)∈R_ij} x_kpq + (1 − λ) · (1/|R_ij|) Σ_{(p,q)∈R_ij} x_kpq

where λ is a random parameter taking the value 0 or 1, x_kpq is the output of the k-th feature map of the previous layer when the mixed pooling layer is computed, and R_ij is the region to be down-sampled.
5. An image classification method that effectively prevents overfitting in convolutional neural networks according to claim 1, characterised in that in step g), the criterion for judging whether model training is complete is: the upper limit N on the number of training iterations has been reached.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410333924.3A CN104102919B (en) | 2014-07-14 | 2014-07-14 | Image classification method capable of effectively preventing convolutional neural network from being overfit |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410333924.3A CN104102919B (en) | 2014-07-14 | 2014-07-14 | Image classification method capable of effectively preventing convolutional neural network from being overfit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104102919A CN104102919A (en) | 2014-10-15 |
CN104102919B true CN104102919B (en) | 2017-05-24 |
Family
ID=51671059
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410333924.3A Active CN104102919B (en) | 2014-07-14 | 2014-07-14 | Image classification method capable of effectively preventing convolutional neural network from being overfit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104102919B (en) |
Families Citing this family (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10650508B2 (en) * | 2014-12-03 | 2020-05-12 | Kla-Tencor Corporation | Automatic defect classification without sampling and feature selection |
CN106156807B (en) * | 2015-04-02 | 2020-06-02 | 华中科技大学 | Training method and device of convolutional neural network model |
CN106056529B (en) * | 2015-04-03 | 2020-06-02 | 阿里巴巴集团控股有限公司 | Method and equipment for training convolutional neural network for picture recognition |
CN104850836B (en) * | 2015-05-15 | 2018-04-10 | 浙江大学 | Insect automatic distinguishing method for image based on depth convolutional neural networks |
US10614339B2 (en) | 2015-07-29 | 2020-04-07 | Nokia Technologies Oy | Object detection with neural network |
CN105117739A (en) * | 2015-07-29 | 2015-12-02 | 南京信息工程大学 | Clothes classifying method based on convolutional neural network |
CN105117330B (en) * | 2015-08-07 | 2018-04-03 | 百度在线网络技术(北京)有限公司 | CNN code test methods and device |
CN105184313B (en) * | 2015-08-24 | 2019-04-19 | 小米科技有限责任公司 | Disaggregated model construction method and device |
CN106485259B (en) * | 2015-08-26 | 2019-11-15 | 华东师范大学 | A kind of image classification method based on high constraint high dispersive principal component analysis network |
CN105426930B (en) * | 2015-11-09 | 2018-11-02 | 国网冀北电力有限公司信息通信分公司 | A kind of substation's attribute dividing method based on convolutional neural networks |
CN105426908B (en) * | 2015-11-09 | 2018-11-02 | 国网冀北电力有限公司信息通信分公司 | A kind of substation's attributive classification method based on convolutional neural networks |
CN105512681A (en) * | 2015-12-07 | 2016-04-20 | 北京信息科技大学 | Method and system for acquiring target category picture |
CN106874296B (en) * | 2015-12-14 | 2021-06-04 | 阿里巴巴集团控股有限公司 | Method and device for identifying style of commodity |
CN106874924B (en) * | 2015-12-14 | 2021-01-29 | 阿里巴巴集团控股有限公司 | Picture style identification method and device |
CN106875203A (en) * | 2015-12-14 | 2017-06-20 | 阿里巴巴集团控股有限公司 | A kind of method and device of the style information for determining commodity picture |
CN107220641B (en) * | 2016-03-22 | 2020-06-26 | 华南理工大学 | Multi-language text classification method based on deep learning |
WO2017173605A1 (en) * | 2016-04-06 | 2017-10-12 | Xiaogang Wang | Method and system for person recognition |
CN107341547B (en) * | 2016-04-29 | 2021-04-20 | 中科寒武纪科技股份有限公司 | Apparatus and method for performing convolutional neural network training |
CN107346448B (en) | 2016-05-06 | 2021-12-21 | 富士通株式会社 | Deep neural network-based recognition device, training device and method |
CN105957086B (en) * | 2016-05-09 | 2019-03-26 | 西北工业大学 | A kind of method for detecting change of remote sensing image based on optimization neural network model |
CN106023154B (en) * | 2016-05-09 | 2019-03-29 | 西北工业大学 | Multidate SAR image change detection based on binary channels convolutional neural networks |
CN107622272A (en) * | 2016-07-13 | 2018-01-23 | 华为技术有限公司 | A kind of image classification method and device |
CN106250931A (en) * | 2016-08-03 | 2016-12-21 | 武汉大学 | A kind of high-definition picture scene classification method based on random convolutional neural networks |
CN106297297B (en) * | 2016-11-03 | 2018-11-20 | 成都通甲优博科技有限责任公司 | Traffic jam judging method based on deep learning |
CN106709421B (en) * | 2016-11-16 | 2020-03-31 | 广西师范大学 | Cell image identification and classification method based on transform domain features and CNN |
CN106686472B (en) * | 2016-12-29 | 2019-04-26 | 华中科技大学 | A kind of high frame-rate video generation method and system based on deep learning |
CN106682697B (en) * | 2016-12-29 | 2020-04-14 | 华中科技大学 | End-to-end object detection method based on convolutional neural network |
CN106778910B (en) * | 2017-01-12 | 2020-06-16 | 张亮 | Deep learning system and method based on local training |
US10546242B2 (en) | 2017-03-03 | 2020-01-28 | General Electric Company | Image analysis neural network systems |
CN107229968B (en) * | 2017-05-24 | 2021-06-29 | 北京小米移动软件有限公司 | Gradient parameter determination method, gradient parameter determination device and computer-readable storage medium |
CN107067043B (en) * | 2017-05-25 | 2020-07-24 | 哈尔滨工业大学 | Crop disease and insect pest detection method |
CN107358176A (en) * | 2017-06-26 | 2017-11-17 | 武汉大学 | Sorting technique based on high score remote sensing image area information and convolutional neural networks |
CN107316066B (en) * | 2017-07-28 | 2021-01-01 | 北京工商大学 | Image classification method and system based on multi-channel convolutional neural network |
TWI647658B (en) * | 2017-09-29 | 2019-01-11 | 樂達創意科技有限公司 | Device, system and method for automatically identifying image features |
CN109685756A (en) * | 2017-10-16 | 2019-04-26 | 乐达创意科技有限公司 | Image feature automatic identifier, system and method |
CN109753978B (en) * | 2017-11-01 | 2023-02-17 | 腾讯科技(深圳)有限公司 | Image classification method, device and computer readable storage medium |
CN108009638A (en) * | 2017-11-23 | 2018-05-08 | 深圳市深网视界科技有限公司 | A kind of training method of neural network model, electronic equipment and storage medium |
CN108596206A (en) * | 2018-03-21 | 2018-09-28 | 杭州电子科技大学 | Texture image classification method based on multiple dimensioned multi-direction spatial coherence modeling |
CN110147873B (en) * | 2018-05-18 | 2020-02-18 | 中科寒武纪科技股份有限公司 | Convolutional neural network processor and training method |
CN109325514A (en) * | 2018-08-02 | 2019-02-12 | 成都信息工程大学 | Image classification method based on the simple learning framework for improving CNN |
CN111274422A (en) * | 2018-12-04 | 2020-06-12 | 北京嘀嘀无限科技发展有限公司 | Model training method, image feature extraction method and device and electronic equipment |
CN110033035A (en) * | 2019-04-04 | 2019-07-19 | 武汉精立电子技术有限公司 | A kind of AOI defect classification method and device based on intensified learning |
CN110222733B (en) * | 2019-05-17 | 2021-05-11 | 嘉迈科技(海南)有限公司 | High-precision multi-order neural network classification method and system |
CN110490842B (en) * | 2019-07-22 | 2023-07-04 | 同济大学 | Strip steel surface defect detection method based on deep learning |
CN110599496A (en) * | 2019-07-30 | 2019-12-20 | 浙江工业大学 | Sun shadow displacement positioning method based on deep learning |
CN112182214B (en) * | 2020-09-27 | 2024-03-19 | 中国建设银行股份有限公司 | Data classification method, device, equipment and medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102622585A (en) * | 2012-03-06 | 2012-08-01 | 同济大学 | Back propagation (BP) neural network face recognition method based on local feature Gabor wavelets |
CN103914711A (en) * | 2014-03-26 | 2014-07-09 | 中国科学院计算技术研究所 | Improved top speed learning model and method for classifying modes of improved top speed learning model |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7146050B2 (en) * | 2002-07-19 | 2006-12-05 | Intel Corporation | Facial classification of static images using support vector machines |
- 2014-07-14: Application CN201410333924.3A filed in China; granted as patent CN104102919B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN104102919A (en) | 2014-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104102919B (en) | | Image classification method capable of effectively preventing convolutional neural network overfitting |
WO2021134871A1 (en) | | Forensics method for synthesized face images based on local binary patterns and deep learning |
CN109345507B (en) | | Dam image crack detection method based on transfer learning |
CN105046277B (en) | | Robust learning method for feature significance in image quality evaluation |
CN107679477A (en) | | Face depth and surface normal prediction method based on dilated convolutional neural networks |
CN105657402B (en) | | Depth map restoration method |
CN108345911A (en) | | Steel plate surface defect detection method based on multi-level convolutional neural network features |
CN106845529A (en) | | Image feature recognition method based on multi-view convolutional neural networks |
CN108510012A (en) | | Fast target detection method based on multi-scale feature maps |
CN107341506A (en) | | Image emotion classification method based on multi-aspect deep learning representations |
CN108573491A (en) | | Three-dimensional ultrasound image segmentation method based on machine learning |
CN106446942A (en) | | Crop disease identification method based on incremental learning |
CN107492095A (en) | | Pulmonary nodule detection method for medical images based on deep learning |
CN107622272A (en) | | Image classification method and device |
CN107506761A (en) | | Brain image segmentation method and system based on saliency-query learning convolutional neural networks |
CN106022273A (en) | | Handwriting recognition system using a BP neural network with a dynamic sample selection strategy |
CN107480649A (en) | | Fingerprint pore extraction method based on fully convolutional neural networks |
CN105528638A (en) | | Method using grey relational analysis to determine the number of hidden-layer feature maps in a convolutional neural network |
CN108229580A (en) | | Diabetic retinopathy feature ranking device for fundus images based on attention mechanism and feature fusion |
CN107229942A (en) | | Fast convolutional neural network classification method based on multiple classifiers |
CN106780434A (en) | | Underwater image visual quality evaluation method |
CN107330480A (en) | | Computer recognition method for handwritten characters |
CN106682649A (en) | | Vehicle type recognition method based on deep learning |
CN108629369A (en) | | Automatic identification method for visible urine sediment components based on Trimmed SSD |
CN109558902A (en) | | Fast target detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2023-05-17
Address after: Unit 1001, 369 Weining Road, Changning District, Shanghai, 200336 (actual floor: 9th)
Patentee after: DEEPBLUE TECHNOLOGY (SHANGHAI) Co.,Ltd.
Address before: Siping Road 1239, Yangpu District, Shanghai, 200092
Patentee before: TONGJI University