CN109978069A - The method for reducing ResNeXt model over-fitting in picture classification - Google Patents

The method for reducing ResNeXt model over-fitting in picture classification

Info

Publication number
CN109978069A
CN109978069A (application CN201910263146.8A; granted as CN109978069B)
Authority
CN
China
Prior art keywords
network
resnext
cropout
feature
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910263146.8A
Other languages
Chinese (zh)
Other versions
CN109978069B (en)
Inventor
路通 (Lu Tong)
侯文博 (Hou Wenbo)
王文海 (Wang Wenhai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201910263146.8A priority Critical patent/CN109978069B/en
Publication of CN109978069A publication Critical patent/CN109978069A/en
Application granted granted Critical
Publication of CN109978069B publication Critical patent/CN109978069B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for reducing over-fitting of the ResNeXt model in image classification, comprising the following steps: step 1, pre-process the training images of a public data set; step 2, build a network model based on the ResNeXt network and modify the ResNeXt network with the Cropout method; step 3, train the modified ResNeXt network with stochastic gradient descent to obtain a trained network model; step 4, input a given image to be classified and classify it with the network model trained in step 3 to obtain the final result.

Description

Method for reducing overfitting phenomenon of ResNeXt model in image classification
Technical Field
The invention relates to the technical field of deep learning, in particular to a method for reducing an overfitting phenomenon of a ResNeXt model in image classification.
Background
In recent years, deep neural networks have played a major role in multimedia research fields such as image classification. A general problem, however, is how to make the training of deep neural networks more stable. To address this and further improve the performance of neural networks, different rules are usually designed to constrain the network; the most common techniques are Batch Normalization (BN) and Dropout (a regularization method for deep artificial neural networks that, during learning, randomly zeroes part of the hidden-layer weights or outputs to reduce the co-dependency between nodes and thereby lower the structural risk of the network). Over-fitting nevertheless remains a problem for deep networks and can make the generalization ability of a deep model very poor. In practical multimedia applications the over-fitting phenomenon is even more serious, because the large amount of data required to train a deep network is hard to obtain and manual labeling is too expensive.
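For context, the Dropout baseline mentioned above can be sketched in a few lines. This is a generic illustration of inverted dropout, not part of the patented method; the function name and the rate p = 0.5 are chosen only for the example.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each activation with probability p during
    training and rescale the survivors by 1/(1-p), so that no scaling is
    needed at test time."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1-p
    return x * mask / (1.0 - p)
```

At test time the function returns its input unchanged, which is exactly the property Cropout also aims for: an identical network structure during training and testing.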
Disclosure of Invention
To solve the over-fitting problem that still exists in image classification in the prior art, the invention provides a new method for reducing over-fitting in image classification tasks, built on the ResNeXt network model. The method is named Cropout (a name coined in this invention; it exists only in English).
The invention specifically discloses a method for reducing over-fitting of the ResNeXt model in image classification, comprising the following steps:
Step 1: pre-process the training images of a public data set;
Step 2: build a network model based on the ResNeXt network and modify the ResNeXt network with the Cropout method;
Step 3: train the modified ResNeXt network with stochastic gradient descent to obtain a trained network model;
Step 4: input a given image to be classified and classify it with the network model trained in Step 3 to obtain the final classification result.
Step 1 comprises: performing common data-augmentation operations on the training images of the public data set, such as random cropping, horizontal flipping, and random scaling. Specifically, a training image is randomly scaled by a factor of 0.8, 0.9, 1.1, or 1.2; randomly flipped horizontally or randomly rotated by angles such as -30°, -15°, or 30°; and finally a 32 × 32 sample is randomly cropped from the image and used as the final training image.
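The flip-and-crop part of step 1 can be sketched in NumPy as below. This is only an illustrative sketch: random scaling and rotation need an interpolation routine and are omitted, and the function name and parameters are assumptions, not the patent's implementation.

```python
import numpy as np

def augment(img, crop_size=32, rng=None):
    """Random horizontal flip, then a random crop_size x crop_size crop
    from an H x W x C image (H, W >= crop_size).  Sketch of the flip and
    crop steps of the patent's data augmentation."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                 # random horizontal flip
        img = img[:, ::-1, :]
    top = int(rng.integers(0, img.shape[0] - crop_size + 1))
    left = int(rng.integers(0, img.shape[1] - crop_size + 1))
    return img[top:top + crop_size, left:left + crop_size, :]
```

For a 32 × 32 CIFAR image the crop is only non-trivial after the image has been scaled up (e.g. by the 1.1 or 1.2 factors mentioned above).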
Step 2 comprises the following steps:
Step 2-1: following the method in the literature, extract features from the training image with the convolutional part of a ResNeXt network of cardinality G, obtaining G transformation paths after the grouped convolution. Denote the feature map of a transformation path by x, of size H × W, where H and W are respectively the height and width of the feature map;
step 2-2, the Cropout method is to bind a random clipping operation to each conversion path randomly, and specifically includes: filling k zero elements in the feature map x along each edge, expanding the feature map x from original H × W to a feature map y with a size of (H + k) x (W + k), randomly cutting out a feature map x' with a size of H × W on the expanded feature map y, and defining the operation of randomly cutting out after supplementing k zero elements on the feature map x as pkThen the random clipping transformation on the feature map x can be represented by the following formula:
x′=Ρk(x),
wherein x' is a feature map after random clipping transformation.
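A minimal NumPy sketch of the transform x′ = P_k(x) follows, for a single-channel H × W map (in the network it would act on each path's C × H × W tensor). How the k zeros are distributed over the two edges of each dimension is an assumption here, since the text only fixes the padded size (H + k) × (W + k); this sketch splits them as evenly as possible.

```python
import numpy as np

def P_k(x, k, offsets=None, rng=None):
    """Sketch of the crop transform x' = P_k(x): pad the H x W feature
    map with k zeros per dimension (split over the two edges, an
    assumption), giving a (H+k) x (W+k) map y, then cut an H x W window
    at a (top, left) offset in [0, k].  Passing `offsets` makes the crop
    deterministic, as when it is frozen at network initialization."""
    if k == 0:
        return x
    h, w = x.shape
    if offsets is None:
        rng = rng or np.random.default_rng()
        offsets = (int(rng.integers(0, k + 1)), int(rng.integers(0, k + 1)))
    pads = (k // 2, k - k // 2)
    y = np.pad(x, (pads, pads))         # feature map y of size (H+k) x (W+k)
    top, left = offsets
    return y[top:top + h, left:left + w]
```

Note that the output always has the same H × W size as the input, which is why Cropout changes neither the size nor the depth of the original network.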
The Cropout method builds on the aggregated transformation of the ResNeXt network (usually implemented as a grouped convolution, i.e., the grouped convolution of step 2-1). The original aggregated transformation of the ResNeXt network is expressed as

x̃ = Σ_{i=1..G} T_i(x),

where T_i is, in effect, a convolution function that maps the feature map x into a low-dimensional vector space, Σ denotes the splicing (concatenation) operation, G is the number of transformation paths of ResNeXt, i indexes the i-th transformation path, and x̃ is the feature map after the aggregated transformation.
Since all the transformation paths share the same topology, the Cropout method proposed by the invention slightly breaks the homogeneous form of the aggregated transformation. The aggregated transformation modified by the Cropout method can be expressed as

x̃′ = Σ_{i=1..G} P_k^(i)(T_i(x)),

where P_k^(i) is the random cropping operation bound to the i-th transformation path and x̃′ is the new feature map after the Cropout-modified aggregated transformation;
in the Cropout method, random clipping operation bound on each conversion path is only constructed during network initialization, and then the binding mode is kept unchanged in the training and testing processes of the network.
Step 2-3: the G feature maps x′ on the aggregated transformation paths modified by the method of the invention are spliced together by a concatenation operation to form a new feature map, which serves as the input of the next layer of the ResNeXt network;
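Steps 2-2 and 2-3 together can be sketched as follows. The class name is hypothetical, the path transforms are taken as the already-computed per-path feature maps rather than real grouped convolutions, and the even split of the k padding zeros is the same assumption as above; the point of the sketch is that each path's crop offsets are drawn once at construction and then frozen for training and testing alike.

```python
import numpy as np

class CropoutAggregation:
    """Sketch of the Cropout-modified aggregated transformation: one
    fixed random crop per transformation path, then channel-wise
    concatenation of the G cropped path outputs."""
    def __init__(self, G=8, k=1, rng=None):
        rng = rng or np.random.default_rng()
        self.k = k
        # offsets sampled once at init and kept fixed thereafter
        self.offsets = [tuple(int(v) for v in rng.integers(0, k + 1, size=2))
                        for _ in range(G)]

    def _crop(self, x, offsets):
        """Pad a (C, H, W) map to (C, H+k, W+k) and cut an H x W window."""
        h, w = x.shape[-2:]
        pads = (self.k // 2, self.k - self.k // 2)
        y = np.pad(x, ((0, 0), pads, pads))
        top, left = offsets
        return y[:, top:top + h, left:left + w]

    def __call__(self, paths):
        """paths: list of G per-path feature maps, each of shape (C, H, W)."""
        cropped = [self._crop(x, off) for x, off in zip(paths, self.offsets)]
        return np.concatenate(cropped, axis=0)   # splicing operation
```

Because the offsets live in `self.offsets`, repeated calls produce identical results, mirroring the patent's requirement that the network structure be the same at training and test time.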
compared with the prior art, the method provided by the invention has the following advantages:
the overfitting phenomenon of the ResNeXt network in the picture classification task is effectively reduced;
the invention is very easy to realize on the premise of not changing the size and the depth of the original network.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is an overall architecture diagram of the present invention;
Fig. 2a shows the design of the bottleneck unit of ResNeXt without grouped convolution.
Fig. 2b shows the design of the bottleneck unit of ResNeXt with grouped convolution.
Fig. 3 shows sample images from the public data set CIFAR-10.
Detailed Description
Example 1
The invention is further explained below using the public data sets CIFAR-10 and CIFAR-100 as examples, in conjunction with the drawings and the embodiments.
The data set CIFAR-10 consists of 60,000 color images of size 32 × 32 in 10 classes, with 6,000 images per class; the whole data set comprises 50,000 training images and 10,000 test images. The data set CIFAR-100 consists of color images in 100 classes, with 600 images per class, likewise divided into 50,000 training images and 10,000 test images. Sample images from the CIFAR-10 data set are shown in Fig. 3.
Step 1: pre-process the 50,000 training images of each of the two public data sets CIFAR-10 and CIFAR-100, applying common data-augmentation operations such as random cropping, horizontal flipping, and random scaling. Specifically, a training image is first randomly scaled by a factor of 0.8, 0.9, 1.1, or 1.2; then randomly flipped horizontally or randomly rotated by angles such as -30°, -15°, or 30°; and finally a 32 × 32 sample is randomly cropped from the image as the final training image.
Step 2: build the network model. The PyTorch implementation of the ResNeXt network at https://github.com/prlz77/ResNeXt.pytorch is used as the example model: a ResNeXt-29 network with cardinality 8 and bottleneck width 64, written ResNeXt-29, 8 × 64d. The Cropout method of the invention is applied to this network as follows:
First, following the method of "Aggregated Residual Transformations for Deep Neural Networks", features are extracted from a training image with the convolutional part of the ResNeXt-29, 8 × 64d network, giving 8 transformation paths after the grouped convolution; the feature map of a transformation path is denoted x, of size H × W;
then a random cropping operation is randomly bound to each transformation path: specifically, the feature map x is padded with k zero elements along each edge, expanding it from H × W to a feature map y of size (H + k) × (W + k);
finally, a feature map x′ of size H × W is randomly cropped from the expanded feature map y.
the present invention defines the random clipping operation with the above maximum zero element padding number k as pkTherefore, the random clipping transformation on the feature map x can be expressed by the following formula:
x′=Ρk(x),
wherein x' is a feature map after random clipping transformation.
Cropout is designed primarily around the aggregated transformation of ResNeXt (usually implemented as a grouped convolution), which can be written as

x̃ = Σ_{i=1..G} T_i(x),

where T_i is, in effect, a convolution function that maps the feature map x into a low-dimensional vector space, Σ denotes the splicing (concatenation) operation, G is the number of transformation paths of ResNeXt, i indexes the i-th transformation path, and x̃ is the feature map after the aggregated transformation.
Since all the transformation paths share the same topology, the Cropout method proposed by the invention slightly breaks the homogeneous form of the aggregated transformation; the Cropout-modified aggregated transformation can be expressed as

x̃′ = Σ_{i=1..G} P_k^(i)(T_i(x)),

where P_k^(i) is the random cropping operation bound to the i-th transformation path and x̃′ is the resulting new feature map.
Fig. 1 illustrates the idea of Cropout. In the design of the invention, the cropping operations are randomly constructed during the network initialization stage, and the binding between cropping operations and transformation paths is fixed once the network has been initialized. The network structure at training time and at test time is therefore identical.
The details of the modified model are shown in Table 1, where a hyper-parameter P = {p0, p1, p2} is designed for Cropout. Repeated verification shows that with the Cropout hyper-parameter set to P = {1, 1, 1} the model performs best on the CIFAR-10 image classification task, and with P = {0, 1, 0} it performs best on the CIFAR-100 image classification task.
TABLE 1
Figs. 2a and 2b show the details of the ResNeXt bottleneck design as modified by the Cropout method; because the ResNeXt network adopts a bottleneck design, the Cropout method is implemented on each transformation path. As shown in Fig. 2a, after the feature map of the previous layer passes through the grouped convolution with 8 groups, the random cropping takes place after the 1 × 1 convolution layer and before the 3 × 3 convolution layer of each stage; after the 3 × 3 convolution layer, the feature maps of the 8 transformation paths are joined by the splicing operation (the "concatenate" operation in the figure) into a new feature map that serves as the input of the next layer of the ResNeXt network. The structure of Fig. 2b is more efficient than that of Fig. 2a thanks to the grouped convolution, and is almost identical to Fig. 2a except for the order of the 3 × 3 convolution and Cropout, so the structure of Fig. 2b is used in practice.
Step 3: train the network model. Using the images of the two data sets augmented in step 1 as training data, the ResNeXt-29, 8 × 64d model modified in step 2 is trained in a supervised manner with stochastic gradient descent, yielding one trained model per data set, denoted R1 and R2 respectively. Typical training-parameter settings are listed in Table 2:
TABLE 2
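A full ResNeXt training run is beyond a short example, but the optimizer of step 3 can be sketched as a generic SGD-with-momentum update, demonstrated on a toy least-squares problem. The hyper-parameters below are illustrative only, not the values of Table 2.

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.05, momentum=0.9, weight_decay=0.0):
    """One stochastic-gradient-descent update with momentum and optional
    L2 weight decay, the kind of optimizer commonly used for CIFAR models."""
    grad = grad + weight_decay * w
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# toy least-squares problem to show the loss decreasing under SGD
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
true_w = rng.normal(size=5)
y = X @ true_w
w, v = np.zeros(5), np.zeros(5)
losses = []
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w, v = sgd_step(w, grad, v)
    losses.append(float(np.mean((X @ w - y) ** 2)))
```

The same update rule, applied to the cross-entropy gradient of the Cropout-modified network, is what step 3 describes.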
Step 4: image classification. A given image to be classified (any one of the 10,000 test images of CIFAR-10 or CIFAR-100) is classified with the trained network model corresponding to its data set, R1 or R2 from step 3, to obtain the final classification result. After all the test images of the two data sets are classified, the classification accuracy is computed for each data set, giving two results:
(1) with the Cropout parameter P = {1, 1, 1}, the classification error rate on CIFAR-10 is 3.38%, 0.27% lower than the error rate of the model without the Cropout modification;
(2) with the Cropout parameter P = {0, 1, 0}, the classification error rate on CIFAR-100 is 16.89%, 0.88% lower than the error rate of the model without the Cropout modification.
Given how low classification error rates already are today, this further reduction demonstrates that the method of the invention indeed alleviates the over-fitting of ResNeXt in image classification tasks.
The invention provides a method for reducing over-fitting of the ResNeXt model in image classification; there are many ways to implement this technical scheme, and the above is only a preferred embodiment of the invention. It should be noted that a person skilled in the art can make several improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as within the protection scope of the invention. All components not specified in this embodiment can be realized with the prior art.

Claims (3)

1. A method for reducing over-fitting of the ResNeXt model in image classification, characterized by comprising the following steps:
Step 1: pre-process the training images of a public data set;
Step 2: build a network model based on the ResNeXt network and modify the ResNeXt network with the Cropout method;
Step 3: train the modified ResNeXt network with stochastic gradient descent to obtain a trained network model;
Step 4: input a given image to be classified and classify it with the network model trained in Step 3 to obtain the final classification result.
2. The method of claim 1, wherein step 1 comprises: performing data-augmentation operations, including random cropping, horizontal flipping and random scaling, on the training images of the public data set.
3. The method of claim 2, wherein step 2 comprises the following steps:
Step 2-1: extract features from the training image with the convolutional part of a ResNeXt network of cardinality G, obtaining G transformation paths after the grouped convolution; denote the feature map of a transformation path by x, of size H × W, where H and W are respectively the height and width of the feature map;
Step 2-2: the Cropout method randomly binds a random cropping operation to each transformation path. Specifically: pad the feature map x with k zero elements along each edge, expanding it from H × W to a feature map y of size (H + k) × (W + k); then randomly crop a feature map x′ of size H × W from the expanded feature map y. Defining the operation of padding k zero elements and then randomly cropping as P_k, the random cropping transformation of the feature map x is expressed as

x′ = P_k(x),

where x′ is the feature map after the random cropping transformation;
the Cropout method comprises aggregation transformation based on ResNeXt network, and the original aggregation transformation of the ResNeXt network is represented by the following formula:
wherein,for a convolution function mapping the feature map x to a low-dimensional vector spaceThe number, ∑ is the splicing operation, G is the number of conversion paths of resenext, i represents the ith conversion path,is a feature map after polymerization transformation;
the polymerization transformation modified via the Cropout method is then expressed as:
whereinThe new characteristic diagram after the polymerization transformation modified by the Cropout method;
and 2-3, synthesizing the characteristic diagrams x' on the aggregation switching paths modified by the Cropout method together through splicing operation to form a new characteristic diagram as input data of a next layer network of ResNeXt.
CN201910263146.8A 2019-04-02 2019-04-02 Method for reducing overfitting phenomenon of ResNeXt model in image classification Active CN109978069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910263146.8A CN109978069B (en) 2019-04-02 2019-04-02 Method for reducing overfitting phenomenon of ResNeXt model in image classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910263146.8A CN109978069B (en) 2019-04-02 2019-04-02 Method for reducing overfitting phenomenon of ResNeXt model in image classification

Publications (2)

Publication Number Publication Date
CN109978069A true CN109978069A (en) 2019-07-05
CN109978069B CN109978069B (en) 2020-10-09

Family

ID=67082485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910263146.8A Active CN109978069B (en) 2019-04-02 2019-04-02 Method for reducing overfitting phenomenon of ResNeXt model in image classification

Country Status (1)

Country Link
CN (1) CN109978069B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110348537A (en) * 2019-07-18 2019-10-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110522440A (en) * 2019-08-12 2019-12-03 广州视源电子科技股份有限公司 Electrocardiosignal recognition device based on grouping convolution neural network
CN112598045A (en) * 2020-12-17 2021-04-02 中国工商银行股份有限公司 Method for training neural network, image recognition method and image recognition device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7734058B1 (en) * 2005-08-24 2010-06-08 Qurio Holding, Inc. Identifying, generating, and storing cropping information for multiple crops of a digital image
US9311523B1 (en) * 2015-07-29 2016-04-12 Stradvision Korea, Inc. Method and apparatus for supporting object recognition
CN106157307A (en) * 2016-06-27 2016-11-23 浙江工商大学 A kind of monocular image depth estimation method based on multiple dimensioned CNN and continuous CRF
CN106778701A (en) * 2017-01-20 2017-05-31 福州大学 A kind of fruits and vegetables image-recognizing method of the convolutional neural networks of addition Dropout
CN107563495A (en) * 2017-08-04 2018-01-09 深圳互连科技有限公司 Embedded low-power consumption convolutional neural networks method
CN108510004A (en) * 2018-04-04 2018-09-07 深圳大学 A kind of cell sorting method and system based on depth residual error network
CN108629288A (en) * 2018-04-09 2018-10-09 华中科技大学 A kind of gesture identification model training method, gesture identification method and system
CN108985386A (en) * 2018-08-07 2018-12-11 北京旷视科技有限公司 Obtain method, image processing method and the corresponding intrument of image processing model
CN109063719A (en) * 2018-04-23 2018-12-21 湖北工业大学 A kind of image classification method of co-ordinative construction similitude and category information
CN109087375A (en) * 2018-06-22 2018-12-25 华东师范大学 Image cavity fill method based on deep learning
CN109472352A (en) * 2018-11-29 2019-03-15 湘潭大学 A kind of deep neural network model method of cutting out based on characteristic pattern statistical nature

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7734058B1 (en) * 2005-08-24 2010-06-08 Qurio Holding, Inc. Identifying, generating, and storing cropping information for multiple crops of a digital image
US9311523B1 (en) * 2015-07-29 2016-04-12 Stradvision Korea, Inc. Method and apparatus for supporting object recognition
CN106157307A (en) * 2016-06-27 2016-11-23 浙江工商大学 A kind of monocular image depth estimation method based on multiple dimensioned CNN and continuous CRF
CN106778701A (en) * 2017-01-20 2017-05-31 福州大学 A kind of fruits and vegetables image-recognizing method of the convolutional neural networks of addition Dropout
CN107563495A (en) * 2017-08-04 2018-01-09 深圳互连科技有限公司 Embedded low-power consumption convolutional neural networks method
CN108510004A (en) * 2018-04-04 2018-09-07 深圳大学 A kind of cell sorting method and system based on depth residual error network
CN108629288A (en) * 2018-04-09 2018-10-09 华中科技大学 A kind of gesture identification model training method, gesture identification method and system
CN109063719A (en) * 2018-04-23 2018-12-21 湖北工业大学 A kind of image classification method of co-ordinative construction similitude and category information
CN109087375A (en) * 2018-06-22 2018-12-25 华东师范大学 Image cavity fill method based on deep learning
CN108985386A (en) * 2018-08-07 2018-12-11 北京旷视科技有限公司 Obtain method, image processing method and the corresponding intrument of image processing model
CN109472352A (en) * 2018-11-29 2019-03-15 湘潭大学 A kind of deep neural network model method of cutting out based on characteristic pattern statistical nature

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHUNLEI ZHANG,KAZUHITO KOISHIDA: "END-TO-END TEXT-INDEPENDENT SPEAKER VERIFICATION WITH FLEXIBILITY IN UTTERANCE DURATION", 《2017 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU)》 *
KENSHO HARA, HIROKATSU KATAOKA, YUTAKA SATOH: "Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
RYO TAKAHASHI, TAKASHI MATSUBARA: "Data Augmentation using Random Image Cropping and Patching for Deep CNNs", 《JOURNAL OF LATEX CLASS FILES》 *
SAINING XIE,ROSS GIRSHICK,PIOTR DOLLAR,ZHUOWEN TU,KAIMING HE: "Aggregated Residual Transformations for Deep Neural Networks", 《ARXIV:1611.05431V2 [CS.CV]》 *
YANG NIANCONG, REN QIONG, ZHANG CHENGZHE, ZHOU ZIYU: "Research on Image Feature Recognition Based on Convolutional Neural Networks", 《信息与电脑》 (Information & Computer) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110348537A (en) * 2019-07-18 2019-10-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
TWI749423B (en) * 2019-07-18 2021-12-11 大陸商北京市商湯科技開發有限公司 Image processing method and device, electronic equipment and computer readable storage medium
US11481574B2 (en) 2019-07-18 2022-10-25 Beijing Sensetime Technology Development Co., Ltd. Image processing method and device, and storage medium
CN110522440A (en) * 2019-08-12 2019-12-03 广州视源电子科技股份有限公司 Electrocardiosignal recognition device based on grouping convolution neural network
CN112598045A (en) * 2020-12-17 2021-04-02 中国工商银行股份有限公司 Method for training neural network, image recognition method and image recognition device

Also Published As

Publication number Publication date
CN109978069B (en) 2020-10-09

Similar Documents

Publication Publication Date Title
Thai et al. Image classification using support vector machine and artificial neural network
CN109978069B (en) Method for reducing overfitting phenomenon of ResNeXt model in image classification
CN111079795B (en) Image classification method based on CNN (content-centric networking) fragment multi-scale feature fusion
CN106326288B (en) Image search method and device
CN109063112B (en) Rapid image retrieval method, model and model construction method based on multitask learning deep semantic hash
CN111275107A (en) Multi-label scene image classification method and device based on transfer learning
CN103258210B (en) A kind of high-definition image classification method based on dictionary learning
WO2022042043A1 (en) Machine learning model training method and apparatus, and electronic device
CN108960422B (en) Width learning method based on principal component analysis
CN108960301B (en) Ancient Yi-nationality character recognition method based on convolutional neural network
CN109948714A (en) Chinese scene text row recognition methods based on residual error convolution sum recurrent neural network
CN109960763A (en) A kind of photography community personalization friend recommendation method based on user's fine granularity photography preference
CN106339719A (en) Image identification method and image identification device
CN104933445A (en) Mass image classification method based on distributed K-means
CN111126347B (en) Human eye state identification method, device, terminal and readable storage medium
CN108564166A (en) Based on the semi-supervised feature learning method of the convolutional neural networks with symmetrical parallel link
CN109344709A (en) A kind of face generates the detection method of forgery image
Jumutc et al. Fixed-size Pegasos for hinge and pinball loss SVM
CN108345633A (en) A kind of natural language processing method and device
CN109325513A (en) A kind of image classification network training method based on magnanimity list class single image
CN107491782A (en) Utilize the image classification method for a small amount of training data of semantic space information
CN113283524A (en) Anti-attack based deep neural network approximate model analysis method
CN113554100A (en) Web service classification method for enhancing attention network of special composition picture
Yu et al. Deep metric learning with dynamic margin hard sampling loss for face verification
CN104573726B (en) Facial image recognition method based on the quartering and each ingredient reconstructed error optimum combination

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant