CN111242063A - Small sample classification model construction method based on transfer learning and iris classification application - Google Patents

Small sample classification model construction method based on transfer learning and iris classification application Download PDF

Info

Publication number
CN111242063A
CN111242063A (application CN202010053032.3A)
Authority
CN
China
Prior art keywords
model
small sample
training
classification
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010053032.3A
Other languages
Chinese (zh)
Other versions
CN111242063B (en)
Inventor
陈健美
王玉玺
王国辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202010053032.3A priority Critical patent/CN111242063B/en
Publication of CN111242063A publication Critical patent/CN111242063A/en
Application granted granted Critical
Publication of CN111242063B publication Critical patent/CN111242063B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a small sample classification model construction method based on transfer learning and the application of a small sample classification model constructed by the method to iris image classification. An ICP-VGG model is constructed by transfer learning from the VGG16 model; the activation functions of the fully connected layers in the custom network and the Dropout ratio of the Dropout layer are configured according to the iris image task; the network is fine-tuned and the model training parameters are set; a small sample iris data set is acquired and subjected to data preprocessing and data enhancement; the model is trained and verified, and the recognition result image is output. The method enables a deep learning model to be applied effectively in the small sample iris field, reduces overfitting and improves recognition accuracy.

Description

Small sample classification model construction method based on transfer learning and iris classification application
Technical Field
The invention belongs to the technical field of computer imaging, and particularly relates to a small sample classification model construction method based on transfer learning and its application to iris image classification.
Background
With the arrival of the big data era and the rapid development of computing devices, deep learning has entered a period of rapid growth. In recent years, relying on the strong expressive power of deep learning models and on large-scale training data sets, deep learning has achieved remarkable results in fields such as computer vision and speech recognition; in image classification in particular it has shown explosive growth, and classification accuracy on large-scale data sets keeps improving. Large-scale data sets are the cornerstone of these results, but in practical applications acquiring them is expensive in manpower and material resources, or is restricted by the nature of certain domains. Small sample data sets are therefore more common in practice than large-scale ones. In the iris field, for example, the particularity of the physiological structure makes iris images difficult to acquire, so labelled iris image data sets are relatively small.
In a small sample iris data set, the lack of training data makes it difficult for a traditional deep learning model to obtain ideal results, and overfitting often occurs: the model performs well on the training set, with errors tending to zero, but performs poorly on the test set and achieves low accuracy. This is because a very complex deep model easily treats the noise of the small sample iris training set as a feature of the whole sample, so the learned model generalizes poorly to the test set. In order to better apply deep learning models to the small sample iris field, reduce overfitting, reduce the error of iris image classification results and improve recognition accuracy, the invention provides an iris image recognition method based on convolutional neural network transfer learning.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a small sample classification model construction method based on transfer learning and its application to iris image classification, which enables a deep learning model to be applied effectively in the small sample iris field, reduces overfitting and improves recognition accuracy.
The technical scheme adopted by the invention is as follows:
a small sample classification model construction method based on transfer learning comprises the following steps:
step 1, removing the three fully connected layers in the pre-trained P-VGG16 model and retaining the 5 convolution blocks; adding a custom network after the convolution blocks to construct an ICP-VGG model; the custom network comprises, in sequence, a Flatten layer, a first fully connected layer, a Dropout layer and a second fully connected layer;
step 2, configuring the number of neurons of the fully connected layers, the activation functions of the fully connected layers and the Dropout ratio of the Dropout layer in the custom network; the activation functions of the first and second fully connected layers are selected from Sigmoid, Tanh, ReLU, PReLU, ELU or SoftMax; the number of neurons of the first fully connected layer is 512 or 1024, and the number of neurons of the second fully connected layer is equal to the number of classes in the sample set to be classified; the Dropout ratio of the Dropout layer is set;
step 3, training the ICP-VGG model network; determining the loss function of the ICP-VGG model according to the type of problem to be solved; setting the model training parameters; determining the optimizer of the ICP-VGG model; and thereby obtaining a convolutional neural network model suitable for image recognition.
Further, the optimizer is RMSProp, and the learning rate of RMSProp is set according to a dynamic learning-rate formula in which lr₀ is the initial learning rate, Epoch is the number of times the model runs on the data set, and rate is the attenuation value;
further, the method for training the ICP-VGG model is as follows: the weights of the first 4 convolution blocks of the model are kept unchanged, and only the last convolution block and the custom network of the model are trained;
further, the loss function is selected as follows: for a binary classification problem, the binary cross entropy loss function is used; for a multi-class problem, the categorical cross entropy loss function is used; for a regression problem, the mean square error loss function is used; for a sequence learning problem, the connectionist temporal classification (CTC) function is used;
further, the model training parameters comprise the number of times the model runs (epochs) and the batch size of the model;
the invention also provides an iris image recognition method based on the convolutional neural network model suitable for image recognition constructed by the above method: a small sample iris image data set is acquired and divided into a training set and a test set, and data preprocessing and data enhancement are performed on each; the processed training set and test set are then used to train and test the convolutional neural network model constructed by the above method, so as to obtain the classification result of the iris images.
Further, the data preprocessing method is as follows: the size of the iris images in the data set is modified to 224 × 224, and the single-channel images are converted into three-channel images;
further, the data enhancement method is as follows: the ImageDataGenerator method in Keras is used to perform data enhancement on the data in the training set and the test set, with the parameters rescale=1./255, shear_range=0.2, zoom_range=0.2 and horizontal_flip=True.
The invention has the beneficial effects that:
(1) The invention constructs a small sample classification model based on transfer learning, which reduces overfitting, improves recognition accuracy and obtains ideal results.
(2) The pre-trained VGG16 model is migrated to construct an ICP-VGG model applicable to small sample iris data sets. This makes full use of a network that already has a good classification basis and transfers knowledge through the parameters shared between the source-domain and target-domain models, which is a very efficient way to apply deep learning to small image data sets.
(3) The activation functions of the fully connected layers in the custom network and the Dropout ratio of the Dropout layer are configured according to the iris image task. The ReLU activation function is selected for the first fully connected layer, which enhances the fitting ability of the ICP-VGG model, and the SoftMax function is used in the second fully connected layer for classification. The Dropout ratio of the Dropout layer is set to 0.5, so that half of the features of the first fully connected layer are randomly discarded during training, further reducing overfitting of the ICP-VGG model.
(4) The network is fine-tuned and the model training parameters are set. Fine-tuning the ICP-VGG model and freezing part of the network weights avoids the difficulty of training from scratch and effectively prevents overfitting of the network; compared with training from scratch, fine-tuning saves a large amount of computing resources and computing time and improves computational efficiency and accuracy. Categorical cross entropy is chosen as the loss function, which better measures the gap between the probability distribution output by the network and the true distribution of the labels. RMSProp is selected as the optimizer of the model and a dynamic learning rate is assigned, so that the network weights can be better updated based on the training data and the loss function.
(5) A small sample iris data set is acquired, and data preprocessing and data enhancement are performed on it. Preprocessing makes the raw data more suitable for processing by a neural network and improves the effect of the algorithm. Data enhancement generates more training data from the existing training samples, so that the model sees more data and generalizes better.
Drawings
FIG. 1 is a flow chart of the method implementation principle of the present invention;
FIG. 2 is a model architecture diagram of the migration model VGG16 of the present invention;
FIG. 3 is a simplified network architecture diagram of the CP-VGG16 model of the present invention;
FIG. 4 is a simplified network architecture diagram of the ICP-VGG model of the present invention;
FIG. 5 is a simplified network structure diagram for fine tuning of the ICP-VGG model of the present invention;
FIG. 6 is an image of the recognition result output by the ICP-VGG model of the invention on a small sample iris data set.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to better apply a deep learning model in the field of small sample irises, reduce overfitting and improve identification accuracy, as shown in fig. 1, the invention provides a small sample classification model construction method based on transfer learning, which comprises the following steps:
step 1: constructing an ICP-VGG model based on VGG16 model transfer learning
(1) The VGG16 model pre-trained on the large-scale ImageNet data set (containing about 1.4 million pictures) is denoted P-VGG16. The initial input size of the model is 224 × 224, and the network structure is shown in FIG. 2. The model is composed of convolution blocks and three fully connected layers, where the convolution blocks consist of 13 convolution layers and 5 pooling layers (max pooling) arranged in five groups: the first and second groups each consist of two convolution layers and one pooling layer, and the third, fourth and fifth groups each consist of three convolution layers and one pooling layer.
(2) The last three fully connected layers in P-VGG16 are removed, and the convolutional base of P-VGG16 is retained and denoted CP-VGG16, as shown in FIG. 3.
(3) CP-VGG16 is migrated to the small sample iris data set as the convolutional base, and a four-layer custom network is added after CP-VGG16 to construct the ICP-VGG model, which has 22 layers in total, as shown in FIG. 4. The custom network comprises, in sequence, a Flatten layer, a first fully connected layer, a Dropout layer and a second fully connected layer. The Flatten layer flattens the input, i.e. converts the multi-dimensional input into one dimension. The fully connected layers map the learned features to the sample labels for classification. The Dropout layer randomly discards some of the output features of the preceding layer during training in order to reduce overfitting of the model on the small sample data set.
Step 2: and configuring the number of neurons and an activation function of a full connection layer in a custom network and a Dropout ratio of a Dropout layer according to the iris image classification task.
2.1, determining the activation functions of the fully connected layers. In the model, an activation function defines the mapping from input neurons to output neurons; its main role is to add non-linear factors to the neural network, so that the network has better fitting ability and can solve more complex problems. Common activation functions are Sigmoid, Tanh, ReLU, PReLU, ELU and SoftMax. A ReLU or ELU activation function is generally used for classification problems. Since the invention studies the iris image classification problem, ReLU is used as the activation function of the first fully connected layer and SoftMax is used in the second fully connected layer for classification.
2.2, the number of neurons in the first fully connected layer is set to 1024, and the number of neurons in the second fully connected layer is set equal to the number of classes in the iris data set, which is 407 in this embodiment.
2.3, the Dropout ratio of the Dropout layer is set to 0.5, so that half of the features of the first fully connected layer are randomly discarded during training, further reducing overfitting of the ICP-VGG model.
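To make steps 1 and 2 concrete, the following is a minimal Keras sketch (assuming TensorFlow 2.x); the variable names are illustrative and not taken from the patent.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Convolutional base: VGG16 pre-trained on ImageNet with its three fully
# connected layers removed (the CP-VGG16 of step 1).
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Custom four-layer head of step 2: Flatten, Dense(1024, ReLU), Dropout(0.5),
# Dense(407, SoftMax) -- 407 being the number of iris classes in this embodiment.
model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(407, activation="softmax"),
])
```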
And step 3: fine tuning network, setting model training related parameters
3.1, the first 4 convolution blocks of the ICP-VGG model are frozen, i.e. these 4 blocks keep their weights unchanged during model training, and only the 5th convolution block and the custom network of the model (three convolution layers, one pooling layer, one Flatten layer, one first fully connected layer, one Dropout layer and one second fully connected layer) are trained, as shown in FIG. 5.
3.2, the loss function of the ICP-VGG model is determined according to the type of problem to be solved. For example, for a binary classification problem the binary cross entropy loss function may be used; for a multi-class problem the categorical cross entropy loss function may be used; for a regression problem the mean squared error loss function may be used; and for a sequence learning problem the connectionist temporal classification (CTC) function may be used. The problem studied by the invention is iris image classification, so the loss function chosen is categorical cross entropy.
3.3, the number of runs (Epoch) of the model is set to 50, i.e. the model runs 50 times over the entire data set, and the batch size of the model is set to 10, i.e. the number of samples trained at one time is 10.
3.4, determining the optimizer of the ICP-VGG model. In the model, the optimizer updates the network parameters during training so that the model output approaches the optimal value, thereby minimizing the loss function. Since the invention addresses the iris image classification problem, RMSProp is selected as the optimizer of the ICP-VGG model, and the learning rate lr of the RMSProp optimizer is set as a dynamic learning rate according to equation (1).
In equation (1), lr is a function of the initial learning rate lr₀, the number of epochs (Epoch) and the attenuation value rate. Epoch is the number of times the model runs on the data set; lr₀ is the initial learning rate, set to 0.0001 in the invention; and rate is the attenuation value, taken as 1.8 × 10⁻⁸. With RMSProp as the optimizer and a dynamic learning rate assigned, the network weights can be better updated based on the training data and the loss function.
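A hedged sketch of the fine-tuning and training configuration of step 3 follows, again assuming TensorFlow 2.x Keras. Because equation (1) is not reproduced in the text, the decay used in dynamic_lr below is only an illustrative inverse-time assumption, not the patent's actual formula; conv_base and model refer to the objects built in the previous sketch.

```python
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import LearningRateScheduler

# Step 3.1: freeze convolution blocks 1-4, leave block 5 trainable.
conv_base.trainable = True
for layer in conv_base.layers:
    layer.trainable = layer.name.startswith("block5")

initial_lr = 1e-4    # lr0 in the description
decay_rate = 1.8e-8  # attenuation value "rate" in the description

def dynamic_lr(epoch, lr):
    # Assumed inverse-time decay; the patent's equation (1) is not given in the text.
    return initial_lr / (1.0 + decay_rate * epoch)

lr_schedule = LearningRateScheduler(dynamic_lr)

# Steps 3.2-3.4: categorical cross entropy loss and RMSProp optimizer.
model.compile(optimizer=optimizers.RMSprop(learning_rate=initial_lr),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```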
Based on the small sample classification model based on transfer learning constructed by the above method, the invention also provides an iris image recognition method:
1.1, the iris data set CASIA-Iris-Lamp is downloaded from the website of the Institute of Automation, Chinese Academy of Sciences. The data set contains left-eye and right-eye images; the right eye is taken as the research object, giving 407 classes of pictures and 8050 pictures in total. 3/5 of the pictures in each class are assigned to the training set and 2/5 to the test set.
2.2, the pictures in the training set and the test set are preprocessed. The original pictures in the data set are 640 × 480 pixels and single-channel; for network training their size is modified to 224 × 224 and the single-channel images are converted into three-channel images.
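As a minimal illustration of this preprocessing step (an assumption using Pillow, not code from the patent), a grayscale CASIA-Iris-Lamp image can be resized and expanded to three channels as follows; the function name preprocess_iris is hypothetical.

```python
from PIL import Image

def preprocess_iris(path):
    """Resize a 640x480 single-channel iris image to 224x224 and make it three-channel."""
    img = Image.open(path)        # original grayscale image
    img = img.resize((224, 224))  # match the 224x224 input of the ICP-VGG model
    return img.convert("RGB")     # replicate the grey channel into R, G and B
```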
3.3, since there is relatively little training data in the iris sample set, the ImageDataGenerator method in Keras is used to perform data enhancement on the data in the training set and the test set in order to reduce overfitting. The specific parameters of the ImageDataGenerator are set as follows (a configuration sketch follows this list):
a. rescale=1./255, i.e. the pixel values 0-255 in a picture are scaled to between 0 and 1;
b. shear_range=0.2, i.e. the intensity of the random shear transformation of the image is set to 0.2;
c. zoom_range=0.2, i.e. the range of random zooming of the image is set to 0.2;
d. horizontal_flip=True, i.e. half of the images are randomly flipped horizontally.
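Taken together, the parameters above correspond to an ImageDataGenerator configuration like the sketch below; the directory layout data/train and data/test is a hypothetical example, since the patent does not specify how the files are organised.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255,
                             shear_range=0.2,
                             zoom_range=0.2,
                             horizontal_flip=True)

# Hypothetical layout: one sub-directory per iris class.
train_gen = datagen.flow_from_directory("data/train",
                                        target_size=(224, 224),
                                        batch_size=10,
                                        class_mode="categorical")
test_gen = datagen.flow_from_directory("data/test",
                                       target_size=(224, 224),
                                       batch_size=10,
                                       class_mode="categorical")
```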
In order to verify the effect of the constructed convolutional neural network model on iris image recognition, the ICP-VGG model is trained and tested on the small sample iris data set, and the accuracy curve and the loss-value curve are output according to the classification results, as shown in FIG. 6. The accuracy on the test set is maintained at about 97.63% and the loss value is about 0.1898. The results show that the model effectively avoids overfitting on the small sample iris data set, reduces the error of the iris image classification results and greatly improves the accuracy.
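For completeness, training and evaluation with the objects from the earlier sketches would look roughly like this (epochs and batch size follow step 3.3; the code is an assumption, not the patent's implementation):

```python
# Train for 50 epochs with the dynamic learning-rate callback, then evaluate.
history = model.fit(train_gen,
                    epochs=50,
                    validation_data=test_gen,
                    callbacks=[lr_schedule])
test_loss, test_acc = model.evaluate(test_gen)
print(f"test accuracy: {test_acc:.4f}, test loss: {test_loss:.4f}")
```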
The above embodiments are only used for illustrating the design idea and features of the present invention, and the purpose of the present invention is to enable those skilled in the art to understand the content of the present invention and implement the present invention accordingly, and the protection scope of the present invention is not limited to the above embodiments. Therefore, all equivalent changes and modifications made in accordance with the principles and concepts disclosed herein are intended to be included within the scope of the present invention.

Claims (10)

1. A small sample classification model construction method based on transfer learning is characterized by comprising the following steps:
step 1, removing the three fully connected layers in the pre-trained P-VGG16 model and retaining the 5 convolution blocks; adding a custom network after the convolution blocks to construct an ICP-VGG model;
step 2, configuring the number of neurons of the fully connected layers, the activation functions of the fully connected layers and the Dropout ratio of the Dropout layer in the custom network;
step 3, training the ICP-VGG model network; determining the loss function of the ICP-VGG model according to the type of problem to be solved; setting the model training parameters; determining the optimizer of the ICP-VGG model; and thereby obtaining a convolutional neural network model suitable for image recognition.
2. The small sample classification model construction method based on transfer learning according to claim 1, wherein the custom network comprises, in sequence, a Flatten layer, a first fully connected layer, a Dropout layer and a second fully connected layer.
3. The small sample classification model construction method based on transfer learning according to claim 2, wherein the activation functions of the first fully connected layer and the second fully connected layer are selected from Sigmoid, Tanh, ReLU, PReLU, ELU or SoftMax; the number of neurons of the first fully connected layer is 512 or 1024, and the number of neurons of the second fully connected layer is equal to the number of classes in the sample set to be classified.
4. The small sample classification model construction method based on transfer learning according to claim 1, wherein the method for training the ICP-VGG model is as follows: the weights of the first 4 convolution blocks of the model are kept unchanged, and only the last convolution block and the custom network of the model are trained.
5. The small sample classification model construction method based on transfer learning according to claim 1, wherein the optimizer is RMSProp and the learning rate of RMSProp is set according to a dynamic learning-rate formula in which lr₀ is the initial learning rate, Epoch is the number of times the model runs on the data set, and rate is the attenuation value.
6. The small sample classification model construction method based on transfer learning according to claim 1, wherein the model training parameters comprise the number of times the model runs and the batch size of the model.
7. The small sample classification model construction method based on transfer learning according to any one of claims 1 to 6, wherein the loss function is selected as follows: for a binary classification problem, the binary cross entropy loss function is used; for a multi-class problem, the categorical cross entropy loss function is used; for a regression problem, the mean square error loss function is used; for a sequence learning problem, the connectionist temporal classification function is used.
8. An iris image recognition method based on the small sample classification model constructed by the method of claim 7, characterized in that a small sample iris image data set is obtained and divided into a training set and a test set, data preprocessing and data enhancement are performed on each, and the processed training set and test set are used to train and test the convolutional neural network model suitable for image recognition constructed by the method, so as to obtain the classification result of the iris images.
9. The iris image recognition method according to claim 8, wherein the data preprocessing method is as follows: the size of the iris images in the data set is modified to 224 × 224, and the single-channel images are converted into three-channel images.
10. The iris image recognition method according to claim 8, wherein the data enhancement method is as follows: the ImageDataGenerator method in Keras is used to perform data enhancement on the data in the training set and the test set, with the parameters rescale=1./255, shear_range=0.2, zoom_range=0.2 and horizontal_flip=True.
CN202010053032.3A 2020-01-17 2020-01-17 Small sample classification model construction method based on transfer learning and iris classification application Active CN111242063B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010053032.3A CN111242063B (en) 2020-01-17 2020-01-17 Small sample classification model construction method based on transfer learning and iris classification application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010053032.3A CN111242063B (en) 2020-01-17 2020-01-17 Small sample classification model construction method based on transfer learning and iris classification application

Publications (2)

Publication Number Publication Date
CN111242063A true CN111242063A (en) 2020-06-05
CN111242063B CN111242063B (en) 2023-08-25

Family

ID=70871263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010053032.3A Active CN111242063B (en) 2020-01-17 2020-01-17 Small sample classification model construction method based on transfer learning and iris classification application

Country Status (1)

Country Link
CN (1) CN111242063B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507648A (en) * 2020-06-30 2020-08-07 航天宏图信息技术股份有限公司 Territorial space planning evaluation system
CN111783571A (en) * 2020-06-17 2020-10-16 陕西中医药大学 Cervical cell automatic classification model establishment and cervical cell automatic classification method
CN112115265A (en) * 2020-09-25 2020-12-22 中国科学院计算技术研究所苏州智能计算产业技术研究院 Small sample learning method in text classification
CN112529094A (en) * 2020-12-22 2021-03-19 中国医学科学院北京协和医院 Medical image classification and identification method and system
CN112949454A (en) * 2021-02-26 2021-06-11 西安工业大学 Iris identification method based on small sample learning
CN113116363A (en) * 2021-04-15 2021-07-16 西北工业大学 Method for judging hand fatigue degree based on surface electromyographic signals
CN113627271A (en) * 2021-07-18 2021-11-09 武汉大学 Mobile rock mineral rapid intelligent identification method
CN113627501A (en) * 2021-07-30 2021-11-09 武汉大学 Animal image type identification method based on transfer learning
CN114295967A (en) * 2021-07-26 2022-04-08 桂林电子科技大学 Analog circuit fault diagnosis method based on migration neural network
CN115937153A (en) * 2022-12-13 2023-04-07 北京瑞医博科技有限公司 Model training method and device, electronic equipment and computer storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409342A (en) * 2018-12-11 2019-03-01 北京万里红科技股份有限公司 A kind of living iris detection method based on light weight convolutional neural networks
CN109508650A (en) * 2018-10-23 2019-03-22 浙江农林大学 A kind of wood recognition method based on transfer learning
CN110084156A (en) * 2019-04-12 2019-08-02 中南大学 A kind of gait feature abstracting method and pedestrian's personal identification method based on gait feature

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109508650A (en) * 2018-10-23 2019-03-22 浙江农林大学 A kind of wood recognition method based on transfer learning
CN109409342A (en) * 2018-12-11 2019-03-01 北京万里红科技股份有限公司 A kind of living iris detection method based on light weight convolutional neural networks
CN110084156A (en) * 2019-04-12 2019-08-02 中南大学 A kind of gait feature abstracting method and pedestrian's personal identification method based on gait feature

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
VINEET KUMAR: "Iris Localization Based on Integro-Differential Operator for Unconstrained Infrared Iris Images" *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783571A (en) * 2020-06-17 2020-10-16 陕西中医药大学 Cervical cell automatic classification model establishment and cervical cell automatic classification method
CN111507648A (en) * 2020-06-30 2020-08-07 航天宏图信息技术股份有限公司 Territorial space planning evaluation system
CN112115265A (en) * 2020-09-25 2020-12-22 中国科学院计算技术研究所苏州智能计算产业技术研究院 Small sample learning method in text classification
CN112529094A (en) * 2020-12-22 2021-03-19 中国医学科学院北京协和医院 Medical image classification and identification method and system
CN112949454A (en) * 2021-02-26 2021-06-11 西安工业大学 Iris identification method based on small sample learning
CN112949454B (en) * 2021-02-26 2023-04-18 西安工业大学 Iris recognition method based on small sample learning
CN113116363A (en) * 2021-04-15 2021-07-16 西北工业大学 Method for judging hand fatigue degree based on surface electromyographic signals
CN113627271A (en) * 2021-07-18 2021-11-09 武汉大学 Mobile rock mineral rapid intelligent identification method
CN114295967A (en) * 2021-07-26 2022-04-08 桂林电子科技大学 Analog circuit fault diagnosis method based on migration neural network
CN113627501A (en) * 2021-07-30 2021-11-09 武汉大学 Animal image type identification method based on transfer learning
CN115937153A (en) * 2022-12-13 2023-04-07 北京瑞医博科技有限公司 Model training method and device, electronic equipment and computer storage medium

Also Published As

Publication number Publication date
CN111242063B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN111242063A (en) Small sample classification model construction method based on transfer learning and iris classification application
CN110309798B (en) Face spoofing detection method based on domain self-adaptive learning and domain generalization
CN109345507B (en) Dam image crack detection method based on transfer learning
CN110188824B (en) Small sample plant disease identification method and system
CN103927531B (en) It is a kind of based on local binary and the face identification method of particle group optimizing BP neural network
CN108648188B (en) No-reference image quality evaluation method based on generation countermeasure network
CN112699956B (en) Neuromorphic visual target classification method based on improved impulse neural network
CN112528830B (en) Lightweight CNN mask face pose classification method combined with transfer learning
WO2021051987A1 (en) Method and apparatus for training neural network model
CN113361623B (en) Medical image classification method combining lightweight CNN with transfer learning
CN113420794B (en) Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning
Ying et al. Human ear recognition based on deep convolutional neural network
CN109034184A (en) A kind of grading ring detection recognition method based on deep learning
Chen et al. Agricultural remote sensing image cultivated land extraction technology based on deep learning
Yang A CNN-based broad learning system
CN108596044B (en) Pedestrian detection method based on deep convolutional neural network
CN112487938A (en) Method for realizing garbage classification by utilizing deep learning algorithm
CN114399018B (en) Efficient ientNet ceramic fragment classification method based on sparrow optimization of rotary control strategy
CN110287759B (en) Eye fatigue detection method based on simplified input convolutional neural network O-CNN
Yang et al. Research on digital camouflage pattern generation algorithm based on adversarial autoencoder network
CN114647760A (en) Intelligent video image retrieval method based on neural network self-temperature cause and knowledge conduction mechanism
Song et al. A Novel Face Recognition Algorithm for Imbalanced Small Samples.
Song et al. Apple disease recognition based on small-scale data sets
CN112991257B (en) Heterogeneous remote sensing image change rapid detection method based on semi-supervised twin network
Ma et al. A novel algorithm of image enhancement based on pulse coupled neural network time matrix and rough set

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant