CN114444600A - Small sample image classification method based on memory enhanced prototype network - Google Patents
- Publication number
- CN114444600A CN114444600A CN202210105376.3A CN202210105376A CN114444600A CN 114444600 A CN114444600 A CN 114444600A CN 202210105376 A CN202210105376 A CN 202210105376A CN 114444600 A CN114444600 A CN 114444600A
- Authority
- CN
- China
- Prior art keywords
- sample
- image
- prototype
- query
- follows
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a small sample image classification method based on a memory enhanced prototype network. Meta-learning is currently the mainstream learning paradigm for the small sample image classification task, and the prototype network is its most classical model, achieving excellent classification performance on this task. The prototype network uses an episodic learning strategy that repeatedly samples tasks and data from the meta-training data set to obtain prior knowledge that adapts quickly to new tasks. However, this random sampling prevents the prototype network from fully exploiting all the information in the meta-training data set. The invention therefore adds a memory element to the prototype network to memorize typical sample representations in the meta-training data set, so that the prior information in the meta-training data set can be fully utilized end-to-end to correct the prototypes.
Description
Technical Field
The invention relates to a small sample image classification method based on a memory enhanced prototype network, and belongs to the field of small sample image classification.
Background
In recent years, with the continuous development of deep learning technology, great breakthroughs have been made across the research fields of artificial intelligence. However, this success relies on training large-capacity deep convolutional neural networks with massive amounts of labeled data. This training strategy greatly limits the application of deep learning techniques in many practical situations, because in many cases only a small number of labeled samples are available. In this context, small sample learning is becoming a new research focus in computer vision and machine learning. It is a very challenging research topic, which aims to classify new image categories using only a small number of samples.
Meta-learning, also called learning to learn, decomposes a data set into different tasks in the meta-training stage, takes generalization to test samples as the learning objective, and learns the parts shared across tasks; it has gradually become the mainstream approach to the small sample learning problem. Metric-based deep meta-learning methods achieve good performance on the small sample image classification task: a deep neural network projects image samples into an embedding space, the similarity between samples is computed in that space, and similar samples are assigned to the same category. The classical model is the prototypical network proposed by Snell et al. (Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning [C]// Proceedings of the 31st Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA: NIPS, 2017: 4077-4087.), which takes the mean of all support sample features of a class as its prototype and assigns each query sample to the category of its nearest prototype. Subsequent work studied how to compute better prototypes within each category. For example, Fort (Fort S. Gaussian prototypical networks for few-shot learning on Omniglot [J]. arXiv preprint arXiv:1708.02735, 2017.) proposed using the centroid of each class of samples as the prototype representation and measuring similarity with the Mahalanobis distance, computing the per-class covariance as the variance matrix in the distance function. Hilliard et al. (Hilliard N, Phillips L, Howland S, et al. Few-shot learning with metric-agnostic conditional embeddings [J]. arXiv preprint arXiv:1802.04376, 2018.) added a relation network module, composed of fully connected layers, after the feature extraction network to further map each class of samples into a 128-dimensional vector, obtaining a prototype representation of each class by parametric learning. Ren et al. (Ren M, Triantafillou E, Ravi S, et al. Meta-learning for semi-supervised few-shot classification [J]. arXiv preprint arXiv:1803.00676, 2018.) used unlabeled samples to refine the prototype representation, but the confidence of prototypes produced this way is greatly reduced, since many unlabeled samples come from different classes.
Although these improved methods can correct the prototype computed from a small number of support samples, they still use the episodic meta-learning paradigm and cannot sufficiently mine the information in the base class sample data. The invention therefore discloses a small sample image classification method based on a memory enhanced prototype network.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in the small sample image classification method based on the memory enhanced prototype network, a memory element is added to the prototype network to memorize typical sample representations in the base class data set, so that the prior information in the base class data set can be fully utilized end-to-end to correct the prototypes.
The invention adopts the following technical scheme for solving the technical problems:
the small sample image classification method based on the memory enhanced prototype network comprises the following steps:
step 1, processing an input image data set. The input image data set is denoted I and is randomly divided into two image subsets: a meta-training image data set $I_{train}$ and a meta-test image data set $I_{test}$. From $I_{train}$, images of C categories are randomly extracted; within each category, K image samples are randomly drawn as support samples and, from the remaining images, Q image samples are randomly drawn as query samples, completing the construction of a C-way K-shot classification task on the meta-training data set. The set of support image samples of the c-th class is denoted $S_c=\{(x_k^c, y_k^c)\}_{k=1}^{K}$, where $x_k^c$ is the k-th support sample image and $y_k^c$ is its category label. The set of query image samples of the c-th class is denoted $Q_c=\{(\tilde{x}_q^c, \tilde{y}_q^c)\}_{q=1}^{Q}$, where $\tilde{x}_q^c$ is the q-th query sample image and $\tilde{y}_q^c$ is its category label. C-way K-shot classification tasks are constructed on the meta-test data set in the same way, with the support and query sets of the c-th class denoted analogously.
Step 2, extracting the features of the image samples and initializing the memory element during meta-training. The backbone of the model is assumed to be a convolutional neural network $f_\theta$ with parameters $\theta$. Inputting the k-th training support sample into the backbone network yields its D-dimensional feature $e_k^c = f_\theta(x_k^c) \in \mathbb{R}^D$; inputting the q-th query sample image yields its D-dimensional feature $\tilde{e}_q = f_\theta(\tilde{x}_q)$. The external memory element is represented as a matrix M whose j-th element M(j) is a vector of the same dimension D as the features; all elements are initialized with random numbers in the range [0, 1].
Step 3, calculating the initial prototypes. After the feature extraction of step 2, the set of support features of the c-th class is $\{e_k^c\}_{k=1}^{K}$. During meta-training, the initial prototype of each support class is computed as the mean of its support features: $p_c = \frac{1}{K}\sum_{k=1}^{K} e_k^c$.
Step 4, the read operation and the write operation of the memory element M. The read operation matches the initial prototype $p_c$ against the memory matrix M, obtaining a read weight vector by computing the similarity between the initial prototype and each matrix element. At the same time, the matching between the initial prototype $p_c$ of the c-th class and the elements of the memory matrix M is used to update M, completing the write operation.
Step 5, linear synthesis of the corrected prototype. Based on the read weight vector, the matrix element of the memory device matched to the initial prototype $p_c$ is obtained and denoted $r_c$. The corrected prototype of the c-th class is computed as a linear weighted sum of $p_c$ and $r_c$.
Step 6, calculating the training loss function. The query image samples are input into the backbone network $f_\theta$, and the feature of the q-th query sample is denoted $\tilde{e}_q$. Similarity scores between $\tilde{e}_q$ and each class's corrected prototype are computed with the Euclidean distance and converted by a softmax function into the probability that the sample belongs to the c-th support class. The cross-entropy loss between the probability outputs and the true label values $\tilde{y}_q$ is computed, and the parameters $\theta$ of the backbone network are optimized with a gradient descent algorithm to complete meta-training.
Step 7, with the backbone network $f_\theta$ fixed after meta-training, the k-th test support sample is input into the backbone network to obtain its D-dimensional feature, and the q-th test query sample image is input to obtain its D-dimensional feature. The initial prototype of each support class is corrected using steps 3 and 4, the similarity between each query sample feature and each class is computed, and the query sample is assigned to the most similar class.
As a preferred embodiment of the present invention, step 1 is detailed as follows:
(1) The input image data set is denoted I and is randomly divided into two image subsets: the meta-training image data set $I_{train}$ and the meta-test image data set $I_{test}$.
(2) From $I_{train}$, images of C categories are randomly extracted; within each category, K image samples are randomly drawn as support samples and Q image samples are drawn from the remaining images as query samples, completing the construction of a C-way K-shot classification task on the meta-training data set. The set of support image samples of the c-th class is $S_c=\{(x_k^c, y_k^c)\}_{k=1}^{K}$, where $x_k^c$ is the k-th support sample image and $y_k^c$ is its category label; the set of query image samples of the c-th class is $Q_c=\{(\tilde{x}_q^c, \tilde{y}_q^c)\}_{q=1}^{Q}$, where $\tilde{x}_q^c$ is the q-th query sample image and $\tilde{y}_q^c$ is its category label.
(3) From $I_{test}$, images of C categories are randomly extracted in the same way, with K support samples and Q query samples per category, completing the construction of a C-way K-shot classification task on the meta-test data set; the support and query sets of the c-th class are denoted analogously.
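The episode construction of step 1 can be sketched as follows. This is a minimal sketch, not part of the patent text: the function name, the flat-array data layout, and the relabeling of the sampled classes to episode-local indices 0..C-1 are all illustrative assumptions.

```python
import numpy as np

def sample_episode(images, labels, C=5, K=1, Q=15, rng=None):
    """Sample one C-way K-shot episode: K support and Q query images per class.

    `images` is an (N, ...) array and `labels` an (N,) integer array.
    Sampled classes are relabeled to episode-local indices 0..C-1.
    """
    rng = np.random.default_rng(rng)
    classes = rng.choice(np.unique(labels), size=C, replace=False)
    support, s_lab, query, q_lab = [], [], [], []
    for c_idx, c in enumerate(classes):
        # Shuffle this class's indices, then split into support and query.
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.append(images[idx[:K]])
        s_lab += [c_idx] * K
        query.append(images[idx[K:K + Q]])
        q_lab += [c_idx] * Q
    return (np.concatenate(support), np.array(s_lab),
            np.concatenate(query), np.array(q_lab))
```

The same routine would be applied to $I_{train}$ during meta-training and to $I_{test}$ during meta-testing.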
As a preferred embodiment of the present invention, the detailed description of the steps described in step 2 is as follows:
(1) The backbone network of the model is a convolutional neural network $f_\theta$ with parameters $\theta$. The k-th training support sample is input into the backbone network to obtain its D-dimensional feature:
$$e_k^c = f_\theta(x_k^c) \in \mathbb{R}^D$$
The q-th query sample image is input into the backbone network to obtain its D-dimensional feature:
$$\tilde{e}_q = f_\theta(\tilde{x}_q) \in \mathbb{R}^D$$
(2) An empty matrix M is created as the memory element; each element of the matrix is a D-dimensional vector of the same dimension as the image sample features, the j-th element being M(j), initialized with uniform random values in the range [0, 1]:
$$M(j) \sim \mathcal{U}[0, 1]^D$$
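The memory initialization of step 2 can be sketched as below. The number of memory slots is a free hyperparameter that the text does not fix; the function name and signature are illustrative assumptions.

```python
import numpy as np

def init_memory(num_slots, feat_dim, seed=None):
    """Create the memory matrix M: one D-dimensional slot per row,
    filled with uniform random values in [0, 1] as described in step 2."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=(num_slots, feat_dim))
```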
as a preferred embodiment of the present invention, the detailed description of the steps described in step 3 is as follows:
(1) The set of support image sample features of the c-th class is $\{e_k^c\}_{k=1}^{K}$. The initial prototype feature of the c-th training support sample set is computed as:
$$p_c = \frac{1}{K}\sum_{k=1}^{K} e_k^c$$
(2) The initial prototype features of the C training sample sets are computed in turn according to this formula.
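The prototype computation of step 3 is the standard class-mean of prototypical networks; a minimal sketch (function name and the (C*K, D) feature layout are assumptions):

```python
import numpy as np

def class_prototypes(support_feats, support_labels, C):
    """Step 3: the initial prototype p_c is the mean of the K support
    features of class c. `support_feats` is (C*K, D); labels are 0..C-1."""
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(C)])
```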
As a preferred embodiment of the present invention, the detailed description of the steps described in step 4 is as follows:
(1) The similarity between the initial prototype $p_c$ of the c-th class and the j-th matrix element is computed to obtain the read weight vector $w_c(j)$; with a softmax-normalized similarity $\mathrm{sim}(\cdot,\cdot)$ (for example cosine similarity), the expression is:
$$w_c(j) = \frac{\exp(\mathrm{sim}(p_c, M(j)))}{\sum_{j'}\exp(\mathrm{sim}(p_c, M(j')))}$$
(2) The initial prototype $p_c$ of the c-th class is matched against the j-th element of the memory matrix M, and the matched memory element is updated to complete the write operation W, e.g.:
$$M(j^*) \leftarrow \tfrac{1}{2}\big(M(j^*) + p_c\big), \quad j^* = \arg\max_j w_c(j)$$
where W denotes the write operation; an average calculation may be employed, or another update rule substituted.
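One possible reading of step 4's read and write operations is sketched below. The patent only states that a "similarity" is computed and that the write "may employ an average calculation"; the cosine-similarity softmax read and the best-slot averaging write are therefore labeled assumptions, not the definitive mechanism.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(M, proto):
    """Read: match the prototype against every slot of M and return the
    read weight vector w(j) (here: softmax over cosine similarities,
    an assumed instantiation of the patent's 'similarity')."""
    sims = M @ proto / (np.linalg.norm(M, axis=1) * np.linalg.norm(proto) + 1e-8)
    return softmax(sims)

def memory_write(M, proto, w):
    """Write: update the best-matching slot by averaging it with the
    prototype, one reading of the 'average calculation' remark."""
    j = int(np.argmax(w))
    M = M.copy()
    M[j] = 0.5 * (M[j] + proto)
    return M
```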
As a preferred embodiment of the present invention, the detailed description of the steps described in step 5 is as follows:
obtaining and initializing prototypes in memory devices based on read weight vectorsThe matched matrix elements are denoted as rnThen the corrected prototype of category c is the initial prototypeAndis expressed as:
wherein alpha is an adjustable parameter.
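Step 5 then reduces to a one-line convex combination; the exact form $\alpha p_c + (1-\alpha) r_c$ is an assumption consistent with the text's "linear weighted sum" with adjustable $\alpha$:

```python
import numpy as np

def corrected_prototype(proto, memory_match, alpha=0.5):
    """Step 5: corrected prototype as a linear weighted sum of the
    initial prototype p_c and the matched memory element r_c."""
    return alpha * proto + (1.0 - alpha) * memory_match
```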
As a preferred embodiment of the present invention, the detailed description of the step described in step 6 is as follows:
(1) The query image samples are input into the backbone network $f_\theta$; the D-dimensional feature of the q-th query sample is:
$$\tilde{e}_q = f_\theta(\tilde{x}_q) \in \mathbb{R}^D$$
(2) The similarity between the q-th query sample feature and the c-th corrected prototype $\hat{p}_c$ is computed as:
$$s_{q,c} = -d(\tilde{e}_q, \hat{p}_c)$$
where $d(\cdot,\cdot)$ denotes the Euclidean distance function.
(3) The similarity values are converted into probability outputs with the softmax function:
$$P(y=c \mid \tilde{x}_q) = \frac{\exp(-d(\tilde{e}_q, \hat{p}_c))}{\sum_{c'=1}^{C}\exp(-d(\tilde{e}_q, \hat{p}_{c'}))}$$
(4) The cross-entropy loss between the probability outputs and the true label values $\tilde{y}_q$ is computed as:
$$L(\theta) = -\frac{1}{CQ}\sum_{q}\sum_{c=1}^{C}\mathbb{1}[\tilde{y}_q = c]\,\log P(y=c \mid \tilde{x}_q)$$
(5) The iterative formula for optimizing the parameters $\theta$ of the network is:
$$\theta \leftarrow \theta - \beta\,\nabla_\theta L(\theta)$$
where $\beta$ is the learning rate.
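The training objective of step 6 (negative squared Euclidean distances, softmax, mean cross-entropy) can be sketched numerically as follows; the function name and the squared-distance choice are assumptions, and gradient descent over $\theta$ is omitted since it depends on the backbone implementation:

```python
import numpy as np

def episode_loss(query_feats, prototypes, query_labels):
    """Step 6: distances to each corrected prototype, softmax over the
    negative distances, then mean cross-entropy against the true labels."""
    # Pairwise squared Euclidean distances, shape (num_queries, C).
    d = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(-np.log(p[np.arange(len(query_labels)), query_labels]
                         + 1e-12).mean())
```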
As a preferred embodiment of the present invention, the detailed description of the steps described in step 7 is as follows:
(1) With the backbone network $f_\theta$ and its parameters $\theta$ fixed after meta-training, the test support image sample set and the query image sample set are input into the backbone network to extract features. The k-th test support sample yields the D-dimensional feature $e_k^c = f_\theta(x_k^c)$, and the q-th test query sample image yields the D-dimensional feature $\tilde{e}_q = f_\theta(\tilde{x}_q)$.
(2) The initial prototype feature of the c-th test support sample set is computed as:
$$p_c = \frac{1}{K}\sum_{k=1}^{K} e_k^c$$
(3) The similarity between the initial prototype $p_c$ of the c-th class and the j-th matrix element is computed to obtain the read weight vector $w_c(j)$; the matrix element matched to the initial prototype, determined from the read weights, is denoted $r_c$. The corrected prototype of the c-th test support sample set is then:
$$\hat{p}_c = \alpha\, p_c + (1-\alpha)\, r_c$$
(4) The similarity between the q-th query sample feature and the c-th corrected prototype $\hat{p}_c$ is $-d(\tilde{e}_q, \hat{p}_c)$, where $d(\cdot,\cdot)$ denotes the Euclidean distance function.
(5) The similarity values are converted into probability outputs with the softmax function, and each query sample is assigned to the class with the highest probability:
$$\hat{y}_q = \arg\max_c \frac{\exp(-d(\tilde{e}_q, \hat{p}_c))}{\sum_{c'}\exp(-d(\tilde{e}_q, \hat{p}_{c'}))}$$
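The meta-test decision of step 7 is nearest-prototype classification; since softmax is monotone in the negative distance, the argmax over probabilities equals the argmin over distances. A minimal sketch (function name assumed):

```python
import numpy as np

def classify_queries(query_feats, prototypes):
    """Step 7: assign each query sample to the class of its nearest
    (corrected) prototype under Euclidean distance."""
    d = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)
```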
compared with the prior art, the technical scheme adopted by the invention has the following technical effects:
A memory element is added to the prototype network to memorize typical sample representations in the base class data set, so that the prior information in the base class data set can be fully utilized end-to-end to correct the prototypes and improve their representativeness.
Drawings
FIG. 1 is a flow chart of a small sample image classification method based on a memory enhanced prototype network according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
According to the invention, a memory element is added to the prototype network to memorize typical sample representations in the base class data set, so that the prior information in the base class data set can be fully utilized end-to-end to correct the prototypes.
Fig. 1 is a flowchart of the small sample image classification method based on a memory enhanced prototype network according to the present invention; as shown in Fig. 1, the specific steps are as follows:
Step 1: process the input image data set. Assume the input image data set is denoted I and is randomly divided into two image subsets: the meta-training image data set $I_{train}$ and the meta-test image data set $I_{test}$. From $I_{train}$ and $I_{test}$, images of C categories are randomly extracted; within each category, K image samples are randomly drawn as support samples and Q image samples are drawn from the remaining images as query samples, completing the construction of C-way K-shot classification tasks on the meta-training and meta-test sets.
Step 2: extract features of the image samples and initialize the memory element during meta-training. The training support sample set and query sample set are input into the convolutional neural network $f_\theta$ with parameters $\theta$ to extract features; the features of the k-th training support sample and the q-th query sample image are denoted $e_k^c = f_\theta(x_k^c)$ and $\tilde{e}_q = f_\theta(\tilde{x}_q)$. A blank matrix M is created as the memory element, each of its elements set to a random number in the range [0, 1].
Step 3: calculate the initial prototypes. The mean of the support sample features within each category is computed as the initial prototype; the initial prototype of the c-th class is denoted $p_c$.
Step 4: perform the read and write operations of the memory element M. The read and write operations are completed by matching the initial prototype $p_c$ against the elements of the memory matrix M.
Step 5: linear synthesis of the corrected prototype. Based on the read weight vector, the matrix element matched to the initial prototype $p_c$ is obtained and denoted $r_c$; the corrected prototype of the c-th class is the linear weighted sum of $p_c$ and $r_c$.
Step 6: calculate the training loss function. Similarity scores between each query image sample feature and each class's corrected prototype are computed with the Euclidean distance and converted by a softmax function into the probability that the sample belongs to the c-th support class. The cross-entropy loss between the probability outputs and the true label values is computed, and the parameters $\theta$ of the backbone network are optimized with a gradient descent algorithm to complete meta-training.
Step 7: complete the meta-test process. With the backbone network $f_\theta$ fixed after meta-training, the features of the test support samples and query image samples are extracted; the features of the k-th support sample and the q-th query sample are denoted $e_k^c$ and $\tilde{e}_q$, respectively. The initial prototype of each support class is corrected using steps 3 and 4, the similarity between each query sample feature and each class is computed, and the query sample is assigned to the most similar class.
Claims (8)
1. A small sample image classification method based on a memory enhanced prototype network, characterized by comprising the following steps:
step 1, processing an input image data set, wherein the input image data set is denoted I and is randomly divided into two image subsets: a meta-training image data set $I_{train}$ and a meta-test image data set $I_{test}$; from $I_{train}$ and $I_{test}$, images of C categories are randomly extracted, K image samples are randomly drawn within each category as support samples and Q image samples are drawn from the remaining images as query samples, completing the construction of C-way K-shot classification tasks on the meta-training and meta-test sets;
step 2, extracting features of the image samples and initializing the memory element during meta-training; the training support sample set and query sample set are input into the convolutional neural network $f_\theta$ with parameters $\theta$ to extract features, the features of the k-th training support sample and the q-th query sample image being denoted $e_k^c = f_\theta(x_k^c)$ and $\tilde{e}_q = f_\theta(\tilde{x}_q)$; a blank matrix is created as the memory element M, each of its elements set to a random number in the range [0, 1];
step 3, calculating initial prototypes; the mean of the support sample features within each category is computed as the initial prototype, the initial prototype of the c-th class being denoted $p_c$;
step 4, performing the read operation and the write operation of the memory element M; the read and write operations are completed by matching the initial prototype $p_c$ against the elements of the memory matrix M;
step 5, linear synthesis of the corrected prototype; based on the read weight vector, the matrix element matched to the initial prototype $p_c$ is obtained and denoted $r_c$, and the corrected prototype of the c-th class is the linear weighted sum of $p_c$ and $r_c$;
step 6, calculating the training loss function; similarity scores between each query image sample feature and each class's corrected prototype are computed with the Euclidean distance and converted by a softmax function into the probability that the sample belongs to the c-th support class; the cross-entropy loss between the probability outputs and the true label values is computed, and the parameters $\theta$ of the backbone network are optimized with a gradient descent algorithm to complete meta-training;
step 7, completing the meta-test process; with the backbone network $f_\theta$ fixed after meta-training, the features of the test support samples and query image samples are extracted, the features of the k-th support sample and the q-th query sample being denoted $e_k^c$ and $\tilde{e}_q$; the initial prototype of each support class is corrected using steps 3 and 4, the similarity between each query sample feature and each class is computed, and the query sample is assigned to the most similar class.
2. The small sample image classification method based on a memory enhanced prototype network according to claim 1, characterized in that step 1 is detailed as follows:
(1) the input image data set is denoted I and is randomly divided into two image subsets: the meta-training image data set $I_{train}$ and the meta-test image data set $I_{test}$;
(2) from $I_{train}$, images of C categories are randomly extracted; within each category, K image samples are randomly drawn as support samples and Q image samples are drawn from the remaining images as query samples, completing the construction of a C-way K-shot classification task on the meta-training data set; the set of support image samples of the c-th class is $S_c=\{(x_k^c, y_k^c)\}_{k=1}^{K}$, where $x_k^c$ is the k-th support sample image and $y_k^c$ its category label, and the set of query image samples of the c-th class is $Q_c=\{(\tilde{x}_q^c, \tilde{y}_q^c)\}_{q=1}^{Q}$, where $\tilde{x}_q^c$ is the q-th query sample image and $\tilde{y}_q^c$ its category label;
(3) from $I_{test}$, images of C categories are randomly extracted in the same way, completing the construction of a C-way K-shot classification task on the meta-test data set, with the support and query sets of the c-th class denoted analogously.
3. The small sample image classification method based on a memory enhanced prototype network according to claim 1, characterized in that step 2 is detailed as follows:
(1) the backbone network of the model is a convolutional neural network $f_\theta$ with parameters $\theta$; the k-th training support sample is input into the backbone network to obtain its D-dimensional feature $e_k^c = f_\theta(x_k^c) \in \mathbb{R}^D$, and the q-th query sample image is input into the backbone network to obtain its D-dimensional feature $\tilde{e}_q = f_\theta(\tilde{x}_q) \in \mathbb{R}^D$;
(2) an empty matrix M is created as the memory element, each element of the matrix being a D-dimensional vector of the same dimension as the image sample features, the j-th element being M(j), initialized with uniform random values in the range [0, 1]:
$$M(j) \sim \mathcal{U}[0, 1]^D$$
4. The small sample image classification method based on a memory enhanced prototype network according to claim 1, characterized in that step 3 is detailed as follows:
(1) the set of support image sample features of the c-th class is $\{e_k^c\}_{k=1}^{K}$, and the initial prototype feature of the c-th training support sample set is computed as:
$$p_c = \frac{1}{K}\sum_{k=1}^{K} e_k^c$$
(2) the initial prototype features of the C training sample sets are computed in turn according to this formula.
5. The small sample image classification method based on a memory enhanced prototype network according to claim 1, characterized in that step 4 is detailed as follows:
(1) the similarity between the initial prototype $p_c$ of the c-th class and the j-th matrix element is computed to obtain the read weight vector $w_c(j)$, expressed as a softmax-normalized similarity $\mathrm{sim}(\cdot,\cdot)$:
$$w_c(j) = \frac{\exp(\mathrm{sim}(p_c, M(j)))}{\sum_{j'}\exp(\mathrm{sim}(p_c, M(j')))}$$
(2) the initial prototype $p_c$ of the c-th class is matched against the j-th element of the memory matrix M, and the matched memory element is updated to complete the write operation W, e.g.:
$$M(j^*) \leftarrow \tfrac{1}{2}\big(M(j^*) + p_c\big), \quad j^* = \arg\max_j w_c(j)$$
where W denotes the write operation; an average calculation may be employed, or another update rule substituted.
6. The small sample image classification method based on a memory enhanced prototype network according to claim 1, characterized in that step 5 is detailed as follows:
based on the read weight vector, the matrix element of the memory device matched to the initial prototype $p_c$ is obtained and denoted $r_c$; the corrected prototype of class c is then the linear weighted sum of $p_c$ and $r_c$:
$$\hat{p}_c = \alpha\, p_c + (1-\alpha)\, r_c$$
where $\alpha$ is an adjustable parameter.
7. The small sample image classification method based on a memory enhanced prototype network according to claim 1, characterized in that step 6 is detailed as follows:
(1) the query image samples are input into the backbone network $f_\theta$; the D-dimensional feature of the q-th query sample is $\tilde{e}_q = f_\theta(\tilde{x}_q) \in \mathbb{R}^D$;
(2) the similarity between the q-th query sample feature and the c-th corrected prototype $\hat{p}_c$ is computed as $s_{q,c} = -d(\tilde{e}_q, \hat{p}_c)$, where $d(\cdot,\cdot)$ denotes the Euclidean distance function;
(3) the similarity values are converted into probability outputs with the softmax function:
$$P(y=c \mid \tilde{x}_q) = \frac{\exp(-d(\tilde{e}_q, \hat{p}_c))}{\sum_{c'=1}^{C}\exp(-d(\tilde{e}_q, \hat{p}_{c'}))}$$
(4) the cross-entropy loss between the probability outputs and the true label values $\tilde{y}_q$ is computed as:
$$L(\theta) = -\frac{1}{CQ}\sum_{q}\sum_{c=1}^{C}\mathbb{1}[\tilde{y}_q = c]\,\log P(y=c \mid \tilde{x}_q)$$
(5) the iterative formula for optimizing the parameters $\theta$ of the network is:
$$\theta \leftarrow \theta - \beta\,\nabla_\theta L(\theta)$$
where $\beta$ is the learning rate.
8. The small sample image classification method based on a memory enhanced prototype network according to claim 1, characterized in that step 7 is detailed as follows:
(1) with the backbone network $f_\theta$ and its parameters $\theta$ fixed after meta-training, the test support image sample set and the query image sample set are input into the backbone network to extract features; the k-th test support sample yields the D-dimensional feature $e_k^c = f_\theta(x_k^c)$, and the q-th query sample image yields the D-dimensional feature $\tilde{e}_q = f_\theta(\tilde{x}_q)$;
(2) the initial prototype feature of the c-th test support sample set is computed as:
$$p_c = \frac{1}{K}\sum_{k=1}^{K} e_k^c$$
(3) the similarity between the initial prototype $p_c$ of the c-th class and the j-th matrix element is computed to obtain the read weight vector $w_c(j)$; the matrix element matched to the initial prototype, determined from the read weights, is denoted $r_c$, and the corrected prototype of the c-th test support sample set is then:
$$\hat{p}_c = \alpha\, p_c + (1-\alpha)\, r_c$$
where $\alpha$ is an adjustable parameter;
(4) the similarity between the q-th query sample feature and the c-th corrected prototype $\hat{p}_c$ is $-d(\tilde{e}_q, \hat{p}_c)$, where $d(\cdot,\cdot)$ denotes the Euclidean distance function;
(5) the similarity values are converted into probability outputs with the softmax function, and each query sample is assigned to the class with the highest probability:
$$\hat{y}_q = \arg\max_c \frac{\exp(-d(\tilde{e}_q, \hat{p}_c))}{\sum_{c'}\exp(-d(\tilde{e}_q, \hat{p}_{c'}))}$$
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210105376.3A CN114444600A (en) | 2022-01-28 | 2022-01-28 | Small sample image classification method based on memory enhanced prototype network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114444600A true CN114444600A (en) | 2022-05-06 |
Family
ID=81370238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210105376.3A Withdrawn CN114444600A (en) | 2022-01-28 | 2022-01-28 | Small sample image classification method based on memory enhanced prototype network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114444600A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115984621A (en) * | 2023-01-09 | 2023-04-18 | 宁波拾烨智能科技有限公司 | Small sample remote sensing image classification method based on restrictive prototype comparison network |
CN115984621B (en) * | 2023-01-09 | 2023-07-11 | 宁波拾烨智能科技有限公司 | Small sample remote sensing image classification method based on restrictive prototype comparison network |
CN115830401A (en) * | 2023-02-14 | 2023-03-21 | 泉州装备制造研究所 | Small sample image classification method |
CN115830401B (en) * | 2023-02-14 | 2023-05-09 | 泉州装备制造研究所 | Small sample image classification method |
CN116168255A (en) * | 2023-04-10 | 2023-05-26 | 武汉大学人民医院(湖北省人民医院) | Retina OCT (optical coherence tomography) image classification method with robust long tail distribution |
CN116168255B (en) * | 2023-04-10 | 2023-12-08 | 武汉大学人民医院(湖北省人民医院) | Retina OCT (optical coherence tomography) image classification method with robust long tail distribution |
CN116168257A (en) * | 2023-04-23 | 2023-05-26 | 安徽大学 | Small sample image classification method, device and storage medium based on sample generation |
CN116168257B (en) * | 2023-04-23 | 2023-07-04 | 安徽大学 | Small sample image classification method, device and storage medium based on sample generation |
CN116521875A (en) * | 2023-05-09 | 2023-08-01 | 江南大学 | Prototype enhanced small sample dialogue emotion recognition method for introducing group emotion infection |
CN116521875B (en) * | 2023-05-09 | 2023-10-31 | 江南大学 | Prototype enhanced small sample dialogue emotion recognition method for introducing group emotion infection |
CN116563638A (en) * | 2023-05-19 | 2023-08-08 | 广东石油化工学院 | Image classification model optimization method and system based on scene memory |
CN116563638B (en) * | 2023-05-19 | 2023-12-05 | 广东石油化工学院 | Image classification model optimization method and system based on scene memory |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107480261B (en) | Fine-grained face image fast retrieval method based on deep learning | |
CN114444600A (en) | Small sample image classification method based on memory enhanced prototype network | |
CN110298037B (en) | Convolutional neural network matching text recognition method based on enhanced attention mechanism | |
CN111695467B (en) | Spatial spectrum full convolution hyperspectral image classification method based on super-pixel sample expansion | |
CN110321967B (en) | Image classification improvement method based on convolutional neural network | |
CN111125411B (en) | Large-scale image retrieval method for deep strong correlation hash learning | |
CN110942091B (en) | Semi-supervised few-sample image classification method for searching reliable abnormal data center | |
CN113657450B (en) | Attention mechanism-based land battlefield image-text cross-modal retrieval method and system | |
CN109063112B (en) | Rapid image retrieval method, model and model construction method based on multitask learning deep semantic hash | |
CN114241273B (en) | Multi-modal image processing method and system based on Transformer network and hypersphere space learning | |
CN108875933B (en) | Over-limit learning machine classification method and system for unsupervised sparse parameter learning | |
Hasan | An application of pre-trained CNN for image classification | |
CN112733866A (en) | Network construction method for improving text description correctness of controllable image | |
CN105894050A (en) | Multi-task learning based method for recognizing race and gender through human face image | |
CN112949740B (en) | Small sample image classification method based on multilevel measurement | |
Bansal et al. | mRMR-PSO: a hybrid feature selection technique with a multiobjective approach for sign language recognition | |
CN112232395B (en) | Semi-supervised image classification method for generating countermeasure network based on joint training | |
Ren et al. | Convolutional neural network based on principal component analysis initialization for image classification | |
CN114299362A (en) | Small sample image classification method based on k-means clustering | |
CN110991500A (en) | Small sample multi-classification method based on nested integrated depth support vector machine | |
CN116110089A (en) | Facial expression recognition method based on depth self-adaptive metric learning | |
CN114579794A (en) | Multi-scale fusion landmark image retrieval method and system based on feature consistency suggestion | |
CN113032613B (en) | Three-dimensional model retrieval method based on interactive attention convolution neural network | |
Xu et al. | Shape retrieval using deep autoencoder learning representation | |
CN113095229B (en) | Self-adaptive pedestrian re-identification system and method for unsupervised domain |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20220506 |