CN114092747A - Small sample image classification method based on depth element metric model mutual learning - Google Patents
Small sample image classification method based on depth element metric model mutual learning
- Publication number
- CN114092747A (application number CN202111440323.9A)
- Authority
- CN
- China
- Prior art keywords
- depth
- meta
- model
- sample
- samples
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
Abstract
The invention discloses a small sample image classification method based on mutual learning between deep meta-metric models. The method comprises two meta-metric models with different parameters; each model predicts the query samples and provides a regularization term for the other network, the regularization term being the KL divergence between the outputs of the two models. The method can be fused with a deep meta-metric model of any architecture, which avoids the overfitting problem and improves the generalization performance of the extracted features; moreover, the mutual learning technique pulls the classification decision of either deep meta-metric model closer to the optimal classification decision boundary.
Description
Technical Field
The invention belongs to the field of small sample image classification, and particularly relates to a small sample image classification method based on mutual learning of deep meta-metric models.
Background
With the emergence of big data and the rapid development of computer hardware, deep learning has made breakthrough progress in image classification. On large-scale benchmark databases such as ImageNet, classification models based on deep convolutional neural networks have even reached the human recognition level. However, this great success of deep learning relies entirely on large-scale data, which severely limits its application in many scenarios, because collecting a large amount of labelled data is very difficult and labor-intensive, and sometimes even impossible, for example when collecting medical data on rare diseases or multi-user manual annotation data. In contrast, human beings need only a few images to recognize a large number of objects, and can quickly understand and generalize new concepts. This highly efficient human learning ability has directly motivated researchers to study the small sample image classification problem extensively.
The small sample image classification task is to make a classification decision for a test image when only very few samples of each category are available. Deep meta-learning is a popular learning paradigm for this problem. Deep meta-metric models offer high training efficiency and good classification performance, and are currently the most effective approach to small sample image classification. The basic idea is to project image samples into a feature space with a deep neural network, compute sample similarities in that space, and assign similar samples to the same category. Classical models include the Matching Networks proposed by Vinyals et al. (Vinyals O, Blundell C, Lillicrap T, et al. Matching networks for one shot learning [C]// Proceedings of the 30th Annual Conference on Neural Information Processing Systems. Barcelona, Spain: NIPS, 2016: 1-8.), an end-to-end differentiable KNN network that extracts features with a bidirectional LSTM for the support sample set and an attention LSTM for the query sample set, the final classifier output being a weighted sum of predictions between the support and query sets; and the Prototypical Networks proposed by Snell et al. (Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning [C]// Proceedings of the 31st Annual Conference on Neural Information Processing Systems. Long Beach, CA, USA: NIPS, 2017: 4077-4087.), which assume that the embeddings of each class cluster around a prototype, take the mean of the support sample set in the embedding space as that prototype, and reduce classification to nearest-neighbour search in the embedding space.
These network models use fixed similarity measures, such as cosine similarity and Euclidean distance, so learning is confined to the feature embedding. Sung et al. (Sung F, Yang Y, Zhang L, et al. Learning to compare: Relation network for few-shot learning [C]// Proceedings of the 31st IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE Press, 2018: 1199-1208.) instead propose the Relation Network, which replaces the fixed metric with a learnable comparison module that scores the similarity between query samples and class representations.
Extracting effective features is a key step in image classification, and how to improve the feature characterization capability of deep meta-metric models has attracted much attention from scholars. For example, Gidaris et al. (Gidaris S, Bursuc A, Komodakis N, et al. Boosting few-shot visual learning with self-supervision [C]// Proceedings of the 17th International Conference on Computer Vision. Piscataway, NJ, USA: IEEE Press, 2019: 8059-8068.) add self-supervised auxiliary tasks to few-shot learning to strengthen the learned representation; Li et al. (Li H, Eigen D, Dodge S, et al. Finding task-relevant features for few-shot learning by category traversal [C]// Proceedings of the 32nd IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE Press, 2019: 1-10.) consider information between categories and propose a category traversal module consisting of a concentrator and a projector, where the concentrator extracts the common features within each category and the projector removes irrelevant features; Simon et al. (Simon C, Koniusz P, Nock R, Harandi M. Adaptive subspaces for few-shot learning [C]// Proceedings of the 33rd IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE Press, 2020: 4135-.) model each class with an adaptive subspace in the embedding space; Li et al. (Li A, Huang W, Lan X, Feng J, Li Z, Wang L. Boosting few-shot learning with adaptive margin loss [C]// Proceedings of the 33rd IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ, USA: IEEE Press, 2020: 12573-.) introduce an adaptive margin loss to better separate class embeddings; and Wu et al. (Wu F, Smith J S, Lu W, et al. Attentive prototype few-shot learning with capsule network-based embedding [C]// Proceedings of the 16th European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 237-253.) use a capsule network to encode the relative spatial relationships between features, together with a novel triplet loss that enhances the semantic features of the embedding network, so that similar samples are drawn closer together and samples of different classes are pushed farther apart.
Although the above methods achieve good results in small sample image classification, the key difficulty in this task remains that the number of training samples is too small to characterize the distribution of each class of images. These methods sample a large number of similar tasks for meta-training, and the commonly used network structures are kept relatively simple to avoid overfitting. To improve representational capability, they have gradually adopted more complex network structures as the base learner in the meta-training process. However, as network complexity increases, the search space of the network parameters also expands, which easily leads to overfitting.
Disclosure of Invention
The invention aims to provide a small sample image classification method based on deep meta-metric model mutual learning, so as to solve the technical problem that overfitting occurs when deep meta-metric methods use backbone networks with complex structures to extract features.
In order to solve the technical problems, the specific technical scheme of the invention is as follows:
a small sample image classification method based on depth element metric model mutual learning comprises the following steps:
Further, step 1 specifically comprises the following steps:
step 1.1, for a given data set D, it is divided into three subsets, namely the meta-training set D_train, the meta-validation set D_val and the meta-test set D_test, where the classification categories of the subsets are mutually disjoint;
step 1.2, from D_train, N classes are randomly extracted, and K samples are randomly extracted from each class to obtain the support sample set S_train; a batch of samples is then extracted from the remaining data of the N classes to obtain the query sample set Q_train.
Further, the outputs of the deep meta-metric models I1 and I2 described in step 2 are calculated as follows:
step 2.1, for the meta-training support sample set S_train, the prototypes P_n^1 and P_n^2 of the nth class of support samples are calculated as:
P_n^1 = (1/K) Σ_{i=1}^{K} f_{φ1}(x_i^n), P_n^2 = (1/K) Σ_{i=1}^{K} f_{φ2}(x_i^n);
step 2.2, for the meta-training query sample set Q_train, the similarity between the jth query sample and the nth prototype is calculated as:
s_{j,n}^1 = g_{θ1}(f_{φ1}(x_j), P_n^1), s_{j,n}^2 = g_{θ2}(f_{φ2}(x_j), P_n^2);
step 2.3, in the meta-training stage, the output values of the deep meta-metric models I1 and I2 are calculated as:
p_1(y = n | x_j) = exp(s_{j,n}^1) / Σ_{m=1}^{N} exp(s_{j,m}^1), p_2(y = n | x_j) = exp(s_{j,n}^2) / Σ_{m=1}^{N} exp(s_{j,m}^2).
Further, the total loss functions L_1 and L_2 of the deep meta-metric models I1 and I2 described in step 3 are calculated as follows:
step 3.1, the cross-entropy loss functions L_CE1 and L_CE2 of the deep meta-metric models I1 and I2 are calculated as:
L_CE1 = -(1/|Q_train|) Σ_j Σ_{n=1}^{N} 1(y_j = n) log p_1(y = n | x_j), L_CE2 = -(1/|Q_train|) Σ_j Σ_{n=1}^{N} 1(y_j = n) log p_2(y = n | x_j);
step 3.2, the KL divergence from the output of model I2 to the output of model I1 is calculated as:
D_KL(p_2 || p_1) = (1/|Q_train|) Σ_j Σ_{n=1}^{N} p_2(y = n | x_j) log ( p_2(y = n | x_j) / p_1(y = n | x_j) );
the KL divergence from the output of model I1 to the output of model I2 is calculated as:
D_KL(p_1 || p_2) = (1/|Q_train|) Σ_j Σ_{n=1}^{N} p_1(y = n | x_j) log ( p_1(y = n | x_j) / p_2(y = n | x_j) );
step 3.3, the total loss function L_1 of model I1 and the total loss function L_2 of model I2 are calculated as:
L_1 = L_CE1 + D_KL(p_2 || p_1), L_2 = L_CE2 + D_KL(p_1 || p_2).
Further, the iterative formula of the optimization calculation described in step 4 is:
φ_1 ← φ_1 - η ∂L_1/∂φ_1, θ_1 ← θ_1 - η ∂L_1/∂θ_1;
φ_2 ← φ_2 - η ∂L_2/∂φ_2, θ_2 ← θ_2 - η ∂L_2/∂θ_2,
where η denotes the learning rate.
Further, the meta-test procedure described in step 5 is as follows:
step 5.1, using the feature extraction modules of the trained deep meta-metric models I1 and I2, features are extracted from the support samples x_i^n and the query samples x_j to obtain f_{φ1}(x_i^n), f_{φ1}(x_j) and f_{φ2}(x_i^n), f_{φ2}(x_j);
step 5.2, for the meta-test support sample set S_test, the prototypes P_n^1 and P_n^2 of the nth class of support samples are calculated as:
P_n^1 = (1/K) Σ_{i=1}^{K} f_{φ1}(x_i^n), P_n^2 = (1/K) Σ_{i=1}^{K} f_{φ2}(x_i^n);
step 5.3, for the meta-test query sample set Q_test, the similarity between the jth query sample and the nth prototype is calculated as:
s_{j,n}^1 = g_{θ1}(f_{φ1}(x_j), P_n^1), s_{j,n}^2 = g_{θ2}(f_{φ2}(x_j), P_n^2);
step 5.4, the output category of the query sample under test is finally obtained, i.e. the output values of the deep meta-metric models I1 and I2 are calculated as:
p_1(y = n | x_j) = exp(s_{j,n}^1) / Σ_{m=1}^{N} exp(s_{j,m}^1), p_2(y = n | x_j) = exp(s_{j,n}^2) / Σ_{m=1}^{N} exp(s_{j,m}^2).
The small sample image classification method based on deep meta-metric model mutual learning has the following advantages:
1. The invention randomly initializes two deep meta-metric models, and the mutual learning method can be fused with a deep meta-metric model of any architecture. The KL divergence between the outputs of the two meta-metric models provides a regularization term, which avoids overfitting of the meta-metric models during learning and improves the generalization performance of the extracted features.
2. The KL divergence between the outputs of the two meta-metric models further pulls the classification decision of either deep meta-metric model towards the optimal classification decision boundary.
Drawings
FIG. 1 is a flowchart of the small sample image classification method based on deep meta-metric model mutual learning according to the present invention;
Detailed Description
In order to better understand the purpose, structure and function of the present invention, the small sample image classification method based on deep meta-metric model mutual learning according to the present invention is described in further detail below with reference to the accompanying drawings.
As shown in FIG. 1, the small sample image classification method based on deep meta-metric model mutual learning comprises the following steps, which are described in detail below:
step 1.1, the given data set D may be any small sample image classification data set, such as the MiniImageNet data set or the Caltech-UCSD Birds-200-2011 (CUB) data set. The former contains 100 classes with 600 pictures per class, 60000 colour pictures in total, each of size 84×84; 64 classes, 16 classes and 20 classes are used for meta-training, meta-validation and meta-testing, respectively. The latter provides image data for 200 different bird species, 11788 pictures in total; each image carries 1 bounding box, 15 part keypoints and 312 annotated attributes, and 100 classes, 50 classes and 50 classes are used for meta-training, meta-validation and meta-testing, respectively.
step 1.2, for the given data set D, it is divided into three subsets, namely the meta-training set D_train, the meta-validation set D_val and the meta-test set D_test, whose classification categories are mutually disjoint, i.e. D_train ∪ D_val ∪ D_test = D;
step 1.3, from D_train, N classes are randomly extracted, and K samples are randomly extracted from each class to obtain the support sample set S_train, where S_n denotes the support sample set of the nth class, i.e. S_n = {(x_i^n, y_i^n)}_{i=1}^{K}, x_i^n denotes the ith support sample and y_i^n its corresponding label; a batch of samples is then extracted from the remaining data of the N classes to obtain the query sample set Q_train = {(x_j, y_j)}, where x_j denotes the jth query sample and y_j its corresponding label.
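The episode construction of this step can be sketched in a few lines of NumPy; the dictionary layout of the data and the `sample_episode` helper are illustrative assumptions, not part of the patent:

```python
import numpy as np

def sample_episode(data_by_class, n_way, k_shot, n_query, rng):
    """Sample one N-way K-shot task: a support set and a query set.

    data_by_class: dict mapping class label -> array of samples (assumed layout).
    """
    classes = rng.choice(sorted(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        samples = data_by_class[c]
        idx = rng.permutation(len(samples))
        # K support samples per class; queries come from the remaining data
        support += [(samples[i], episode_label) for i in idx[:k_shot]]
        query += [(samples[i], episode_label) for i in idx[k_shot:k_shot + n_query]]
    return support, query

# toy data: 10 classes with 20 four-dimensional "images" each
rng = np.random.default_rng(0)
data = {c: rng.normal(c, 0.1, size=(20, 4)) for c in range(10)}
s, q = sample_episode(data, n_way=5, k_shot=1, n_query=15, rng=rng)
print(len(s), len(q))  # 5 support samples, 75 query samples
```

Class labels are remapped to 0..N-1 inside the episode, since each task is treated as a fresh N-class problem.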
The outputs of the deep meta-metric models I1 and I2 are calculated as follows:
step 2.1, for the meta-training support sample set S_train, the prototypes P_n^1 and P_n^2 of the nth class of support samples are calculated as:
P_n^1 = (1/K) Σ_{i=1}^{K} f_{φ1}(x_i^n), P_n^2 = (1/K) Σ_{i=1}^{K} f_{φ2}(x_i^n);
step 2.2, for the meta-training query sample set Q_train, the similarity between the jth query sample and the nth prototype is calculated as:
s_{j,n}^1 = g_{θ1}(f_{φ1}(x_j), P_n^1), s_{j,n}^2 = g_{θ2}(f_{φ2}(x_j), P_n^2);
step 2.3, in the meta-training stage, the output values of the deep meta-metric models I1 and I2 are calculated as:
p_1(y = n | x_j) = exp(s_{j,n}^1) / Σ_{m=1}^{N} exp(s_{j,m}^1), p_2(y = n | x_j) = exp(s_{j,n}^2) / Σ_{m=1}^{N} exp(s_{j,m}^2).
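Steps 2.1 to 2.3 can be illustrated with a small NumPy sketch; the feature arrays stand in for the outputs of the feature extraction modules, and the negative squared Euclidean distance used as the similarity is an assumed choice for the metric module:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def model_output(support_feats, query_feats):
    """support_feats: (N, K, d) features of the K support samples per class;
    query_feats: (M, d) features of the query samples.
    Returns (M, N) class probabilities, following steps 2.1 to 2.3."""
    prototypes = support_feats.mean(axis=1)                        # step 2.1: class prototypes
    d2 = ((query_feats[:, None, :] - prototypes[None]) ** 2).sum(-1)
    similarity = -d2                                               # step 2.2 (assumed metric)
    return softmax(similarity, axis=1)                             # step 2.3: Softmax output

rng = np.random.default_rng(1)
sup = rng.normal(size=(5, 3, 8))                     # 5-way 3-shot, 8-dim features
qry = sup[:, 0, :] + 0.01 * rng.normal(size=(5, 8))  # one query near each class
p = model_output(sup, qry)
print(p.shape)  # (5, 5); each row sums to 1
```

In the patent's mutual learning setup this computation is carried out twice, once per model, on the same episode.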
The total loss functions L_1 and L_2 of the deep meta-metric models I1 and I2 are calculated as follows:
step 3.1, the cross-entropy loss functions L_CE1 and L_CE2 of the deep meta-metric models I1 and I2 are calculated as:
L_CE1 = -(1/|Q_train|) Σ_j Σ_{n=1}^{N} 1(y_j = n) log p_1(y = n | x_j), L_CE2 = -(1/|Q_train|) Σ_j Σ_{n=1}^{N} 1(y_j = n) log p_2(y = n | x_j);
step 3.2, the KL divergence from the output of model I2 to the output of model I1 is calculated as:
D_KL(p_2 || p_1) = (1/|Q_train|) Σ_j Σ_{n=1}^{N} p_2(y = n | x_j) log ( p_2(y = n | x_j) / p_1(y = n | x_j) );
the KL divergence from the output of model I1 to the output of model I2 is calculated as:
D_KL(p_1 || p_2) = (1/|Q_train|) Σ_j Σ_{n=1}^{N} p_1(y = n | x_j) log ( p_1(y = n | x_j) / p_2(y = n | x_j) );
step 3.3, the total loss function L_1 of model I1 and the total loss function L_2 of model I2 are calculated as:
L_1 = L_CE1 + D_KL(p_2 || p_1), L_2 = L_CE2 + D_KL(p_1 || p_2).
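Steps 3.1 to 3.3, cross entropy plus the KL-divergence regularization term exchanged between the two models, can be sketched as follows; the averaging over query samples and the epsilon smoothing are implementation assumptions:

```python
import numpy as np

def cross_entropy(p, labels):
    """Step 3.1: mean cross entropy of predicted probabilities p (M, N)
    against integer labels (M,)."""
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def kl_divergence(p_a, p_b):
    """Step 3.2: D_KL(p_a || p_b), averaged over the query samples."""
    return np.mean(np.sum(p_a * np.log((p_a + 1e-12) / (p_b + 1e-12)), axis=1))

def total_losses(p1, p2, labels):
    """Step 3.3: each model's total loss is its own cross entropy
    plus the KL divergence from the peer model's output to its own."""
    l1 = cross_entropy(p1, labels) + kl_divergence(p2, p1)
    l2 = cross_entropy(p2, labels) + kl_divergence(p1, p2)
    return l1, l2

p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])  # toy outputs of model I1
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]])  # toy outputs of model I2
l1, l2 = total_losses(p1, p2, labels=np.array([0, 1]))
print(l1 > 0 and l2 > 0)  # prints True
```

The KL term vanishes when the two outputs agree, so each model is only regularized while its peer disagrees with it.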
and 4, respectively optimizing the two models by using a gradient descent algorithm according to the loss function to complete the meta-training process.
The iterative formula of the optimization calculation is:
φ_1 ← φ_1 - η ∂L_1/∂φ_1, θ_1 ← θ_1 - η ∂L_1/∂θ_1;
φ_2 ← φ_2 - η ∂L_2/∂φ_2, θ_2 ← θ_2 - η ∂L_2/∂θ_2,
where η denotes the learning rate.
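The gradient descent update of step 4 can be illustrated with a one-parameter toy sketch; the learning rate and the numerical-gradient helper are illustrative assumptions, since in practice the gradients of the total losses would come from backpropagation:

```python
def numerical_grad(loss_fn, w, eps=1e-6):
    # central-difference approximation of dL/dw for a scalar parameter
    return (loss_fn(w + eps) - loss_fn(w - eps)) / (2 * eps)

def sgd(loss_fn, w, lr=0.1, steps=100):
    """Iterate w <- w - lr * dL/dw, the update form used in meta-training."""
    for _ in range(steps):
        w -= lr * numerical_grad(loss_fn, w)
    return w

# toy loss with its minimum at w = 3; gradient descent converges towards it
w_star = sgd(lambda w: (w - 3.0) ** 2, w=0.0)
print(round(w_star, 3))
```

In the patent each of the two models runs this update on its own total loss, so the peer's output enters only through the KL term of that loss.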
And step 5, in the meta-test stage, a number of N-way K-shot classification tasks are constructed in the same way; that is, in each classification task, N classes are randomly extracted from D_test, and K samples are randomly extracted from each class to obtain the support sample set S_test (abbreviated as the support set), where S_n denotes the support sample set of the nth class in the test set, i.e. S_n = {(x_i^n, y_i^n)}_{i=1}^{K}, x_i^n denotes the ith support sample and y_i^n its corresponding label; a batch of samples is then extracted from the remaining data of the N classes to obtain the query sample set Q_test = {(x_j, y_j)}, where x_j denotes the jth test query sample and y_j its corresponding label. The trained deep meta-metric models I1 and I2 are then tested on the meta-test set respectively, yielding for the jth query sample the probability output values p_1(y = n | x_j) and p_2(y = n | x_j) of belonging to the nth class.
The meta-test procedure is as follows:
step 5.1, using the feature extraction modules of the trained deep meta-metric models I1 and I2, features are extracted from the support samples x_i^n and the query samples x_j to obtain f_{φ1}(x_i^n), f_{φ1}(x_j) and f_{φ2}(x_i^n), f_{φ2}(x_j);
step 5.2, for the meta-test support sample set S_test, the prototypes P_n^1 and P_n^2 of the nth class of support samples are calculated as:
P_n^1 = (1/K) Σ_{i=1}^{K} f_{φ1}(x_i^n), P_n^2 = (1/K) Σ_{i=1}^{K} f_{φ2}(x_i^n);
step 5.3, for the meta-test query sample set Q_test, the similarity between the jth query sample and the nth prototype is calculated as:
s_{j,n}^1 = g_{θ1}(f_{φ1}(x_j), P_n^1), s_{j,n}^2 = g_{θ2}(f_{φ2}(x_j), P_n^2);
step 5.4, the output category of the query sample under test is finally obtained, i.e. the output values of the deep meta-metric models I1 and I2 are calculated as:
p_1(y = n | x_j) = exp(s_{j,n}^1) / Σ_{m=1}^{N} exp(s_{j,m}^1), p_2(y = n | x_j) = exp(s_{j,n}^2) / Σ_{m=1}^{N} exp(s_{j,m}^2).
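The meta-test decision of steps 5.1 to 5.4 can be sketched as follows; the toy feature spaces and the negative squared Euclidean similarity are assumptions, and each trained model reports its own probability output, as in the text:

```python
import numpy as np

def predict(support_feats, query_feats):
    """Steps 5.2 to 5.4 for one trained model: prototype, similarity, Softmax.
    A negative squared Euclidean distance similarity is assumed."""
    prototypes = support_feats.mean(axis=1)                                  # step 5.2
    logits = -((query_feats[:, None, :] - prototypes[None]) ** 2).sum(-1)    # step 5.3
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)                                  # step 5.4

rng = np.random.default_rng(2)
centers = np.arange(4, dtype=float)[:, None] * np.ones(6)  # 4 well-separated classes
# each "trained" model yields its own (here: toy) feature space for the same episode
sup1 = centers[:, None, :] + 0.05 * rng.normal(size=(4, 5, 6))
sup2 = 2.0 * centers[:, None, :] + 0.05 * rng.normal(size=(4, 5, 6))
qry = centers + 0.05 * rng.normal(size=(4, 6))             # one query per class
p1, p2 = predict(sup1, qry), predict(sup2, 2.0 * qry)
print(p1.argmax(axis=1), p2.argmax(axis=1))  # both models assign each query to its class
```

The predicted class of a query sample is the argmax over n of the probability output of the chosen model.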
it is to be understood that the present invention has been described with reference to certain embodiments, and that various changes in the features and embodiments, or equivalent substitutions may be made therein by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.
Claims (6)
1. A small sample image classification method based on deep meta-metric model mutual learning, characterized by comprising the following steps:
step 1, for a given data set D, constructing a classification task on each episode; the classification task is that a classification model distinguishes N classes using K support samples of each class; each classification task is composed of a support sample set S_train and a query sample set Q_train, wherein S_n denotes the support sample set of the nth class, i.e. S_n = {(x_i^n, y_i^n)}_{i=1}^{K}, x_i^n denotes the ith support sample and y_i^n its corresponding label; x_j denotes the jth query sample and y_j its corresponding label;
step 2, randomly initializing two deep meta-metric models for mutual learning, wherein each deep meta-metric model comprises a feature extraction module f_φ with parameter φ and a similarity measurement module g_θ with parameter θ; the two deep meta-metric models for mutual learning are then denoted I1 = {f_{φ1}, g_{θ1}} and I2 = {f_{φ2}, g_{θ2}}, respectively; in each classification task, the support sample set S_train and the query sample set Q_train are respectively input into the deep meta-metric models I1 and I2; the ith support sample image x_i and the jth query sample image x_j input into model I1 yield, after the feature extraction module, the features f_{φ1}(x_i) and f_{φ1}(x_j); input into model I2, they yield the features f_{φ2}(x_i) and f_{φ2}(x_j); the prototypes corresponding to the K support samples of each class are then calculated, the nth-class prototypes of the two models being denoted P_n^1 and P_n^2; then the similarity between each query sample and each class prototype is calculated, the probability value that the query sample belongs to the nth class is calculated using the Softmax function, and the outputs p_1(y = n | x_j) and p_2(y = n | x_j) of the two models are obtained;
step 3, respectively calculating the cross-entropy loss functions L_CE1 and L_CE2 of the two models and the mutual learning loss functions D_KL(p_2 || p_1) and D_KL(p_1 || p_2) between them, to obtain the total loss functions L_1 and L_2;
step 4, respectively optimizing the two models by using a gradient descent algorithm according to the loss function to complete the meta-training process;
step 5, constructing classification tasks in the meta-test stage; each classification task randomly extracts N classes from the meta-test set D_test, and randomly extracts K samples from each class to obtain the support sample set S_test, where S_n denotes the support sample set of the nth class in the test set, i.e. S_n = {(x_i^n, y_i^n)}_{i=1}^{K}, x_i^n denotes the ith support sample and y_i^n its corresponding label; a batch of samples is extracted from the remaining data of the N classes to obtain the query sample set Q_test = {(x_j, y_j)}, where x_j denotes the jth test query sample and y_j its corresponding label; the trained deep meta-metric models I1 and I2 are used to test the meta-test set respectively, obtaining for the jth test query sample the probability output values p_1(y = n | x_j) and p_2(y = n | x_j) of belonging to the nth class.
2. The small sample image classification method based on deep meta-metric model mutual learning according to claim 1, characterized in that step 1 specifically comprises the following steps:
step 1.1, for a given data set D, dividing it into three subsets, namely the meta-training set D_train, the meta-validation set D_val and the meta-test set D_test, where the classification categories of the subsets are mutually disjoint;
3. The method for classifying small sample images based on deep meta-metric model mutual learning according to claim 2, wherein the outputs of the deep meta-metric models I1 and I2 described in step 2 are calculated as follows:
step 2.1, for the meta-training support sample set S_train, the prototypes P_n^1 and P_n^2 of the nth class of support samples are calculated as:
P_n^1 = (1/K) Σ_{i=1}^{K} f_{φ1}(x_i^n), P_n^2 = (1/K) Σ_{i=1}^{K} f_{φ2}(x_i^n);
step 2.2, for the meta-training query sample set Q_train, the similarity between the jth query sample and the nth prototype is calculated as:
s_{j,n}^1 = g_{θ1}(f_{φ1}(x_j), P_n^1), s_{j,n}^2 = g_{θ2}(f_{φ2}(x_j), P_n^2);
step 2.3, in the meta-training stage, the output values of the deep meta-metric models I1 and I2 are calculated as:
p_1(y = n | x_j) = exp(s_{j,n}^1) / Σ_{m=1}^{N} exp(s_{j,m}^1), p_2(y = n | x_j) = exp(s_{j,n}^2) / Σ_{m=1}^{N} exp(s_{j,m}^2).
4. The method for classifying small sample images based on deep meta-metric model mutual learning according to claim 3, wherein the total loss functions L_1 and L_2 of the deep meta-metric models I1 and I2 described in step 3 are calculated as follows:
step 3.1, the cross-entropy loss functions L_CE1 and L_CE2 of the deep meta-metric models I1 and I2 are calculated as:
L_CE1 = -(1/|Q_train|) Σ_j Σ_{n=1}^{N} 1(y_j = n) log p_1(y = n | x_j), L_CE2 = -(1/|Q_train|) Σ_j Σ_{n=1}^{N} 1(y_j = n) log p_2(y = n | x_j);
step 3.2, the KL divergence from the output of model I2 to the output of model I1 is calculated as:
D_KL(p_2 || p_1) = (1/|Q_train|) Σ_j Σ_{n=1}^{N} p_2(y = n | x_j) log ( p_2(y = n | x_j) / p_1(y = n | x_j) );
the KL divergence from the output of model I1 to the output of model I2 is calculated as:
D_KL(p_1 || p_2) = (1/|Q_train|) Σ_j Σ_{n=1}^{N} p_1(y = n | x_j) log ( p_1(y = n | x_j) / p_2(y = n | x_j) );
step 3.3, the total loss function L_1 of model I1 and the total loss function L_2 of model I2 are calculated as:
L_1 = L_CE1 + D_KL(p_2 || p_1), L_2 = L_CE2 + D_KL(p_1 || p_2).
6. The method for classifying small sample images based on deep meta-metric model mutual learning according to claim 5, wherein the meta-test procedure described in step 5 is as follows:
step 5.1, using the feature extraction modules of the trained deep meta-metric models I1 and I2, extracting features from the support samples x_i^n and the query samples x_j to obtain f_{φ1}(x_i^n), f_{φ1}(x_j) and f_{φ2}(x_i^n), f_{φ2}(x_j);
step 5.2, for the meta-test support sample set S_test, the prototypes P_n^1 and P_n^2 of the nth class of support samples are calculated as:
P_n^1 = (1/K) Σ_{i=1}^{K} f_{φ1}(x_i^n), P_n^2 = (1/K) Σ_{i=1}^{K} f_{φ2}(x_i^n);
step 5.3, for the meta-test query sample set Q_test, the similarity between the jth query sample and the nth prototype is calculated as:
s_{j,n}^1 = g_{θ1}(f_{φ1}(x_j), P_n^1), s_{j,n}^2 = g_{θ2}(f_{φ2}(x_j), P_n^2);
step 5.4, the output category of the query sample under test is finally obtained, i.e. the output values of the deep meta-metric models I1 and I2 are calculated as:
p_1(y = n | x_j) = exp(s_{j,n}^1) / Σ_{m=1}^{N} exp(s_{j,m}^1), p_2(y = n | x_j) = exp(s_{j,n}^2) / Σ_{m=1}^{N} exp(s_{j,m}^2).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111440323.9A CN114092747A (en) | 2021-11-30 | 2021-11-30 | Small sample image classification method based on depth element metric model mutual learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111440323.9A CN114092747A (en) | 2021-11-30 | 2021-11-30 | Small sample image classification method based on depth element metric model mutual learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114092747A true CN114092747A (en) | 2022-02-25 |
Family
ID=80305699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111440323.9A Pending CN114092747A (en) | 2021-11-30 | 2021-11-30 | Small sample image classification method based on depth element metric model mutual learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114092747A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114529479A (en) * | 2022-03-02 | 2022-05-24 | 北京大学 | Mutual information loss function-based unsupervised one-pot multi-frame image denoising method |
CN114580571A (en) * | 2022-04-01 | 2022-06-03 | 南通大学 | Small sample power equipment image classification method based on migration mutual learning |
CN114936615A (en) * | 2022-07-25 | 2022-08-23 | 南京大数据集团有限公司 | Small sample log information anomaly detection method based on characterization consistency correction |
CN114943253A (en) * | 2022-05-20 | 2022-08-26 | 电子科技大学 | Radio frequency fingerprint small sample identification method based on meta-learning model |
CN117152538A (en) * | 2023-10-26 | 2023-12-01 | 之江实验室 | Image classification method and device based on class prototype cleaning and denoising |
CN117407796A (en) * | 2023-12-15 | 2024-01-16 | 合肥工业大学 | Cross-component small sample fault diagnosis method, system and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102467564A (en) * | 2010-11-12 | 2012-05-23 | 中国科学院烟台海岸带研究所 | Remote sensing image retrieval method based on improved support vector machine relevance feedback |
CN110516718A (en) * | 2019-08-12 | 2019-11-29 | 西北工业大学 | The zero sample learning method based on depth embedded space |
CN112288013A (en) * | 2020-10-30 | 2021-01-29 | 中南大学 | Small sample remote sensing scene classification method based on element metric learning |
CN112633382A (en) * | 2020-12-25 | 2021-04-09 | 浙江大学 | Mutual-neighbor-based few-sample image classification method and system |
CN112861936A (en) * | 2021-01-26 | 2021-05-28 | 北京邮电大学 | Graph node classification method and device based on graph neural network knowledge distillation |
CN113095179A (en) * | 2021-03-30 | 2021-07-09 | 西安交通大学 | Metametric learning driven helicopter planetary gearbox fault diagnosis method |
-
2021
- 2021-11-30 CN CN202111440323.9A patent/CN114092747A/en active Pending
Non-Patent Citations (3)
Title |
---|
Sai Yang et al.: "Comparative Analysis on Classical Meta-Metric Models for Few-Shot Learning", IEEE Access * |
WINYCG: "few shot learning: An Introduction to Few-Shot Learning", CSDN * |
Ying Zhang et al.: "Deep Mutual Learning", IEEE Xplore * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114529479A (en) * | 2022-03-02 | 2022-05-24 | 北京大学 | Mutual information loss function-based unsupervised one-pot multi-frame image denoising method |
CN114580571A (en) * | 2022-04-01 | 2022-06-03 | 南通大学 | Small sample power equipment image classification method based on migration mutual learning |
CN114943253A (en) * | 2022-05-20 | 2022-08-26 | 电子科技大学 | Radio frequency fingerprint small sample identification method based on meta-learning model |
CN114936615A (en) * | 2022-07-25 | 2022-08-23 | 南京大数据集团有限公司 | Small sample log information anomaly detection method based on characterization consistency correction |
CN114936615B (en) * | 2022-07-25 | 2022-10-14 | 南京大数据集团有限公司 | Small sample log information anomaly detection method based on characterization consistency correction |
CN117152538A (en) * | 2023-10-26 | 2023-12-01 | 之江实验室 | Image classification method and device based on class prototype cleaning and denoising |
CN117152538B (en) * | 2023-10-26 | 2024-04-09 | 之江实验室 | Image classification method and device based on class prototype cleaning and denoising |
CN117407796A (en) * | 2023-12-15 | 2024-01-16 | 合肥工业大学 | Cross-component small sample fault diagnosis method, system and storage medium |
CN117407796B (en) * | 2023-12-15 | 2024-03-01 | 合肥工业大学 | Cross-component small sample fault diagnosis method, system and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114092747A (en) | Small sample image classification method based on depth element metric model mutual learning | |
Bansal et al. | Zero-shot object detection | |
CN109165306B (en) | Image retrieval method based on multitask Hash learning | |
CN109614979B (en) | Data augmentation method and image classification method based on selection and generation | |
Liu et al. | Learning spatio-temporal representations for action recognition: A genetic programming approach | |
WO2022037233A1 (en) | Small sample visual target identification method based on self-supervised knowledge transfer | |
WO2020164278A1 (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN111753189A (en) | Common characterization learning method for few-sample cross-modal Hash retrieval | |
CN110472652B (en) | Small sample classification method based on semantic guidance | |
CN111339343A (en) | Image retrieval method, device, storage medium and equipment | |
CN111651762A (en) | Convolutional neural network-based PE (provider edge) malicious software detection method | |
CN106033426A (en) | A latent semantic min-Hash-based image retrieval method | |
CN110110128B (en) | Fast supervised discrete hash image retrieval system for distributed architecture | |
CN113177132A (en) | Image retrieval method based on depth cross-modal hash of joint semantic matrix | |
CN110751072B (en) | Double-person interactive identification method based on knowledge embedded graph convolution network | |
CN110046568B (en) | Video action recognition method based on time perception structure | |
WO2023124278A1 (en) | Image processing model training method and apparatus, and image classification method and apparatus | |
CN110188827A (en) | A kind of scene recognition method based on convolutional neural networks and recurrence autocoder model | |
Li et al. | Dating ancient paintings of Mogao Grottoes using deeply learnt visual codes | |
CN112214623A (en) | Image-text sample-oriented efficient supervised image embedding cross-media Hash retrieval method | |
US8150212B2 (en) | System and method for automatic digital image orientation detection | |
CN110147414B (en) | Entity characterization method and device of knowledge graph | |
Kishore et al. | A Multi-class SVM Based Content Based Image Retrieval System Using Hybrid Optimization Techniques. | |
CN111310820A (en) | Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration | |
CN112883216B (en) | Semi-supervised image retrieval method and device based on disturbance consistency self-integration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2022-02-25 |