CN112288013A - Small sample remote sensing scene classification method based on meta-metric learning - Google Patents

Small sample remote sensing scene classification method based on meta-metric learning

Info

Publication number
CN112288013A
CN112288013A (publication) CN202011188570.XA (application)
Authority
CN
China
Prior art keywords
remote sensing
meta
model
test
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011188570.XA
Other languages
Chinese (zh)
Inventor
Haifeng Li
Zhenqi Cui
Jian Peng
Haozhe Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN202011188570.XA
Publication of CN112288013A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a small sample remote sensing scene classification method based on meta-metric learning, which comprises the following steps: establishing a deep neural network classification model for remote sensing images, wherein the model comprises an embedding module and a metric module; training the deep neural network classification model in a meta-learning mode, in which training is organized through meta-tasks; and performing remote sensing image scene classification with the trained deep neural network classification model. The method can be applied directly to the small-sample classification problem for remote sensing images; by organizing training through meta-tasks, the level of learning is raised from individual data points to tasks, and a balance loss function is used, so that small sample remote sensing scenes are classified more effectively.

Description

Small sample remote sensing scene classification method based on meta-metric learning
Technical Field
The invention relates to the technical field of remote sensing image recognition, and in particular to a small sample remote sensing scene classification method based on meta-metric learning.
Background
Scene classification is an important part of optical remote sensing image processing and analysis, and is widely applied in fields of national economic construction such as disaster detection, environmental monitoring, urban planning and land use. According to the features used, optical remote sensing image scene classification methods can be divided into methods based on hand-crafted features and methods based on deep features.
Hand-crafted features for optical remote sensing image scene classification can be roughly grouped into three types: the Bag-Of-Visual-Words (BOVW) model, the Probabilistic Topic Model (PTM) and sparse coding. In practical applications, however, hand-crafted features struggle to describe the rich semantic information contained in remote sensing images, so performance is greatly limited by them.
In recent years, owing to the availability of large-scale training data and the development of high-performance computing units, methods based on deep feature learning have attracted more research attention. Their essence is to use deep neural networks such as Auto-Encoders (AE), Deep Belief Networks (DBN) and Convolutional Neural Networks (CNN) to extract features end to end. These methods are data-hungry by nature, because each of them fits a deep neural network to the data from scratch through extensive, independent, incremental model updates. Consequently, when a new scene does not exist in the closed training data set and has only a few labels, existing methods cannot learn the new data distribution well because of overfitting. Rapid adaptation to limited remote sensing data therefore remains a fundamental challenge. For example, classical models such as ResNet and GoogLeNet can achieve 90% classification accuracy on data sets such as AID and UCMerced_LandUse, but less than 40% accuracy when only one labeled sample is available.
Meta-learning learns from a set of tasks rather than a set of data: each task consists of a labeled training set and a labeled test set that simulate the small-sample learning problem, so the training setting is more faithful to the real environment. Another important issue is how to measure the similarity of tasks, or in other words, how to learn more discriminative feature representations with small intra-class scatter but large inter-class separation.
Disclosure of Invention
In view of the above, the present invention provides a method for classifying a small sample remote sensing scene based on meta-metric learning.
The aim of the invention is achieved by a small sample remote sensing scene classification method based on meta-metric learning, which comprises the following steps:
step 1, establishing a deep neural network classification model for remote sensing images, wherein the deep neural network classification model comprises an embedding module and a metric module;
step 2, training the deep neural network classification model in a meta-learning mode;
step 3, using the trained deep neural network classification model to classify remote sensing image scenes;
the meta-learning mode in the step 2 is organized and trained through meta-tasks, and learns indexes based on the tasks, and the organizing process comprises the following steps: each time from training set DtrainDynamically constructing small-batch plots by using medium-sized non-repeated sampling, wherein the plots are formed by a meta-training set MtrainAnd meta test set MtestComposition allowing M in different episodestrainAnd MtestThere is an intersection where MtrainEach time sampling C different classes, each class having StrEach having a label sample, i.e.
Figure BDA0002752118020000031
Corresponding to, MtestEach time also samples C different classes, each class having SteEach having a label sample, i.e.
Figure BDA0002752118020000032
The meta-training set and the meta-test set in each episode cannot have overlapping parts, i.e. there is no overlap between them
Figure BDA0002752118020000033
In particular, the embedding module uses an embedding model f_φ parameterized by φ to map the data domain to a feature space in which visual information is associated, the feature representation being computed as:
V_i = f_φ(x_i).
In each episode, the embedding model f_φ minimizes the fitting error L_CE on M_test, where L_CE is expressed as:
L_CE = -Σ_i y_i · log ŷ_i,
where ŷ_i is the predicted value of y_i.
Furthermore, because the final classification accuracy is influenced by the quality of the feature space, the dimensionality of V should lose as little information as possible; a higher dimensionality, however, burdens the operation of the metric module. To reduce the computational complexity, the class structure is replaced in the embedding module by a discriminative centroid:
O_k = (1 / |M_train|) · Σ_{(x_i, y_i) ∈ M_train^k} f_φ(x_i),
where M_train^k denotes the data in M_train labeled as class k, |M_train| denotes the total amount of meta-training set data in each episode, O_k denotes the prototype center of class k, and f_φ(x_i) is the feature representation of the i-th data point.
In particular, the metric module is used to maximize the distance between different classes: for the resulting feature representation V, a metric model g_τ parameterized by τ is used to learn a metric rule that maximizes the discriminative power of the embedding space; g_τ is composed of a single-layer neural network and the nonlinear activation function ReLU(x), where
ReLU(x) = max(0, x).
For a point X ∈ M_test, the parameter τ is optimized to maximize the distance between different classes, expressed as:
p_τ(y = k | X) = exp(-g_τ(d(f_φ(X), O_k))) / Σ_{k'} exp(-g_τ(d(f_φ(X), O_k'))),
where p_τ(y = k | X) denotes the posterior probability distribution, g_τ(d(f_φ(X), O_k)) denotes the distance in the metric space between the feature representation of point X in the embedding space and the center O_k, g_τ(d(f_φ(X), O_k')) denotes the distance in the metric space between the feature representation of point X and the other centers, and O_k' denotes the centers of the classes other than class k.
Further, to balance generalization ability and fitting ability, a balance loss function L_bal is defined:
L_bal = L_gen + λ · L_CE,
where λ ∈ [0, 1] is a hyper-parameter expressing the tendency of the model: the smaller λ is, the more the model tends toward stronger fitting ability, and the larger λ is, the stronger the generalization ability of the model;
and for a point X ∈ M_test, the generalization loss L_gen is defined as:
L_gen = -log p_τ(y = S | X),
where p_τ(y = S | X) denotes the posterior probability distribution.
Compared with the prior art, the method has the following advantages: it can be applied directly to the small-sample classification problem for remote sensing images; by organizing training through meta-tasks, the level of learning is raised from data to tasks, and task-based rather than sample-based metrics are learned, so the measurement learned on tasks captures the task-level distribution, which generalizes better to unseen or unknown test tasks than a sample-level distribution; and a new loss function, named the balance loss function, combines the cross-entropy loss used by traditional classification neural networks with a generalization-error loss through a hyper-parameter λ, balancing data fitting against generalization to new samples while making the metric space more discriminative and the classification more effective.
Drawings
FIG. 1 shows a schematic flow diagram of an embodiment of the invention;
FIG. 2 shows a schematic diagram of a framework of an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The small sample remote sensing scene classification method based on meta-metric learning must address two challenges of real-world remote sensing scene classification: (1) a trained model must face new remote sensing scenes that do not appear in the closed training set, and (2) these new scenes have only a few labeled samples.
In particular, given a training set D_train = {(x_i, y_i)}, y_i ∈ L_t, and a test set D_test = {(x_j, y_j)}, y_j ∈ L_p, where L_t and L_p are label sets, the parameter θ of the predictor y = f(x; θ) is optimized so that it generalizes well on the test set D_test, which contains only a small number of samples, i.e.
θ* = argmin_θ Σ_{(x, y) ∈ D_test} L_bal(f(x; θ), y),
where L_bal is the balance loss function used to measure the final performance of the model. It should be noted that, unlike the conventional closed-world remote sensing scene classification problem, in which the scenes in the test set are included among the scenes in the training set, i.e.
L_p ⊆ L_t,
in the real world the new scenes with only a small number of samples do not exist in the known training set, i.e.
L_p ∩ L_t = ∅.
FIG. 1 shows a schematic flow diagram of an embodiment of the invention. The small sample remote sensing scene classification method based on meta-metric learning comprises the following steps:
step 1, establishing a deep neural network classification model for remote sensing images, wherein the deep neural network classification model comprises an embedding module and a metric module;
step 2, training the deep neural network classification model in a meta-learning mode;
step 3, using the trained deep neural network classification model to classify remote sensing image scenes;
as shown in FIG. 2, the meta-learning mode described in step 2 is trained through meta-task organization so as to learn task-based metrics; the organization process comprises: each time, a small-batch episode is dynamically constructed by sampling without replacement from the training set D_train, the episode being composed of a meta-training set M_train and a meta-test set M_test, where M_train and M_test of different episodes are allowed to intersect; M_train samples C different classes each time, each class having S_tr labeled samples, i.e.
M_train = {(x_i, y_i)}, i = 1, …, C × S_tr;
correspondingly, M_test also samples C different classes each time, each class having S_te labeled samples, i.e.
M_test = {(x_j, y_j)}, j = 1, …, C × S_te;
the meta-training set and the meta-test set within one episode cannot overlap, i.e.
M_train ∩ M_test = ∅.
Likewise, a meta-validation set M_val separated from D_train is used to select hyper-parameters for the classifier and to select the best embedding model; M_val does not intersect with either M_train or M_test.
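By way of a concrete illustration of the episode organization just described, the following sketch shows one possible way of sampling a C-way episode from D_train; the function name sample_episode and the argument names s_tr and s_te are illustrative assumptions, not part of the claimed method.

```python
import random
from collections import defaultdict

def sample_episode(d_train, num_classes, s_tr, s_te):
    """Sample one episode (M_train, M_test) from D_train without replacement.

    d_train is a list of (image, label) pairs.  The two returned lists share
    the same C classes but no samples, mirroring M_train ∩ M_test = ∅.
    """
    by_class = defaultdict(list)
    for x, y in d_train:
        by_class[y].append(x)

    # keep only classes with enough samples for both subsets, then pick C of them
    eligible = [c for c, xs in by_class.items() if len(xs) >= s_tr + s_te]
    classes = random.sample(eligible, num_classes)

    m_train, m_test = [], []
    for c in classes:
        picked = random.sample(by_class[c], s_tr + s_te)  # no repeated samples
        m_train += [(x, c) for x in picked[:s_tr]]
        m_test += [(x, c) for x in picked[s_tr:]]
    return m_train, m_test
```

In a training loop, sample_episode would be called once per iteration, so that each mini-batch corresponds to one episode.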
In particular, the embedding module uses an embedding model f_φ parameterized by φ to map the data domain to a feature space in which visual information is associated, the feature representation being computed as:
V_i = f_φ(x_i).
In each episode, the embedding model f_φ minimizes the fitting error L_CE on M_test, where L_CE is expressed as:
L_CE = -Σ_i y_i · log ŷ_i,
where ŷ_i is the predicted value of y_i. The fitting error L_CE directly reflects the quality of the feature representation V in the embedding space, and V plays an important role in the final classification accuracy.
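A minimal sketch of an embedding model f_φ together with the fitting error L_CE is given below; the four-block convolutional backbone and the separate linear classifier are assumptions made for illustration only, since the invention does not prescribe a particular network architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """f_phi: maps an input image to its feature representation V."""
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(),
                nn.MaxPool2d(2))
        self.encoder = nn.Sequential(
            block(in_channels, hidden), block(hidden, hidden),
            block(hidden, hidden), block(hidden, hidden))

    def forward(self, x):
        v = self.encoder(x)
        return v.flatten(start_dim=1)  # feature representation V_i = f_phi(x_i)

def fitting_loss(features, labels, classifier):
    """L_CE: cross-entropy between the predictions over V and the true labels."""
    logits = classifier(features)  # classifier is e.g. nn.Linear(dim, num_classes)
    return F.cross_entropy(logits, labels)
```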
Furthermore, because the final classification accuracy is influenced by the quality of the feature space, the dimensionality of V should lose as little information as possible; a higher dimensionality, however, burdens the operation of the metric module. To reduce the computational complexity, the class structure is replaced in the embedding module by a discriminative centroid:
O_k = (1 / |M_train|) · Σ_{(x_i, y_i) ∈ M_train^k} f_φ(x_i),
where M_train^k denotes the data in M_train labeled as class k, |M_train| denotes the total amount of meta-training set data in each episode, O_k denotes the prototype center of class k, and f_φ(x_i) is the feature representation of the i-th data point. Compared with operating directly on every representation in the embedding space, this yields a large gain in computation speed at the cost of only a small loss of accuracy.
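The discriminative centroids O_k could be computed from the embedded meta-training samples as in the sketch below; it follows the formula above literally and normalizes by the episode size |M_train|, while a comment notes that a per-class mean is a common alternative reading.

```python
import torch

def class_centroids(features, labels, num_classes, episode_size):
    """Discriminative centroids O_k computed from embedded M_train samples.

    Follows the formula above: the class-k feature vectors are summed and
    divided by |M_train| (episode_size).  Dividing by the per-class sample
    count instead would give the classical prototypical-network mean; which
    normalization is intended is an assumption of this sketch.
    """
    centroids = torch.zeros(num_classes, features.size(1), device=features.device)
    for k in range(num_classes):
        centroids[k] = features[labels == k].sum(dim=0) / episode_size
    return centroids
```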
In particular, the metric module is used to maximize the distance between different classes: for the resulting feature representation V, a metric model g_τ parameterized by τ is used to learn a metric rule that maximizes the discriminative power of the embedding space; g_τ is composed of a single-layer neural network and the nonlinear activation function ReLU(x), where
ReLU(x) = max(0, x).
Since the number of samples of a new scene is extremely scarce, rather than explicitly defining the distance, g_τ aims to learn a metric rule that maximizes the distance between the feature representations V of different classes in that space, so that the information contained in the data is fully exploited. For a point X ∈ M_test, the parameter τ is optimized to maximize the distance between different classes, expressed as:
p_τ(y = k | X) = exp(-g_τ(d(f_φ(X), O_k))) / Σ_{k'} exp(-g_τ(d(f_φ(X), O_k'))),
where p_τ(y = k | X) denotes the posterior probability distribution, g_τ(d(f_φ(X), O_k)) denotes the distance in the metric space between the feature representation of point X in the embedding space and the center O_k, g_τ(d(f_φ(X), O_k')) denotes the distance in the metric space between the feature representation of point X and the other centers, and O_k' denotes the centers of the classes other than class k.
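A sketch of a metric model g_τ (a single linear layer followed by ReLU, as stated above) and the resulting posterior p_τ(y = k | X) follows; feeding the element-wise squared difference between the query feature and each centroid into g_τ is an assumption of this sketch, since the base distance d(·,·) is not specified here.

```python
import torch.nn as nn
import torch.nn.functional as F

class MetricNet(nn.Module):
    """g_tau: a single-layer neural network followed by ReLU(x) = max(0, x)."""
    def __init__(self, feature_dim, hidden_dim=8):
        super().__init__()
        self.layer = nn.Linear(feature_dim, hidden_dim)

    def forward(self, d):
        return F.relu(self.layer(d))

def posterior(query_features, centroids, metric_net):
    """p_tau(y = k | X): softmax over the negative learned distances to each O_k."""
    # pairwise difference between every query feature and every centroid: [Q, K, D]
    diff = query_features.unsqueeze(1) - centroids.unsqueeze(0)
    # base distance fed to g_tau (element-wise squared difference, an assumption),
    # reduced to one learned scalar distance per (query, class) pair: [Q, K]
    dist = metric_net(diff.pow(2)).sum(dim=-1)
    return F.softmax(-dist, dim=1)
```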
Further, to balance generalization ability and fitting ability, a balance loss function L_bal is defined:
L_bal = L_gen + λ · L_CE,
where λ ∈ [0, 1] is a hyper-parameter expressing the tendency of the model: the smaller λ is, the more the model tends toward stronger fitting ability, and the larger λ is, the stronger the generalization ability of the model;
and for a point X ∈ M_test, the generalization loss L_gen is defined as:
L_gen = -log p_τ(y = S | X),
where p_τ(y = S | X) denotes the posterior probability distribution.
Since the classifier model is oriented toward new remote sensing scenes that do not appear in the closed data set and have only a few labels, the parameters are learned in a way that maximizes generalization ability; accordingly, only the fitting error L_CE is constrained, while the generalization error L_gen is not.
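Putting the modules together, the balance loss L_bal = L_gen + λ·L_CE could be assembled per episode as in the following sketch, which reuses class_centroids and posterior from the earlier sketches and treats λ as a configurable hyper-parameter; it is an illustrative composition, not the authoritative training procedure.

```python
import torch
import torch.nn.functional as F

def balance_loss(support_x, support_y, query_x, query_y,
                 embedder, classifier, metric_net, num_classes, lam=0.5):
    """L_bal = L_gen + lam * L_CE for one episode (illustrative sketch only)."""
    support_v = embedder(support_x)  # features of M_train
    query_v = embedder(query_x)      # features of M_test

    # fitting error L_CE: cross-entropy of the classifier over the query features
    l_ce = F.cross_entropy(classifier(query_v), query_y)

    # generalization loss L_gen = -log p_tau(y = true class | X) for X in M_test
    centroids = class_centroids(support_v, support_y, num_classes,
                                episode_size=support_v.size(0))
    probs = posterior(query_v, centroids, metric_net)
    l_gen = F.nll_loss(torch.log(probs + 1e-8), query_y)

    return l_gen + lam * l_ce
```

With λ = 0 the loss reduces to the generalization term alone; the episode tensors would come from a sampler such as sample_episode above.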
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (5)

1. The small sample remote sensing scene classification method based on meta-metric learning is characterized by comprising the following steps:
step 1, establishing a deep neural network classification model for remote sensing images, wherein the deep neural network classification model comprises an embedding module and a metric module;
step 2, training the deep neural network classification model in a meta-learning mode;
step 3, using the trained deep neural network classification model to classify remote sensing image scenes;
wherein the meta-learning mode in step 2 is trained through meta-task organization and learns task-based metrics, the organization process comprising: each time, a small-batch episode is dynamically constructed by sampling without replacement from the training set D_train, the episode being composed of a meta-training set M_train and a meta-test set M_test, where M_train and M_test of different episodes are allowed to intersect; M_train samples C different classes each time, each class having S_tr labeled samples, i.e.
M_train = {(x_i, y_i)}, i = 1, …, C × S_tr;
correspondingly, M_test also samples C different classes each time, each class having S_te labeled samples, i.e.
M_test = {(x_j, y_j)}, j = 1, …, C × S_te;
and the meta-training set and the meta-test set within one episode cannot overlap, i.e.
M_train ∩ M_test = ∅.
2. The method for classifying small-sample remote sensing scenes according to claim 1, wherein the embedding module uses an embedding model f_φ parameterized by φ to map the data domain to a feature space in which visual information is associated, the feature representation being computed as:
V_i = f_φ(x_i);
in each episode, the embedding model f_φ minimizes the fitting error L_CE on M_test, L_CE being expressed as:
L_CE = -Σ_i y_i · log ŷ_i,
where ŷ_i is the predicted value of y_i.
3. The small sample remote sensing scene classification method according to claim 1 or 2, characterized in that the class structure is replaced in the embedding module by a discriminative centroid:
O_k = (1 / |M_train|) · Σ_{(x_i, y_i) ∈ M_train^k} f_φ(x_i),
where M_train^k denotes the data in M_train labeled as class k, |M_train| denotes the total amount of meta-training set data in each episode, O_k denotes the prototype center of class k, and f_φ(x_i) is the feature representation of the i-th data point.
4. The method for classifying small-sample remote sensing scenes according to claim 2, characterized in that the metric module is used to maximize the distance between different classes: for the resulting feature representation V, a metric model g_τ parameterized by τ is used to learn a metric rule that maximizes the discriminative power of the embedding space; g_τ is composed of a single-layer neural network and the nonlinear activation function ReLU(x), where
ReLU(x) = max(0, x);
for a point X ∈ M_test, the parameter τ is optimized to maximize the distance between different classes, expressed as:
p_τ(y = k | X) = exp(-g_τ(d(f_φ(X), O_k))) / Σ_{k'} exp(-g_τ(d(f_φ(X), O_k'))),
where p_τ(y = k | X) denotes the posterior probability distribution, g_τ(d(f_φ(X), O_k)) denotes the distance in the metric space between the feature representation of point X in the embedding space and the center O_k, g_τ(d(f_φ(X), O_k')) denotes the distance in the metric space between the feature representation of point X and the other centers, and O_k' denotes the centers of the classes other than class k.
5. The method for classifying small-sample remote sensing scenes according to claim 4, characterized in that, to balance generalization ability and fitting ability, a balance loss function L_bal is defined:
L_bal = L_gen + λ · L_CE,
where λ ∈ [0, 1] is a hyper-parameter expressing the tendency of the model: the smaller λ is, the more the model tends toward stronger fitting ability, and the larger λ is, the stronger the generalization ability of the model;
and for a point X ∈ M_test, the generalization loss L_gen is defined as:
L_gen = -log p_τ(y = S | X),
where p_τ(y = S | X) denotes the posterior probability distribution.
CN202011188570.XA 2020-10-30 2020-10-30 Small sample remote sensing scene classification method based on meta-metric learning Pending CN112288013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011188570.XA CN112288013A (en) 2020-10-30 2020-10-30 Small sample remote sensing scene classification method based on meta-metric learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011188570.XA CN112288013A (en) 2020-10-30 2020-10-30 Small sample remote sensing scene classification method based on meta-metric learning

Publications (1)

Publication Number Publication Date
CN112288013A true CN112288013A (en) 2021-01-29

Family

ID=74354238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011188570.XA Pending CN112288013A (en) 2020-10-30 2020-10-30 Small sample remote sensing scene classification method based on meta-metric learning

Country Status (1)

Country Link
CN (1) CN112288013A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243394A (en) * 2015-11-03 2016-01-13 中国矿业大学 Evaluation method for performance influence degree of classification models by class imbalance
CN110706303A (en) * 2019-10-15 2020-01-17 西南交通大学 Face image generation method based on GANs
CN111598163A (en) * 2020-05-14 2020-08-28 中南大学 Stacking integrated learning mode-based radar HRRP target identification method
CN111723675A (en) * 2020-05-26 2020-09-29 河海大学 Remote sensing image scene classification method based on multiple similarity measurement deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAIFENG LI et al.: "RS-MetaNet: Deep meta metric learning for few-shot remote sensing scene classification", Computer Vision and Pattern Recognition *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297174A (en) * 2021-05-24 2021-08-24 中南大学 Land use change simulation method based on deep learning
CN113297174B (en) * 2021-05-24 2023-10-13 中南大学 Land utilization change simulation method based on deep learning
CN113537317A (en) * 2021-06-30 2021-10-22 中国海洋大学 Remote sensing image cross-domain classification method based on interpretable deep learning
CN113537317B (en) * 2021-06-30 2023-12-22 中国海洋大学 Remote sensing image cross-domain classification method based on interpretable deep learning
CN113505861A (en) * 2021-09-07 2021-10-15 广东众聚人工智能科技有限公司 Image classification method and system based on meta-learning and memory network
CN114067160A (en) * 2021-11-22 2022-02-18 重庆邮电大学 Small sample remote sensing image scene classification method based on embedded smooth graph neural network
CN114092747A (en) * 2021-11-30 2022-02-25 南通大学 Small sample image classification method based on depth element metric model mutual learning
CN114943859A (en) * 2022-05-05 2022-08-26 兰州理工大学 Task correlation metric learning method and device for small sample image classification
CN114943859B (en) * 2022-05-05 2023-06-20 兰州理工大学 Task related metric learning method and device for small sample image classification

Similar Documents

Publication Publication Date Title
CN112288013A (en) Small sample remote sensing scene classification method based on meta-metric learning
Sun et al. RSOD: Real-time small object detection algorithm in UAV-based traffic monitoring
Luan et al. Research on text classification based on CNN and LSTM
WO2022135121A1 (en) Molecular graph representation learning method based on contrastive learning
CN112949786B (en) Data classification identification method, device, equipment and readable storage medium
CN108427740B (en) Image emotion classification and retrieval algorithm based on depth metric learning
CN113780003B (en) Cross-modal enhancement method for space-time data variable-division encoding and decoding
CN113515669A (en) Data processing method based on artificial intelligence and related equipment
CN109271546A (en) The foundation of image retrieval Feature Selection Model, Database and search method
Li et al. A review of deep learning methods for pixel-level crack detection
CN116975776A (en) Multi-mode data fusion method and device based on tensor and mutual information
CN114925693B (en) Multi-model fusion-based multivariate relation extraction method and extraction system
Zhao et al. A real-time typhoon eye detection method based on deep learning for meteorological information forensics
CN115712740A (en) Method and system for multi-modal implication enhanced image text retrieval
Shen et al. Clustering-driven deep adversarial hashing for scalable unsupervised cross-modal retrieval
CN113657473A (en) Web service classification method based on transfer learning
CN112668633A (en) Adaptive graph migration learning method based on fine granularity field
Gao et al. FIRN: a novel fish individual recognition method with accurate detection and attention mechanism
CN115659239A (en) High-robustness heterogeneous graph node classification method and system based on feature extraction reinforcement
CN116721458A (en) Cross-modal time sequence contrast learning-based self-supervision action recognition method
CN115934883A (en) Entity relation joint extraction method based on semantic enhancement and multi-feature fusion
CN115063612A (en) Fraud early warning method, device, equipment and storage medium based on face-check video
Zou et al. Research on human movement target recognition algorithm in complex traffic environment
CN115115966A (en) Video scene segmentation method and device, computer equipment and storage medium
Liu et al. Intelligent image recognition system for detecting abnormal features of scenic spots based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210129