CN115063692A - Remote sensing image scene classification method based on active learning - Google Patents

Remote sensing image scene classification method based on active learning Download PDF

Info

Publication number
CN115063692A
CN115063692A CN202210797684.7A CN202210797684A
Authority
CN
China
Prior art keywords
marked
samples
sample set
unlabeled
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210797684.7A
Other languages
Chinese (zh)
Other versions
CN115063692B (en)
Inventor
李旭
王飞月
张一凡
李立欣
卫保国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202210797684.7A priority Critical patent/CN115063692B/en
Publication of CN115063692A publication Critical patent/CN115063692A/en
Application granted granted Critical
Publication of CN115063692B publication Critical patent/CN115063692B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image scene classification method based on active learning, which comprises the following steps: randomly selecting a batch of samples from the sample set for manual labeling, the labeled samples forming an initial labeled sample set D1 and the remaining samples forming an unlabeled sample set U1; inputting both sets into a convolutional neural network for feature extraction to obtain a labeled sample feature map and an unlabeled sample feature map; inputting the labeled sample feature map and the unlabeled sample feature map into a branch network for training; using a query strategy to select a batch of samples from the unlabeled sample set U1 for manual labeling, and adding the newly labeled samples to the initial labeled sample set D1 to form a new labeled sample set D2; and repeating these steps until all samples in the unlabeled sample set U1 are labeled or the number of labeled samples reaches 20% of the total of the initial labeled sample set D1 and the unlabeled sample set U1. The invention classifies scenes using the information-rich samples screened out by active learning and can obtain high classification precision with fewer labeled samples.

Description

Remote sensing image scene classification method based on active learning
Technical Field
The invention belongs to the field of remote sensing image scene classification, and particularly relates to a remote sensing image scene classification method based on active learning.
Background
Remote sensing image scene classification refers to selecting images with similar scene characteristics from a series of remote sensing images and assigning a specific scene type label to each selected image to complete scene classification. At present, remote sensing image scene classification plays an important role in applications such as urban planning, environment monitoring, ground target identification and detection, natural disaster loss assessment, and land resource management.
The method most commonly used internationally at present is based on deep feature learning, which uses a neural network with a deep architecture to generate features directly from the original image and then classifies those features. In practice, methods based on deep feature learning rely heavily on large-scale samples, and the high cost of constructing large numbers of labeled samples limits the development of such methods to some extent. In conventional image classification, active learning is typically employed to reduce the need for large numbers of labeled samples, which can significantly reduce cost.
At present, remote sensing image scene classification methods are mainly based on deep feature learning; such methods depend heavily on large-scale samples, and the high cost of constructing large numbers of labeled samples limits their development to a certain extent.
Disclosure of Invention
The invention aims to provide a remote sensing image scene classification method based on active learning, which does not depend on large-scale marking samples and has lower marking cost.
The invention adopts the following technical scheme: a remote sensing image scene classification method based on active learning comprises the following steps:
Step S1: randomly selecting a batch of samples from the sample set for manual labeling, the labeled samples forming an initial labeled sample set D1 and the remaining unlabeled samples forming an unlabeled sample set U1;
Step S2: inputting the initial labeled sample set D1 and the unlabeled sample set U1 into a convolutional neural network for feature extraction to obtain a labeled sample feature map and an unlabeled sample feature map,
step S3: inputting the marked sample feature map and the unmarked sample feature map into a branch network for training, wherein the branch network comprises a discriminator and a classifier,
inputting the labeled sample feature map into the classifier for training, and inputting the labeled sample feature map and the unlabeled sample feature map into the discriminator for training,
the loss function of the branch network is:
L = λ1·L_BCE + λ2·L_Poly-1
wherein L_Poly-1 = -log(P_t) + ε_1·(1-P_t),
Y is the label of an input feature map indicating whether it comes from the labeled or the unlabeled sample set, the weight coefficients λ1 and λ2 have a value range of [0,1], L_BCE is the loss function of the discriminator (a binary cross-entropy computed over Y), L_Poly-1 is the loss function of the classifier, and P_t is the model's predicted probability of the true category of the target;
Step S4: using a query strategy to select a batch of samples from the unlabeled sample set U1 for manual labeling, and adding the newly labeled samples to the initial labeled sample set D1 to form a new labeled sample set D2;
Step S5: repeating steps S2-S4 until all samples in the unlabeled sample set U1 have been labeled, or the number of labeled samples reaches 20% of the total number of samples in the initial labeled sample set D1 and the unlabeled sample set U1.
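The loop in steps S1-S5 can be summarized in pseudocode. The following Python sketch is illustrative only: the helper callables (extract_features, train_branch_network, query_samples, oracle_label) are hypothetical placeholders for the operations described above, not functions disclosed by the invention.

```python
import random

def active_learning_loop(sample_set, k, extract_features, train_branch_network,
                         query_samples, oracle_label, budget_ratio=0.2):
    """Sketch of steps S1-S5; the four callables are hypothetical placeholders."""
    # Step S1: randomly select an initial batch of samples for manual labeling
    labeled = set(random.sample(list(sample_set), k))     # initial labeled set D1
    unlabeled = set(sample_set) - labeled                 # unlabeled set U1
    budget = int(budget_ratio * len(sample_set))          # 20% of D1 plus U1
    model = None

    while unlabeled and len(labeled) < budget:
        # Step S2: extract feature maps for both sets with the CNN backbone
        feats_l = extract_features(labeled)
        feats_u = extract_features(unlabeled)
        # Step S3: jointly train the classifier and the discriminator
        model = train_branch_network(feats_l, feats_u)
        # Step S4: query a batch of informative samples and have them labeled
        batch = set(query_samples(model, unlabeled, k))
        oracle_label(batch)                               # manual annotation
        labeled |= batch                                  # D2 = D1 plus the new batch, etc.
        unlabeled -= batch
    # Step S5: stop once U1 is exhausted or the labeling budget is reached
    return model, labeled
```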
Further, the convolutional neural network in step S2 includes a first convolution module, a second residual network, a third residual network, a fourth residual network, a fifth residual network and a fully connected layer connected in sequence; the first convolution module includes a convolutional layer and a pooling layer connected in sequence; the second, third, fourth and fifth residual networks each consist of two residual modules connected in sequence, and each residual module includes two convolutional layers.
Further, the method for selecting samples in step S4 to form the new labeled sample set D2 is:
using the trained discriminator to select, from the unlabeled sample set U1, the 5k samples that the discriminator identifies as unlabeled with the highest probability, where k is the number of samples selected in each round of active learning; then clustering these 5k samples into k classes with the KMeans clustering method, selecting the k samples closest to the cluster centers for labeling, and adding the labeled samples to the initial labeled sample set D1 to form the new labeled sample set D2.
The invention has the following beneficial effects: the method classifies scenes using the information-rich samples screened out by active learning and can obtain high classification precision with fewer labeled samples; unlike existing methods, it does not require large-scale labeled samples, which significantly reduces the labeling cost of the sample data set.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the principle of the present invention;
FIG. 3 is a block diagram of SE-ResNet.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention discloses a remote sensing image scene classification method based on active learning, in which a remote sensing image has H′ rows × W′ columns and C′ channels. A flow chart of the method is shown in FIG. 1, and the method comprises the following steps:
Step S1: randomly selecting a batch of samples from the sample set for manual labeling, the labeled samples forming an initial labeled sample set D1 and the remaining unlabeled samples forming an unlabeled sample set U1.
Step S2: inputting the initial labeled sample set D1 and the unlabeled sample set U1 into a convolutional neural network for feature extraction to obtain a labeled sample feature map and an unlabeled sample feature map; that is, the initial labeled sample set D1 is input into the convolutional neural network to obtain the labeled sample feature map, and the unlabeled sample set U1 is input into the convolutional neural network to obtain the unlabeled sample feature map.
The process of feature extraction can be represented as:
X̃ = F(X) (1)
wherein X is the input image, X ∈ R^(H′×W′×C′), F is the convolutional neural network, X̃ ∈ R^(H×W×C) is the extracted feature map, and H, W and C are respectively the number of rows, columns and channels of the feature map X̃.
The convolutional neural network in step S2 includes a first convolution module, a second residual network, a third residual network, a fourth residual network, a fifth residual network and a fully connected layer connected in sequence; the first convolution module includes a convolutional layer and a pooling layer connected in sequence; the second, third, fourth and fifth residual networks each consist of two residual modules connected in sequence, and each residual module includes two convolutional layers.
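For illustration, a minimal PyTorch sketch of a backbone matching this description (a stem convolution module, four residual stages of two two-convolution residual modules each, and a fully connected layer) is given below. Kernel sizes, strides, channel widths and the 1×1 projection shortcut are assumptions, and the SE attention used in the embodiment (FIG. 3) is omitted here.

```python
import torch.nn as nn

class ResidualModule(nn.Module):
    """A residual module with two convolutional layers, as described in step S2."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection on the shortcut when the shape changes (an assumption)
        self.shortcut = (nn.Sequential() if stride == 1 and in_ch == out_ch else
                         nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                                       nn.BatchNorm2d(out_ch)))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class Backbone(nn.Module):
    """Stem conv module, four residual stages of two modules each, fully connected layer."""
    def __init__(self, num_classes=30, channels=(64, 128, 256, 512)):  # e.g. 30 AID classes
        super().__init__()
        self.stem = nn.Sequential(                       # first convolution module
            nn.Conv2d(3, channels[0], 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(channels[0]), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))
        stages, in_ch = [], channels[0]
        for i, ch in enumerate(channels):                # second to fifth residual networks
            stride = 1 if i == 0 else 2
            stages += [ResidualModule(in_ch, ch, stride), ResidualModule(ch, ch)]
            in_ch = ch
        self.stages = nn.Sequential(*stages)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_ch, num_classes)          # fully connected layer

    def forward(self, x):
        feat = self.stages(self.stem(x))                 # extracted feature map
        return self.fc(self.pool(feat).flatten(1)), feat
```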
Step S3: inputting the labeled sample feature map and the unlabeled sample feature map into a branch network for training, where the branch network comprises a discriminator and a classifier; the labeled sample feature map is input into the classifier for training, the labeled sample feature map and the unlabeled sample feature map are input into the discriminator for training, and the discriminator and classifier of the branch network are jointly trained through the loss functions L_BCE and L_Poly-1.
The loss function of the branch network is:
L = λ1·L_BCE + λ2·L_Poly-1 (2)
L_Poly-1 = -log(P_t) + ε_1·(1-P_t) (3)
wherein Y is the label of an input feature map indicating whether it comes from the labeled or the unlabeled sample set, the weight coefficients λ1 and λ2 have a value range of [0,1], L_BCE is the loss function of the discriminator, L_Poly-1 is the loss function of the classifier, and P_t is the model's predicted probability of the true category of the target.
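Under the assumption that formula (2) is the weighted sum λ1·L_BCE + λ2·L_Poly-1 described above, the branch-network loss can be sketched in PyTorch as follows; the function and argument names are illustrative only, not part of the disclosure.

```python
import torch
import torch.nn.functional as F

def poly1_loss(logits, targets, eps1=1.0):
    """Poly-1 loss: -log(P_t) + eps1 * (1 - P_t), where P_t is the predicted
    probability of the true class; targets is a LongTensor of class indices."""
    ce = F.cross_entropy(logits, targets, reduction='none')      # -log(P_t)
    pt = torch.gather(F.softmax(logits, dim=-1), 1, targets.unsqueeze(1)).squeeze(1)
    return (ce + eps1 * (1.0 - pt)).mean()

def branch_network_loss(cls_logits, cls_targets, disc_logits, disc_targets,
                        lambda1=1.0, lambda2=0.5, eps1=1.0):
    """Assumed combination of the discriminator BCE loss and the classifier
    Poly-1 loss; disc_targets is a float tensor of labeled/unlabeled labels Y."""
    l_bce = F.binary_cross_entropy_with_logits(disc_logits, disc_targets)
    l_poly = poly1_loss(cls_logits, cls_targets, eps1)
    return lambda1 * l_bce + lambda2 * l_poly
```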
Step S4: using a query strategy to select a batch of samples from the unlabeled sample set U1 for manual labeling, and adding the newly labeled samples to the initial labeled sample set D1 to form a new labeled sample set D2.
The method for selecting samples in step S4 to form the new labeled sample set D2 is:
using the trained discriminator to select, from the unlabeled sample set Un of the current round, the 5k samples that the discriminator identifies as unlabeled with the highest probability, where k is the number of samples selected in each round of active learning; then clustering these 5k samples into k classes with the KMeans clustering method, selecting the k samples closest to the cluster centers for labeling, and adding the labeled samples to the initial labeled sample set D1 to form the new labeled sample set D2.
Step S5: repeating steps S2-S4 until all samples in the unlabeled sample set U1 have been labeled, or the number of labeled samples reaches 20% of the total number of samples in the initial labeled sample set D1 and the unlabeled sample set U1.
Example 1
The size of a remote sensing image is H′ rows × W′ columns with C′ channels. Taking the AID and NWPU data sets as examples: the AID data set has 10000 images covering 30 scene categories, each category containing roughly 220 to 420 images, and each image has 600 rows × 600 columns and 3 channels; the NWPU data set has 31500 images covering 45 scene categories, with 700 images per category, and each image has 256 rows × 256 columns and 3 channels. Each data set is divided into a training set and a test set at a ratio of 8:2: the AID training set contains 8000 images and its test set 2000 images; the NWPU training set contains 25200 images and its test set 6300 images. The training set of a data set is defined as the sample set.
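As a minimal sketch, the 8:2 split described above could be produced with a per-class (stratified) split such as the following; the use of scikit-learn and a fixed random seed are assumptions, not part of the disclosure.

```python
from sklearn.model_selection import train_test_split

def split_dataset(image_paths, labels, test_ratio=0.2, seed=0):
    """Stratified 8:2 train/test split, as assumed for the AID and NWPU data sets."""
    return train_test_split(image_paths, labels, test_size=test_ratio,
                            stratify=labels, random_state=seed)

# e.g. AID: 10000 images -> 8000 train / 2000 test; NWPU: 31500 -> 25200 / 6300
```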
Step S1: selecting the initial labeled sample set D1
Randomly select 10% of the samples (800 images in the case of the AID training set) from the sample set for manual labeling, and define the labeled samples as the initial labeled sample set D1; the initial labeled sample set D1 serves as the training set for the first round of active learning.
Step S2: feature extraction
The SE-ResNet network is selected as the feature extractor to complete feature extraction; the structure of SE-ResNet is shown in FIG. 3.
SE-ResNet adds an SE (Squeeze-and-Excitation) module to each residual mapping of ResNet. The SE module has two operations, squeeze and excitation. First, the input X is passed through the convolution operation F_tr to obtain B, where X ∈ R^(600×600×3) and B ∈ R^(H×W×C). The residual operation F_re is then applied to B to obtain P, P ∈ R^(H×W×C), P = [p_1, p_2, ..., p_C], where p_k is the k-th element (channel) of P.
The squeeze operation is performed on P; it is a global average pooling whose result is V, V ∈ R^(1×1×C). The k-th element v_k of V is represented as:
v_k = F_sq(p_k) = (1/(H×W)) · Σ_{i=1..H} Σ_{j=1..W} p_k(i,j) (4)
wherein p_k(i,j) is the value of p_k at the point (i,j).
The excitation operation is then performed on V; its result is M, M ∈ R^(1×1×C), and m_k denotes the k-th element of M. The excitation process can be expressed as:
m_k = F_ex(v_k, W) = σ(g(v_k, W)) = σ(W_2·δ(W_1·v_k)) (5)
wherein σ denotes the Sigmoid function, δ denotes the ReLU function, W_1 ∈ R^((C/r)×C) and W_2 ∈ R^(C×(C/r)), and the parameter r is used to reduce the dimension of the fully connected layers.
Finally, P and M are combined by the scale operation and the result is added to B to obtain the output feature map X̃, X̃ ∈ R^(H×W×C). The k-th element x̃_k of X̃ can be represented as:
x̃_k = F_scale(p_k, m_k) + b_k = m_k·p_k + b_k (6)
wherein b_k is the k-th element of B.
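A compact PyTorch sketch of one SE-ResNet residual block implementing equations (4)-(6) is given below; the convolution sizes and the reduction ratio r=16 are assumptions, not values disclosed in the patent.

```python
import torch.nn as nn

class SEResidualBlock(nn.Module):
    """Residual convs F_re, squeeze (global average pooling, Eq. 4), excitation
    (two FC layers with reduction r, Eq. 5), channel-wise scaling plus the
    identity B (Eq. 6). Layer sizes are assumptions."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.residual = nn.Sequential(                        # F_re
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.squeeze = nn.AdaptiveAvgPool2d(1)                # F_sq: V, Eq. (4)
        self.excite = nn.Sequential(                          # F_ex: M, Eq. (5)
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, b):
        p = self.residual(b)                                  # P = F_re(B)
        v = self.squeeze(p).flatten(1)                        # V, shape (N, C)
        m = self.excite(v).view(b.size(0), -1, 1, 1)          # M, shape (N, C, 1, 1)
        # Eq. (6): scale P channel-wise by M and add B (a ReLU often follows in practice)
        return m * p + b
```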
step S3:
The labeled sample feature map and the unlabeled sample feature map obtained by feature extraction are input into a branch network comprising two branches: one branch is a discriminator that distinguishes labeled samples from unlabeled samples by minimizing the binary cross-entropy loss L_BCE; the other branch is a classifier that is trained as the remote sensing image scene classifier by minimizing the classification loss L_Poly-1.
The branch network is jointly trained using equation (2); its loss function is:
L = λ1·L_BCE + λ2·L_Poly-1 (2)
L_Poly-1 = -log(P_t) + ε_1·(1-P_t) (3)
wherein Y is the label of an input feature map indicating whether it comes from the labeled or the unlabeled sample set, the weight coefficients λ1 and λ2 have a value range of [0,1], L_BCE is the loss function of the discriminator, L_Poly-1 is the loss function of the classifier, and P_t is the model's predicted probability of the true category of the target.
In equation (2), the weight coefficient λ1 is set to 1 and λ2 is set to 0.5; when distinguishing between labeled and unlabeled samples, λ2 is set to 1.
Step S4:
The method for selecting samples to form the new labeled sample set D2 is:
using the trained discriminator to select, from the unlabeled sample set U1, the 5k samples that the discriminator identifies as unlabeled with the highest probability, where k is the number of samples selected in each round of active learning; then clustering these 5k samples into k classes with the KMeans clustering method, selecting the k samples closest to the cluster centers for labeling, and adding the labeled samples to the labeled sample set D1 to form the new labeled sample set D2.
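A Python sketch of this hybrid query strategy follows; disc_scores is assumed to be the discriminator's predicted probability that each unlabeled sample is unlabeled, and features are the corresponding feature vectors, both as NumPy arrays aligned with unlabeled_ids.

```python
import numpy as np
from sklearn.cluster import KMeans

def query_samples(disc_scores, features, unlabeled_ids, k):
    """Hybrid query sketch: take the 5k samples the discriminator most confidently
    recognises as unlabeled (uncertainty), cluster them into k groups with KMeans,
    and return the sample nearest to each cluster centre (diversity)."""
    order = np.argsort(-disc_scores)[:5 * k]             # top-5k most "unlabeled-like"
    cand_ids = np.asarray(unlabeled_ids)[order]
    cand_feats = features[order]
    km = KMeans(n_clusters=k, n_init=10).fit(cand_feats)
    picked = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(cand_feats[members] - km.cluster_centers_[c], axis=1)
        picked.append(cand_ids[members[np.argmin(dist)]])  # sample closest to the centre
    return picked
```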
Step S5: repeat steps S2-S4 until all samples in the unlabeled sample set U1 have been labeled, or the number of labeled samples reaches 20% of the total number of samples in the initial labeled sample set D1 and the unlabeled sample set U1. The repetition stops once the expected target is reached or all samples have been labeled, giving the result of the final round of active learning.
Experiments comparing this embodiment with other classical active learning methods are shown in Table 1 and Table 2. Three representative active learning methods, LLoss, CoreSet and TOD, together with a Random baseline, were selected to verify the performance of this embodiment. All experiments were performed under Python 3.7, with an Intel Core(TM) i9-10980XE CPU and an NVIDIA GeForce RTX 2080 Ti GPU.
Table 1 experimental results on AID data set
Table 2 experimental results on NWPU dataset
As can be seen from Tables 1 and 2, the TOD and LLoss methods are less accurate than the Random method in most cases, possibly because these two sample-selection strategies for active learning are designed mainly for the distribution of natural images and generalize poorly, so their performance drops when transplanted to remote sensing images; the precision of the CoreSet method is slightly higher than that of Random, indicating that CoreSet generalizes better and remains effective when transplanted to the field of remote sensing image scene classification.
The method of this embodiment achieved the best results in each round of active learning; when using 20% labeled samples it reached an accuracy of 83.4% on the AID dataset, 10.45% higher than Random, 6.42% higher than CoreSet, 13.2% higher than TOD, and 18.52% higher than LLoss.
This embodiment reached an accuracy of 85.61% on the NWPU dataset, 11.59% higher than Random, 5.25% higher than CoreSet, 11.72% higher than TOD, and 20.37% higher than LLoss. The invention uses a hybrid query strategy combining diversity and uncertainty, which improves the diversity of the selected samples while still selecting information-rich samples; semi-supervised learning is introduced into active learning, and the information in unlabeled samples effectively improves the classification precision. The experiments show that the invention improves classification precision while reducing the number of labeled samples, demonstrating the effectiveness of active learning in the field of remote sensing image scene classification.
The invention jointly trains a classifier and a discriminator and adds a loss term related to unlabeled samples (namely, the discriminator loss L_BCE), thereby introducing semi-supervised learning into active learning and improving the precision of classification training.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (3)

1. A remote sensing image scene classification method based on active learning is characterized by comprising the following steps:
step S1: randomly selecting a batch of samples from the sample set for manual labeling, the labeled samples forming an initial labeled sample set D1 and the remaining unlabeled samples forming an unlabeled sample set U1;
Step S2: marking the initial sample set D 1 And unlabeled sample set U 1 Inputting the data into a convolutional neural network for feature extraction to obtain a labeled sample feature map and an unlabeled sample feature map,
step S3: inputting the marked sample feature map and the unmarked sample feature map into a branch network for training, wherein the branch network comprises a discriminator and a classifier,
inputting the labeled sample feature map into the classifier for training, and inputting the labeled sample feature map and the unlabeled sample feature map into the discriminator for training,
the loss function of the branch network is:
L = λ1·L_BCE + λ2·L_Poly-1
wherein L_Poly-1 = -log(P_t) + ε_1·(1-P_t),
Y is the label of an input feature map indicating whether it comes from the labeled or the unlabeled sample set, the weight coefficients λ1 and λ2 have a value range of [0,1], L_BCE is the loss function of the discriminator, L_Poly-1 is the loss function of the classifier, and P_t is the model's predicted probability of the true category of the target;
step S4: using a query strategy to select a batch of samples from the unlabeled sample set U1 for manual labeling, and adding the newly labeled samples to the initial labeled sample set D1 to form a new labeled sample set D2;
step S5: repeating steps S2-S4 until all samples in the unlabeled sample set U1 have been labeled, or the total number of labeled samples reaches 20% of the total number of samples in the initial labeled sample set D1 and the unlabeled sample set U1.
2. The method for classifying remote sensing image scenes based on active learning of claim 1, wherein the convolutional neural network in step S2 comprises a first convolutional module, a second residual network, a third residual network, a fourth residual network, a fifth residual network and a fully-connected layer which are connected in sequence, the first convolutional module comprises a convolutional layer and a pooling layer which are connected in sequence, the second residual network, the third residual network, the fourth residual network and the fifth residual network are all composed of two residual modules which are connected in sequence, and each residual module comprises two convolutional layers.
3. The method for classifying remote sensing image scenes based on active learning according to claim 1 or 2, wherein the method for selecting samples in step S4 to form the new labeled sample set D2 is:
using the trained discriminator to select, from the unlabeled sample set U1, the 5k samples that the discriminator identifies as unlabeled with the highest probability, where k is the number of samples selected in each round of active learning; then clustering these 5k samples into k classes with the KMeans clustering method, selecting the k samples closest to the cluster centers for labeling, and adding the labeled samples to the initial labeled sample set D1 to form the new labeled sample set D2.
CN202210797684.7A 2022-07-06 2022-07-06 Remote sensing image scene classification method based on active learning Active CN115063692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210797684.7A CN115063692B (en) 2022-07-06 2022-07-06 Remote sensing image scene classification method based on active learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210797684.7A CN115063692B (en) 2022-07-06 2022-07-06 Remote sensing image scene classification method based on active learning

Publications (2)

Publication Number Publication Date
CN115063692A true CN115063692A (en) 2022-09-16
CN115063692B CN115063692B (en) 2024-02-27

Family

ID=83204984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210797684.7A Active CN115063692B (en) 2022-07-06 2022-07-06 Remote sensing image scene classification method based on active learning

Country Status (1)

Country Link
CN (1) CN115063692B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019199072A1 (en) * 2018-04-11 2019-10-17 Samsung Electronics Co., Ltd. System and method for active machine learning
CN111414942A (en) * 2020-03-06 2020-07-14 重庆邮电大学 Remote sensing image classification method based on active learning and convolutional neural network
CN112541580A (en) * 2020-12-31 2021-03-23 南京航空航天大学 Semi-supervised domain self-adaption method based on active counterstudy
US20210110147A1 (en) * 2019-08-08 2021-04-15 Nec Laboratories America, Inc. Human detection in scenes
WO2022052367A1 (en) * 2020-09-10 2022-03-17 中国科学院深圳先进技术研究院 Neural network optimization method for remote sensing image classification, and terminal and storage medium
CN114627390A (en) * 2022-05-12 2022-06-14 北京数慧时空信息技术有限公司 Improved active learning remote sensing sample marking method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019199072A1 (en) * 2018-04-11 2019-10-17 Samsung Electronics Co., Ltd. System and method for active machine learning
US20210110147A1 (en) * 2019-08-08 2021-04-15 Nec Laboratories America, Inc. Human detection in scenes
CN111414942A (en) * 2020-03-06 2020-07-14 重庆邮电大学 Remote sensing image classification method based on active learning and convolutional neural network
WO2022052367A1 (en) * 2020-09-10 2022-03-17 中国科学院深圳先进技术研究院 Neural network optimization method for remote sensing image classification, and terminal and storage medium
CN112541580A (en) * 2020-12-31 2021-03-23 南京航空航天大学 Semi-supervised domain self-adaption method based on active counterstudy
CN114627390A (en) * 2022-05-12 2022-06-14 北京数慧时空信息技术有限公司 Improved active learning remote sensing sample marking method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
余广民; 林金堂; 姚剑敏; 严群; 林志贤: "Research on anomaly detection algorithms based on GAN networks" (基于GAN网络的异常检测算法研究), 广播电视网络, no. 04, 15 April 2020 (2020-04-15) *
汪婵; 王磊; 丁西明: "Remote sensing image classification based on pre-clustering and active semi-supervised learning" (基于预聚类和主动半监督学习的遥感影像分类), 湖北第二师范学院学报, no. 02, 15 February 2018 (2018-02-15) *

Also Published As

Publication number Publication date
CN115063692B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN107066559B (en) Three-dimensional model retrieval method based on deep learning
CN109840560B (en) Image classification method based on clustering in capsule network
CN108090472B (en) Pedestrian re-identification method and system based on multi-channel consistency characteristics
CN112633382B (en) Method and system for classifying few sample images based on mutual neighbor
CN108446312B (en) Optical remote sensing image retrieval method based on deep convolution semantic net
CN103955702A (en) SAR image terrain classification method based on depth RBF network
CN112668630B (en) Lightweight image classification method, system and equipment based on model pruning
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN111652273B (en) Deep learning-based RGB-D image classification method
CN106557579A (en) A kind of vehicle model searching system and method based on convolutional neural networks
CN103177265B (en) High-definition image classification method based on kernel function Yu sparse coding
CN104820841B (en) Hyperspectral classification method based on low order mutual information and spectrum context waveband selection
CN112862015A (en) Paper classification method and system based on hypergraph neural network
CN111738052B (en) Multi-feature fusion hyperspectral remote sensing ground object classification method based on deep learning
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN111680579A (en) Remote sensing image classification method for adaptive weight multi-view metric learning
Zhang et al. Semisupervised center loss for remote sensing image scene classification
CN113032613B (en) Three-dimensional model retrieval method based on interactive attention convolution neural network
CN114579794A (en) Multi-scale fusion landmark image retrieval method and system based on feature consistency suggestion
CN114329031A (en) Fine-grained bird image retrieval method based on graph neural network and deep hash
CN106570514A (en) Automobile wheel hub classification method based on word bag model and support vector machine
CN115063692B (en) Remote sensing image scene classification method based on active learning
CN112990336B (en) Deep three-dimensional point cloud classification network construction method based on competitive attention fusion
CN113011506A (en) Texture image classification method based on depth re-fractal spectrum network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant