CN111428758A - Improved remote sensing image scene classification method based on unsupervised characterization learning - Google Patents

Improved remote sensing image scene classification method based on unsupervised characterization learning

Info

Publication number
CN111428758A
CN111428758A (Application CN202010149937.0A)
Authority
CN
China
Prior art keywords
generator
discriminator
remote sensing
sensing image
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010149937.0A
Other languages
Chinese (zh)
Inventor
罗小波
魏宇帆
胡力心
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202010149937.0A priority Critical patent/CN111428758A/en
Publication of CN111428758A publication Critical patent/CN111428758A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an improved remote sensing image scene classification method based on unsupervised representation learning. Building on the existing generative adversarial network model, it uses the basic theory of WGAN-GP to define the loss functions and training scheme of the generator and the discriminator. A multi-feature fusion layer, added after the discriminator and built with operations such as maximum pooling, extracts the high-level and mid-level feature information of each scene class and feeds the feature gradients back to the generator, so that the generator can produce images close to real samples. Targeting the spatial complexity and spectral characteristics of remote sensing images, the more advanced WGAN-GP model generates stable, diverse, high-quality false sample images of size 256 × 256. Finally, a multi-layer perceptron classifier classifies the features extracted in the multi-feature fusion layer.

Description

Improved remote sensing image scene classification method based on unsupervised characterization learning
Technical Field
The invention belongs to the field of intelligent classification of remote sensing images. The method can generate high-quality false samples for data augmentation from entirely unlabeled samples, and then accomplish a high-performance classification task.
Background
Remote sensing image scene classification is an active research subject in the field of aerial and satellite image analysis; its task is to assign scene images to a set of discrete, meaningful land-use and land-cover (LULC) classes according to image content.
In recent years, deep learning, as a new intelligent method in pattern recognition, has become a hotspot of machine learning research and has been widely applied to the recognition and classification of images, audio, and text. The result of scene classification usually depends on the features extracted from images, but the currently most popular deep learning methods often have limitations; for example, a deep convolutional neural network can extract effective features only on the precondition that a large number of labeled training samples are available. However, labeling unmarked remote sensing images is too costly. To address this problem, some researchers have proposed unsupervised deep learning models that learn reusable features from large-scale unlabeled datasets. Unsupervised feature learning has found widespread application in machine vision, for example the restricted Boltzmann machine (RBM), the sparse autoencoder (SAE), and the deep belief network (DBN). However, due to factors such as complex ground-object types (e.g., densely populated areas, industrial areas) and inter-class similarity (e.g., churches and stadiums, roads and parking lots) in remote sensing image scenes, their recognition accuracy on remote sensing images is not satisfactory.
In a generative adversarial network (GAN), the generator model G produces false samples from random noise (a one-dimensional sequence) in an attempt to fool the discriminator model D, while model D mainly learns to judge whether its input comes from the real data or from the generated data. GANs are an unsupervised learning method that helps address the shortage of labeled training samples: model training requires no inference over hidden variables, and the generator's parameter updates come not directly from data samples but from back-propagation through the discriminator. To make GANs better suited to the image domain, Radford et al. proposed the deep convolutional generative adversarial network (DCGAN), which combines the generative adversarial network with the deep convolutional neural network; this model learns good feature representations of images and then generates high-quality images. Models based on DCGANs, such as MARTA GANs for remote sensing imagery, have since proved their superiority.
Disclosure of Invention
The invention aims to solve the problems in existing research, and provides a more efficient, improved remote sensing image scene classification method based on unsupervised characterization learning, which classifies complex remote sensing images from the viewpoint of unsupervised characterization learning and generates high-quality samples for data expansion.
In view of the above, in order to achieve the above object, the present invention adopts a technical solution that an improved remote sensing image scene classification method based on unsupervised characterization learning includes the following steps:
selecting a remote sensing image scene data set according to requirements, wherein the method is suitable for various remote sensing image scene data sets;
preprocessing the remote sensing image in the data set to obtain data distribution x of a real remote sensing image;
initializing parameters of a model, wherein the model comprises a generator and a discriminator; the input size of the generator and the discriminator in the model is 256 × 256, and the other hyper-parameters include an exponential decay rate β1 set to 0.5 and a decay coefficient β2 set to 0.9;
inputting a random noise z into a generator, and then mapping the noise in a deconvolution neural network forming the generator to obtain a new data distribution G (z);
inputting the data distribution x and G (z) of a real remote sensing image into a discriminator together, judging the two input data by the discriminator respectively, and outputting a probability value; the probability value of the real data is close to 1, the probability value of the generated data is close to 0, which indicates that the confidence coefficient of the data generated by the generator is not high at this moment, the discriminator feeds back the parameters needing to be adjusted in the generated data to the generator, and the generator adjusts and regenerates after receiving the adjusted gradient signal;
connecting the feature maps of the last three layers of the discriminator network together through a maximum pooling operation to serve as a multi-feature fusion layer, extracting the hidden complex spatial and texture characteristics in the feature information of the remote sensing image;
the remote sensing image feature information extracted in the multi-feature fusion layer is input into a multi-layer perceptron classifier (MLP Classifier) formed by a fully connected network to realize classification.
Further, the multi-feature fusion layer feeds back to the generator the feature matching loss and the discriminator's true/false loss for judging whether a sample comes from the real samples, so that the generator can generate false sample images close to the real sample images.
The invention has the following advantages and beneficial effects:
At present, most scene classification algorithms are based on convolutional neural network models and generally extract more abstract feature information by changing the model structure, parameters, classifier, and so on. The invention combines a generative adversarial network with a convolutional neural network to provide a novel unsupervised feature learning model based on the WGAN-GP (Wasserstein GAN with Gradient Penalty) model, in which the Wasserstein distance replaces the original Jensen-Shannon (JS) divergence to measure the distance between the generated sample distribution and the real sample distribution, so that the difference between samples can still be well expressed even when the two sample distributions do not overlap. In particular, the objective function of the original generative adversarial network (GAN) model is optimized: the generator and discriminator losses no longer take the logarithm, and the absolute values of the discriminator parameters are clipped to at most a fixed constant c after each update, making the structure more stable and able to generate clear and diverse high-resolution remote sensing images. Behind the discriminator, the last three layers are connected together through operations such as maximum pooling to serve as a multi-feature layer; this layer fuses high-level and mid-level information and can extract the hidden complex spatial and texture characteristics in the remote sensing image to the maximum extent.
The multi-feature layer designed by the invention not only provides feature information for the classifier, but also feeds back feature matching loss and true and false image loss rate for the generator, so that the generator can generate a false sample image close to a real sample image, thereby achieving the purpose of sample expansion and realizing sample enhancement.
Through the discriminator's convolutional neural network and the feature fusion step over multiple feature layers, the generator can produce false sample images that more closely approximate the real sample distribution, and more abstract deep-level feature information can be extracted; this information can be stored in tensor form and supports the subsequent classifier training.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flow chart of an improved remote sensing image scene classification method based on unsupervised characterization learning according to the present invention;
FIG. 2 is a network architecture diagram of a generator in the model of the present invention;
FIG. 3 is a network architecture diagram of discriminators and classifiers in the model of the invention.
Detailed Description
The technical solution in the embodiment of the present invention will be clearly described below with reference to the accompanying drawings in the embodiment of the present invention. The described examples are only a few of the embodiments of the present invention.
The method is oriented to research on remote sensing image scene classification, realizing sample expansion and feature extraction based on unsupervised characterization learning and generative adversarial networks. Three data sets, UC Merced, AID, and NWPU-RESISC45, are used for experiments and verification; the code and processing flow are implemented on the PyTorch platform under the Ubuntu 16.04 operating system, with an experimental configuration of a 3.70 GHz 8-core i7-8700k CPU and an NVIDIA GTX 1080 GPU. The invention provides a novel unsupervised feature learning model based on the WGAN-GP model (as shown in FIG. 1), which optimizes the objective function of the original generative adversarial network model, has a more stable structure, and can generate clear and diverse high-resolution remote sensing images. Behind the discriminator, the feature maps of its last three layers are connected together through a maximum pooling operation to serve as a multi-feature layer; this layer integrates high-level and mid-level information and can extract the hidden complex spatial and texture characteristics in the remote sensing image to the maximum extent. Besides providing feature information for the classifier, the multi-feature layer also feeds back the feature matching loss and the true/false image loss rate to the generator, enabling the generator to produce false sample images that approximate real sample images. The invention uses a multi-layer perceptron classifier composed of fully connected networks to implement the classification function. Experiments on the three data sets show that the model generates clearer and more diverse images and achieves higher classification performance than other unsupervised generative models.
FIG. 1 of the present invention shows a flow chart of the method of the present invention, and details of the training process of the whole algorithm and the design of the generator, the discriminator and the classifier in the model are respectively described below:
(1) All experiments are based on the PyTorch platform. The batch size during model training is 64, and 300 epochs are trained in total. LeakyReLU activation functions are used for all convolutional layers in the discriminator, while ReLU activation functions are used for all layers of the generator except the output layer. The learning rate is set to 0.0002; the other hyper-parameters include an exponential decay rate β1 set to 0.5 and a decay coefficient β2 set to 0.9. The generator and the discriminator update their parameters in turn, the generator being updated once for every two updates of the discriminator, which makes the model more stable and avoids collapse of the generator loss.
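The alternating schedule above can be sketched as a simple counter loop (the step count of 100 here is illustrative only, not the 300-epoch training run described in the text):

```python
d_updates = g_updates = 0
n_critic = 2  # the discriminator is updated twice per generator update

for step in range(100):
    d_updates += 1                 # one discriminator update every iteration
    if d_updates % n_critic == 0:  # generator updated every n_critic steps
        g_updates += 1

print(d_updates, g_updates)  # 100 50
```

This keeps the discriminator slightly ahead of the generator, which the text credits with stabilizing training.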
(2) The invention uses six transposed-convolution operations to turn random noise into an RGB remote sensing image of size 256 × 256. Each transposed convolution is an up-sampling step; the deconvolution kernel size is set to 4 × 4 and the stride to 2.
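The 256 × 256 output size is consistent with the standard transposed-convolution size formula, assuming a padding of 1 and an initial 4 × 4 spatial projection of the noise vector (neither of which the text states explicitly — both are assumptions for illustration):

```python
def deconv_out(n, kernel=4, stride=2, pad=1):
    # standard transposed-convolution output size: (n - 1)*stride - 2*pad + kernel
    return (n - 1) * stride - 2 * pad + kernel

size = 4        # assumed initial 4x4 projection of the noise
sizes = [size]
for _ in range(6):          # six transposed convolutions, each doubling the size
    size = deconv_out(size)
    sizes.append(size)

print(sizes)  # [4, 8, 16, 32, 64, 128, 256]
```

With kernel 4, stride 2, and padding 1, each layer exactly doubles the spatial size, so six layers take 4 × 4 up to 256 × 256.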
(3) The invention uses six convolutional layers to extract features and to identify whether the input data is false data from the generator or real sample data; the convolution kernel size is set to 5 × 5 and the stride to 2. The multi-feature fusion layer contains the feature information of the last three convolutional layers: maximum pooling operations with kernels of 4 × 4 and 2 × 2 are used to screen the features of the third-from-last and second-from-last convolutional layers respectively, and the results are then concatenated with all feature maps of the last convolutional layer to form the multi-feature fusion layer, thereby fusing high-level and mid-level information and maximally extracting the hidden complex spatial and texture characteristics in the remote sensing image.
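A minimal numpy sketch of this fusion step follows. The channel counts (128/256/512) and spatial sizes (16/8/4) are hypothetical — the text gives only the pooling kernels — but they show how the 4 × 4 and 2 × 2 pools align the three maps before concatenation:

```python
import numpy as np

def max_pool(x, k):
    # non-overlapping max pooling on a (C, H, W) map; H and W divisible by k
    C, H, W = x.shape
    return x.reshape(C, H // k, k, W // k, k).max(axis=(2, 4))

# hypothetical feature maps of the last three convolutional layers
f3 = np.random.rand(128, 16, 16)  # third-from-last layer
f2 = np.random.rand(256, 8, 8)    # second-from-last layer
f1 = np.random.rand(512, 4, 4)    # last layer

# pool f3 with 4x4 and f2 with 2x2 so all maps are 4x4, then stack channels
fused = np.concatenate([max_pool(f3, 4), max_pool(f2, 2), f1], axis=0)
print(fused.shape)  # (896, 4, 4)
```

The fused tensor can then be flattened and passed to the MLP classifier described in the method steps.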
(4) A multidimensional random noise z is first input to the generator, and the noise is then mapped to a new data distribution G(z) in the neural network that constitutes the generator. The overall formula of the generative adversarial network can be expressed as follows:
$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

where $p_{data}(x)$ represents the distribution of real samples and $p_z(z)$ represents the distribution of the random noise z that generates the false samples. The first term on the right ensures that the discriminator can make a correct judgment on the real training images (its output D(x) is close to 1); the purpose of the second term is to make the generated image G(z) as authentic as possible, so that the discriminator cannot distinguish it from a real image. D and G denote the discriminator and the generator; $\min_G \max_D$ expresses the adversarial game between them, with the generator minimizing and the discriminator maximizing the value function. x represents the real samples and z the random multidimensional noise; D(x) and G(z) respectively denote the discriminator's output on real samples and the false samples output by the generator from the random noise. $\mathbb{E}_{x \sim p_{data}(x)}[\log D(x)]$ is the expected value over real data, and $\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$ is the expected value of the probability that the sample came from the generator; D(G(z)) denotes the case where the sample received by the discriminator is a false sample from the generator.
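As a numerical check of the formula, the value function can be evaluated on a batch of discriminator outputs. At the theoretical equilibrium described below, where D outputs 0.5 everywhere, V(D,G) equals −2 log 2 (a standard property of this objective, not stated in the text):

```python
import math

def gan_value(d_real, d_fake):
    # V(D,G) = E[log D(x)] + E[log(1 - D(G(z)))], estimated from batches
    ex = sum(math.log(p) for p in d_real) / len(d_real)
    ez = sum(math.log(1.0 - q) for q in d_fake) / len(d_fake)
    return ex + ez

# discriminator at equilibrium: D(x) = D(G(z)) = 0.5
v = gan_value([0.5, 0.5], [0.5, 0.5])
print(round(v, 4))  # -1.3863, i.e. -2*log(2)
```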
The training process of the generative adversarial network can be understood as follows: the parameter updates of G do not come directly from the samples, which avoids the limitation, present in other generative models, of performing maximum likelihood estimation on the real samples. In the ideal state, the generator finally produces false samples with the same distribution as the real samples, the accuracy of the discriminator settles around 0.5, and a Nash equilibrium is reached; at this point both the generator and the discriminator have learned the distribution and characteristics of the samples. The loss function of the generator can therefore be defined as minimizing:

$$L(G)_{wgan} = -\mathbb{E}_{z \sim p(z)}[D(z)]$$

where D(z) denotes the discriminator's output on the false samples generated from random noise, $\mathbb{E}_{z \sim p(z)}[D(z)]$ is the expected value of that output when the samples received by the discriminator are generated from the random noise by the generator, and $L(G)_{wgan}$ represents the loss function of the generator.
(5) The generator turns the multidimensional random noise into a distribution similar to the real samples through transposed-convolution operations, while the discriminator is a binary classifier that distinguishes whether an input image is a real sample or a generated false sample. In the classification task, the invention uses the discriminator model as a feature extractor, while the generator model provides additional training data so that the discriminator can better learn the image features. The weights of the generator are fixed when training the discriminator. The loss function of the discriminator is therefore:
$$L(D) = \mathbb{E}_{z \sim p(z)}[D(G(z))] - \mathbb{E}_{x \sim p_{data}(x)}[D(x)] + \lambda\, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]$$

where λ is the penalty coefficient and $\nabla_{\hat{x}} D(\hat{x})$ represents the gradient of the discriminator with respect to the sample $\hat{x}$. The WGAN-GP model theoretically solves the instability of training the original generative adversarial network caused by vanishing generator gradients and has strong stability, so the method uses a theoretical model based on WGAN-GP.
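Given critic scores and gradient norms for a batch, the two WGAN-GP losses reduce to simple averages. The sketch below takes the gradient norms as a precomputed input (in practice they come from automatic differentiation) and uses λ = 10, a commonly used default that the text does not specify:

```python
import numpy as np

def d_loss_wgan_gp(d_real, d_fake, grad_norms, lam=10.0):
    # E[D(G(z))] - E[D(x)] + lam * E[(||grad D||_2 - 1)^2]
    return float(d_fake.mean() - d_real.mean()
                 + lam * ((grad_norms - 1.0) ** 2).mean())

def g_loss_wgan(d_fake):
    # generator loss: -E[D(G(z))]
    return float(-d_fake.mean())

# a critic scoring real samples 1 and fakes 0, with unit gradient norm,
# incurs no penalty, so the loss is just 0 - 1 = -1
loss = d_loss_wgan_gp(np.ones(4), np.zeros(4), np.ones(4))
print(loss)  # -1.0
```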
(6) Behind the discriminator, the feature maps of its last three layers are connected together through maximum pooling operations to serve as a multi-feature fusion layer, which fuses high-level and mid-level information and can extract the hidden complex spatial and texture characteristics in the remote sensing image to the maximum extent. The multi-feature layer not only provides feature information for the classifier but also feeds back the feature matching loss and the true/false loss rate to the generator. The true/false loss rate is the gradient signal fed back to the generator by the discriminator loss function; its purpose is to output an expected value and judge whether the input image sample comes from a real sample or a false sample produced by the generator. If the probability value for real data is close to 1 and that for generated data is close to 0, the confidence in the data produced by the generator is still low; the discriminator then feeds the parameters that need adjusting back to the generator, which adjusts and regenerates after receiving the gradient signal. To make the images produced by the generator more like real images, the invention combines the expected values of the feature-matching layer in the discriminator with the generator loss function of the original WGAN: a feature-matching term is added while training the generator, matching the expected values of the features in the discriminator's multi-feature layer. The feature matching loss is therefore defined as:
$$L_{fm} = \big\| \mathbb{E}_{x \sim p_{data}(x)}[f(x)] - \mathbb{E}_{z \sim p(z)}[f(G(z))] \big\|_2^2$$

where f(x) represents the activations at the multi-feature layer of the discriminator, which feeds the expected values of the features of the unlabeled real sample data back to the generator, enabling the generator to generate feature images close to the unlabeled real samples. Since the loss function of the original WGAN generator is $L_{wgan} = -\mathbb{E}_{z \sim p(z)}[D(z)]$, the final generator loss function of the model in the invention is defined as follows:

$$L(G) = -\mathbb{E}_{z \sim p(z)}[D(z)] + \big\| \mathbb{E}_{x \sim p_{data}(x)}[f(x)] - \mathbb{E}_{z \sim p(z)}[f(G(z))] \big\|_2^2$$
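A numpy sketch of the combined generator objective: the feature matching term compares batch-averaged multi-feature-layer activations of real and generated samples, and is zero when they match exactly. The batch size and feature width below are hypothetical:

```python
import numpy as np

def feature_matching_loss(f_real, f_fake):
    # || E[f(x)] - E[f(G(z))] ||_2^2 over batch-averaged activations
    return float(np.sum((f_real.mean(axis=0) - f_fake.mean(axis=0)) ** 2))

def generator_loss(d_fake, f_real, f_fake):
    # L(G) = -E[D(G(z))] + feature matching term
    return float(-d_fake.mean()) + feature_matching_loss(f_real, f_fake)

f = np.ones((8, 896))  # hypothetical fused feature vectors, batch of 8
print(feature_matching_loss(f, f))  # 0.0 when real and fake features coincide
```

The feedback of this loss to the generator is what the description calls "sample enhancement": the generator is pushed toward images whose mid- and high-level features statistically match those of real scenes.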
(7) Experiments on multiple data sets show that the model of the invention can generate clearer and more diverse images than other unsupervised generative models (such as MARTA GANs), with higher classification performance.
The above examples are to be construed as merely illustrative, and not limitative of the remainder of the disclosure in any way whatsoever. Various changes or modifications equivalent to those made according to the present invention also fall within the scope of the present invention defined by the appended claims.

Claims (8)

1. An improved remote sensing image scene classification method based on unsupervised characterization learning is characterized by comprising the following steps:
preprocessing the remote sensing image in the data set to obtain data distribution x of a real remote sensing image;
initializing parameters of a model, the model comprising a generator and a discriminator;
inputting a random noise z into a generator, and then mapping the noise in a deconvolution neural network forming the generator to obtain a new data distribution G (z);
inputting the data distribution x and G (z) of a real remote sensing image into a discriminator together, judging the two input data by the discriminator respectively, and outputting a probability value;
connecting the feature maps of the last three layers of the discriminator network together through a maximum pooling operation to serve as a multi-feature fusion layer, and extracting the feature information of the remote sensing image;
the remote sensing image characteristic information extracted from the multi-characteristic fusion layer is input into a multi-layer perceptron classifier formed by a full-connection network to realize classification.
2. The improved remote sensing image scene classification method based on unsupervised characterization learning as claimed in claim 1, wherein: the multi-feature fusion layer also feeds back to the generator the feature matching loss and the discriminator's true/false loss for judging whether a sample comes from the real samples.
3. The improved remote sensing image scene classification method based on unsupervised characterization learning according to claim 1 or 2, characterized in that: the overall formula in the generator for generating the countermeasure network can be expressed as follows:
$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

wherein $p_{data}(x)$ represents the distribution of real samples, $p_z(z)$ denotes the distribution of the random noise z generating false samples, and D and G denote the discriminator and the generator, respectively; $\min_G \max_D$ expresses the adversarial game between the generator and the discriminator, minimizing the expected value of the generator and maximizing that of the discriminator; x represents the real samples and z the random multidimensional noise; D(x) and G(z) respectively represent the discriminator's output on real samples and the false samples output by the generator from the random noise; $\mathbb{E}_{x \sim p_{data}(x)}[\log D(x)]$ is the expected value over the real data, and $\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$ is the expected value of the probability that the sample came from the generator.
4. The improved remote sensing image scene classification method based on unsupervised characterization learning, characterized in that: the loss function of the generator is defined to minimize the following equation:
$$L(G)_{wgan} = -\mathbb{E}_{z \sim p(z)}[D(z)]$$

wherein D(z) denotes the discriminator's output on the false samples generated from random noise, $\mathbb{E}_{z \sim p(z)}[D(z)]$ represents the expected value of that output when the samples received by the discriminator are generated from the random noise by the generator, and $L(G)_{wgan}$ represents the loss function of the generator.
5. The improved remote sensing image scene classification method based on unsupervised characterization learning according to claim 1 or 2, characterized in that: the generator generates the multi-dimensional random noise to be distributed similar to real samples through a deconvolution neural network, and the discriminator is a two-classifier so as to distinguish whether the input image is the real sample or the generated false sample.
6. The improved remote sensing image scene classification method based on unsupervised characterization learning according to claim 5, characterized in that: fixing weights of the generators in training the discriminators; the loss function of the discriminator is then:
$$L(D) = \mathbb{E}_{z \sim p(z)}[D(G(z))] - \mathbb{E}_{x \sim p_{data}(x)}[D(x)] + \lambda\, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]$$

wherein λ is a penalty coefficient and $\nabla_{\hat{x}} D(\hat{x})$ represents the gradient function.
7. The improved remote sensing image scene classification method based on unsupervised characterization learning according to claim 2 or 6, characterized in that: the feature matching penalty is defined as follows:
$$L_{fm} = \big\| \mathbb{E}_{x \sim p_{data}(x)}[f(x)] - \mathbb{E}_{z \sim p(z)}[f(G(z))] \big\|_2^2$$

wherein f(x) represents the activations on the multi-feature layer of the discriminator.
8. The improved remote sensing image scene classification method based on unsupervised characterization learning, characterized in that: the generator loss function is defined as follows:
$$L(G) = -\mathbb{E}_{z \sim p(z)}[D(z)] + \big\| \mathbb{E}_{x \sim p_{data}(x)}[f(x)] - \mathbb{E}_{z \sim p(z)}[f(G(z))] \big\|_2^2$$
CN202010149937.0A 2020-03-06 2020-03-06 Improved remote sensing image scene classification method based on unsupervised characterization learning Pending CN111428758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010149937.0A CN111428758A (en) 2020-03-06 2020-03-06 Improved remote sensing image scene classification method based on unsupervised characterization learning


Publications (1)

Publication Number Publication Date
CN111428758A true CN111428758A (en) 2020-07-17

Family

ID=71546215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010149937.0A Pending CN111428758A (en) 2020-03-06 2020-03-06 Improved remote sensing image scene classification method based on unsupervised characterization learning

Country Status (1)

Country Link
CN (1) CN111428758A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825235A * 2016-03-16 2016-08-03 Bocom Intelligent Network Technology Co., Ltd. Image recognition method based on deep learning with multiple feature maps
CN107463960A * 2017-08-07 2017-12-12 Shi Linxing Image recognition method and device
CN108805188A * 2018-05-29 2018-11-13 Xuzhou Institute of Technology Image classification method based on feature-recalibration generative adversarial network
CN109784283A * 2019-01-21 2019-05-21 Shaanxi Normal University Remote sensing image target extraction method under a scene recognition task
CN110689086A * 2019-10-08 2020-01-14 Zhengzhou University of Light Industry Semi-supervised high-resolution remote sensing image scene classification method based on generative adversarial networks
CN110717374A * 2019-08-20 2020-01-21 Hohai University Hyperspectral remote sensing image classification method based on an improved multilayer perceptron


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AI搬运工: "Introduction to the WGAN-GP Method", HTTPS://ZHUANLAN.ZHIHU.COM/P/52799555 *
DAOYU LIN et al.: "MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification", ARXIV:1612.08879V3 *
FENG Shuaixing: "Small-Sample Hyperspectral Image Classification Based on Deep Learning", China Master's Theses Full-text Database, Engineering Science and Technology II *
ZHENG Huabin: "The Astonishing Wasserstein GAN", HTTPS://ZHUANLAN.ZHIHU.COM/P/25071913 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149755A * 2020-10-12 2020-12-29 Second Institute of Oceanography, Ministry of Natural Resources Small-sample seabed acoustic image sediment classification method based on deep learning
CN112149755B * 2020-10-12 2022-07-05 Second Institute of Oceanography, Ministry of Natural Resources Small-sample seabed acoustic image sediment classification method based on deep learning
CN113537031A * 2021-07-12 2021-10-22 University of Electronic Science and Technology of China Radar image target recognition method based on multi-discriminator conditional generative adversarial network
CN113537031B * 2021-07-12 2023-04-07 University of Electronic Science and Technology of China Radar image target recognition method based on multi-discriminator conditional generative adversarial network
CN114764880A * 2022-04-02 2022-07-19 Wuhan University of Science and Technology Remote sensing image scene classification method based on multi-component GAN reconstruction
CN114764880B * 2022-04-02 2024-04-26 Wuhan University of Science and Technology Remote sensing image scene classification method based on multi-component GAN reconstruction
CN117292274A * 2023-11-22 2023-12-26 Chengdu University of Information Technology Hyperspectral wetland image classification method based on zero-shot learning with a deep semantic dictionary
CN117292274B * 2023-11-22 2024-01-30 Chengdu University of Information Technology Hyperspectral wetland image classification method based on zero-shot learning with a deep semantic dictionary
CN117741070A * 2024-02-21 2024-03-22 Shandong Duorui Electronic Technology Co., Ltd. Intelligent gas safety detection method based on deep learning
CN117741070B * 2024-02-21 2024-05-03 Shandong Duorui Electronic Technology Co., Ltd. Intelligent gas safety detection method based on deep learning

Similar Documents

Publication Publication Date Title
CN110689086B (en) Semi-supervised high-resolution remote sensing image scene classification method based on generative adversarial networks
CN110414377B (en) Remote sensing image scene classification method based on scale attention network
CN108830296B (en) Improved high-resolution remote sensing image classification method based on deep learning
CN111259905B (en) Feature fusion remote sensing image semantic segmentation method based on downsampling
CN111428758A (en) Improved remote sensing image scene classification method based on unsupervised characterization learning
CN110428428B (en) Image semantic segmentation method, electronic equipment and readable storage medium
CN110728192B (en) High-resolution remote sensing image classification method based on a novel feature pyramid deep network
US10713563B2 (en) Object recognition using a convolutional neural network trained by principal component analysis and repeated spectral clustering
US9558268B2 (en) Method for semantically labeling an image of a scene using recursive context propagation
CN110084108A (en) Pedestrian re-identification system and method based on GAN neural network
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN112949647B (en) Three-dimensional scene description method and device, electronic equipment and storage medium
CN107239759B (en) High-spatial-resolution remote sensing image transfer learning method based on depth features
CN110210431B (en) Point cloud semantic labeling and optimization-based point cloud classification method
CN106354735A (en) Image target searching method and device
CN108537121B (en) Self-adaptive remote sensing scene classification method based on meteorological environment parameter and image information fusion
CN109766934B (en) Image target recognition method based on a deep Gabor network
CN110674685B (en) Human body analysis segmentation model and method based on edge information enhancement
CN113658100A (en) Three-dimensional target object detection method and device, electronic equipment and storage medium
CN112115806B (en) Remote sensing image scene accurate classification method based on Dual-ResNet small sample learning
CN111652273A (en) Deep learning-based RGB-D image classification method
CN115222998B (en) Image classification method
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN115564996A (en) Hyperspectral remote sensing image classification method based on attention union network
CN114782979A (en) Training method and device for pedestrian re-recognition model, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination