CN109948693A - Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network - Google Patents


Info

Publication number
CN109948693A
CN109948693A (application CN201910201106.0A; granted as CN109948693B)
Authority
CN
China
Prior art keywords
layer
discriminator
generator
sample
inputting
Prior art date
Legal status
Granted
Application number
CN201910201106.0A
Other languages
Chinese (zh)
Other versions
CN109948693B
Inventor
张向荣 (Zhang Xiangrong)
焦李成 (Jiao Licheng)
邢珍杰 (Xing Zhenjie)
唐旭 (Tang Xu)
刘芳 (Liu Fang)
侯彪 (Hou Biao)
马文萍 (Ma Wenping)
马晶晶 (Ma Jingjing)
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201910201106.0A
Publication of CN109948693A
Application granted
Publication of CN109948693B
Status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention proposes a hyperspectral image classification method based on superpixel sample expansion and a generative adversarial network, intended to solve the problem that, when few labeled training samples are available, network over-fitting leads to low classification accuracy. It is accomplished by: constructing an initial training set and a test set and expanding them to obtain an extended training set and a candidate test set; building a generative adversarial network composed of a generator and a discriminator; generating fake samples with the generator, and using the discriminator to obtain real/fake prediction labels and class prediction labels for the fake samples and the extended training set; constructing the loss functions of the generator and the discriminator and training them alternately; training a support vector machine; passing the candidate test set through the trained discriminator and support vector machine to obtain a candidate label set; and determining the class label of each test sample from the candidate label set with a maximum-voting algorithm. The invention effectively extracts the spatial features of the hyperspectral image, alleviates the over-fitting problem, and improves classification accuracy; it can be used for terrain classification of hyperspectral images.

Description

Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network
Technical Field
The invention belongs to the technical field of image processing and relates to an image classification method, in particular to a hyperspectral image classification method based on superpixel sample expansion and a generative adversarial network. The method can be used for classifying the ground objects in hyperspectral images.
Background
Compared with a common sensor, a hyperspectral imaging spectrometer has more spectral channels, each covering a narrower wavelength range within a given band. Because its bands are narrow and contiguous, a hyperspectral image has high spectral resolution. Combined with the spatial information of ground objects acquired by the imaging spectrometer, a hyperspectral image therefore simultaneously contains rich one-dimensional spectral information and two-dimensional spatial information. Hyperspectral image classification is one of the hot topics in the field of hyperspectral data applications. To avoid over-fitting, training a deep model requires a large number of labeled samples. However, collecting labels consumes considerable manpower and material resources, which poses a challenge to the application of deep learning in hyperspectral image classification.
Traditional classifiers, such as support vector machines, decision trees, and logistic regression, have been widely used in the field of hyperspectral image classification. These classification methods treat each pixel as an independent unit and consider only its spectral information. However, studies have demonstrated that using spatial information can greatly improve classification performance. Superpixel segmentation is an image preprocessing technique for extracting spatial information, which can provide spatial support for methods that compute region features. Compared with traditional pixel-level processing, superpixel-based methods are better suited to extracting local spatial features, eliminating redundancy in the data, and reducing the computational complexity of subsequent processing.
Recently, a new framework called the generative adversarial network has been applied to hyperspectral image classification. Compared with other deep-learning-based feature extraction methods for hyperspectral images, generative adversarial networks can effectively alleviate the over-fitting problem through a competition strategy when training samples are limited.
Yushi Chen et al., in the paper "Generative Adversarial Networks for Hyperspectral Image Classification" (IEEE Transactions on Geoscience and Remote Sensing), propose a hyperspectral image classification method based on generative adversarial networks. The method proposes two classification architectures, 1D-GAN and 3D-GAN. The 3D-GAN framework uses the spectral and spatial information of the hyperspectral image simultaneously, and comprises the following steps: first, a generative adversarial network for hyperspectral image classification is established; second, three principal components of the hyperspectral image are extracted with a principal component analysis algorithm; finally, the hyperspectral image is divided into a training set and a test set, each sample point of the hyperspectral image is taken as a central pixel, and the 64 × 64 neighborhood of each central sample point is input as a whole to train the generative adversarial network, which is then used for hyperspectral image classification. Although this method exploits the strengths of the generative adversarial network and can extract discriminative features, it does not make good use of spatial information, so the accuracy of the classification result is low.
Bin Pan et al., in the paper "R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method" (IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing), propose a hyperspectral image classification method, R-VCANet, based on PCANet, which makes better use of the spectral-spatial information of the hyperspectral image. The method comprises the following steps: first, the hyperspectral image is smoothed with an RGF-based method; this process also combines spectral-spatial information and can further mine the structural characteristics of the background. Next, features of the hyperspectral image are extracted with VCANet, a simplified deep learning model that evolved from PCANet. The R-VCANet-based approach shows better performance than some state-of-the-art approaches, especially when the available training samples are not abundant. However, this method still has a disadvantage: the number of RGF iterations is not easily determined, so the spectral-spatial information cannot be combined sufficiently, and the classification accuracy is low.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a hyperspectral image classification method based on superpixel sample expansion and a generative adversarial network, with the goal of alleviating network over-fitting when training samples are few and improving the classification accuracy of hyperspectral images through the full combination of spectral-spatial information.
In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:
(1) constructing an initial training set and a test set:
inputting a hyperspectral image H = {h_p}_{p=1}^T, where h_p is the spectral vector formed by the reflectance values of pixel p in each band and T is the total number of pixels in the hyperspectral image; the hyperspectral image H contains c classes of pixels, of which M pixels are labeled and N pixels are unlabeled, and each pixel is a sample;
taking the M labeled pixels as initial training samples to form an initial training set X = {x_i}_{i=1}^M with corresponding class label set L = {l_i}_{i=1}^M, and taking the N unlabeled pixels as test samples to form a test set Y = {y_j}_{j=1}^N, where x_i denotes the i-th initial training sample of the initial training set, l_i denotes the class label of the i-th initial training sample, and y_j denotes the j-th test sample of the test set;
(2) sample expansion is carried out by using a multi-scale superpixel segmentation method based on entropy rate:
(2a) extracting the first principal component from the hyperspectral image H with a principal component analysis algorithm to obtain a principal-component gray map; setting k different segmentation scales, with the number of superpixels at scale q denoted S_q, q = 1, 2, …, k; performing k entropy-rate-based superpixel segmentations of different scales on the principal-component gray map to obtain k segmentation maps G_q = {g_u^q}_{u=1}^{S_q}, each containing S_q superpixel blocks, where g_u^q denotes the u-th superpixel block of segmentation map G_q;
(2b) for segmentation map G_q, forming the set A_i^q of all initial training samples that belong to the same superpixel block as the initial training sample x_i, and average-pooling the set A_i^q to obtain the average-pooled feature x̄_i^q of x_i on segmentation map G_q; in the same way, forming the set B_j^q of all test samples that belong to the same superpixel block as the test sample y_j, and average-pooling the set B_j^q to obtain the average-pooled feature ȳ_j^q of y_j on segmentation map G_q;
(2c) taking the average-pooled feature x̄_i^q of each initial training sample x_i on each segmentation map G_q as an extended training sample, and forming the extended training set X' = {x̄_i^q}; setting the class label of each extended training sample x̄_i^q equal to the class label l_i of x_i to obtain the corresponding extended class label set L'; taking the average-pooled feature ȳ_j^q of each test sample y_j on each segmentation map G_q as a candidate test sample, and forming the candidate test set Y' = {ȳ_j^q};
(3) constructing a generative adversarial network consisting of a generator and a discriminator:
(3a) building a generator comprising two fully connected layers and two stepping convolutional layers, and setting the parameters of each layer;
(3b) building a discriminator comprising three convolutional layers, three max-pooling layers, and four fully connected layers, and setting the parameters of each layer;
(3c) initializing the generator and the discriminator: the weights of the convolutional layers, stepping convolutional layers, and fully connected layers are initialized so that each element follows the normal distribution N(0, 0.02²), and the biases are initialized to tensors in which every element is 0;
(4) randomly sampling from the uniform distribution U(-1, 1) to generate a 62-dimensional noise vector z, and taking class labels from the extended class label set L' to form a label vector y;
(5) inputting the noise vector z and the label vector y into the generator, which outputs a fake hyperspectral-image sample X_fake through its nonlinear mapping;
(6) inputting the fake sample X_fake into the discriminator, which outputs through its nonlinear mapping the real/fake prediction label x_s and class prediction label x_c of the fake sample X_fake;
(7) inputting the extended training set X' into the discriminator, which outputs through its nonlinear mapping the real/fake prediction label x'_s and class prediction label x'_c of the extended training set X';
(8) constructing the loss functions of the generative adversarial network:
(8a) adding the generator loss function L'_G of the auxiliary classifier generative adversarial network and the feature matching loss function L_FM to obtain the generator loss function L_G;
(8b) the discriminator loss function remains the discriminator loss function L_D of the auxiliary classifier generative adversarial network;
(9) Alternate training generators and discriminators:
(9a) initializing the iteration count t to 0 and the maximum number of iterations to 1000;
(9b) updating the generator parameters with a gradient descent method using the generator loss function;
(9c) updating the discriminator parameters with a gradient descent method using the discriminator loss function;
(9d) letting t = t + 1;
(9e) judging whether the number of iterations t equals 1000; if so, the trained discriminator is obtained and (10) is executed; otherwise, returning to (9b);
(10) inputting the extended training set X' into a trained discriminator to obtain a discrimination characteristic, and using the discrimination characteristic to train a support vector machine to obtain a trained support vector machine;
(11) obtaining class labels for all test samples in the test set:
inputting the candidate test set Y' into the trained discriminator and the trained support vector machine to obtain the corresponding candidate label set C = {c_j^q};
for each j ∈ {1, 2, …, N}, using a maximum-voting algorithm to select the most probable label c_j from the subset {c_j^1, …, c_j^k} of the candidate label set C, and taking c_j as the class label of the test sample y_j.
Compared with the prior art, the invention has the following advantages:
First, the invention integrates the spectral-spatial information of the hyperspectral image with a multi-scale superpixel method. It overcomes the problem in the prior art that classification accuracy is limited because only the spectral features of pixels are extracted while their spatial-neighborhood features are not, enhances the feature extraction capability of the network, and improves the accuracy of hyperspectral image classification.
Second, the invention adds a feature matching loss constraint to the generator so that it produces more realistic fake samples, and uses a multi-scale superpixel segmentation method to generate extended training samples. Adding the fake samples and the extended training samples to the sample set increases the number of training samples, which alleviates the low classification accuracy caused in the prior art by network over-fitting on few training samples, and improves accuracy when samples are scarce.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 shows the results of terrain classification of the Pavia University dataset using the present invention and three prior-art classification methods.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
Referring to fig. 1, the specific steps for implementing the present invention are as follows:
step 1, constructing an initial training set and a test set.
Commonly used hyperspectral image datasets include the Pavia University dataset acquired by the ROSIS spectrometer, and the Indian Pines and Salinas datasets acquired by AVIRIS, the airborne visible/infrared imaging spectrometer of the NASA Jet Propulsion Laboratory.
Input a hyperspectral image H = {h_p}_{p=1}^T, where h_p is the spectral vector formed by the reflectance values of pixel p in each band and T is the total number of pixels in the hyperspectral image; the hyperspectral image H contains c classes of pixels, of which M pixels are labeled and N pixels are unlabeled, and each pixel is a sample.
Take the M labeled pixels as initial training samples to form an initial training set X = {x_i}_{i=1}^M with corresponding class label set L = {l_i}_{i=1}^M, and take the N unlabeled pixels as test samples to form a test set Y = {y_j}_{j=1}^N, where x_i denotes the i-th initial training sample of the initial training set, l_i denotes the class label of the i-th initial training sample, and y_j denotes the j-th test sample of the test set.
The M labeled pixels mentioned in this embodiment are selected in moderate numbers from each class of pixels of the hyperspectral image.
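The per-class selection described above can be sketched as follows. This is a minimal illustration, assuming the labels are given as an integer array with negative values marking unlabeled pixels; `per_class` stands in for the per-class count, which the embodiment leaves open:

```python
import numpy as np

def split_labeled_pixels(labels, per_class, rng=None):
    """Pick a fixed number of labeled pixels per class for the initial
    training set; the remaining labeled pixels become the test set.
    'per_class' is an illustrative parameter; the patent only says
    pixels are drawn from each class."""
    rng = rng or np.random.default_rng(0)
    train, test = [], []
    for cls in np.unique(labels[labels >= 0]):
        idx = np.flatnonzero(labels == cls)   # all pixels of this class
        idx = rng.permutation(idx)            # shuffle before splitting
        train.extend(idx[:per_class])
        test.extend(idx[per_class:])
    return np.array(train), np.array(test)
```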
Step 2, performing sample expansion using the entropy-rate-based multi-scale superpixel segmentation method.
Existing superpixel segmentation methods fall mainly into two categories: graph-based algorithms and gradient-descent-based algorithms. Graph-based methods include the N-cut-based algorithm, the SL-operator-based method, the Graph-based algorithm, the entropy-rate-based algorithm, and others; gradient-descent-based algorithms include the watershed-based algorithm, the mean-shift algorithm, the SLIC algorithm, and others. The invention performs superpixel segmentation with the entropy-rate-based superpixel segmentation algorithm, applying segmentations of different scales to the principal-component gray map of the hyperspectral image for sample expansion. The specific method is as follows:
2a) extracting the first principal component from the hyperspectral image H with a principal component analysis algorithm to obtain a principal-component gray map; setting k different segmentation scales, with the number of superpixels at scale q denoted S_q, q = 1, 2, …, k; performing k entropy-rate-based superpixel segmentations of different scales on the principal-component gray map to obtain k segmentation maps G_q = {g_u^q}_{u=1}^{S_q}, each containing S_q superpixel blocks, where g_u^q denotes the u-th superpixel block of segmentation map G_q;
In the present embodiment, 3 segmentation scales are used, with 100, 200, and 300 superpixels respectively; therefore, unless otherwise stated, the number of segmentation scales k described here and later equals 3 and S_1, S_2, S_3 equal 100, 200, 300;
2b) for segmentation map G_q, forming the set A_i^q of all initial training samples that belong to the same superpixel block as the initial training sample x_i, and average-pooling the set A_i^q to obtain the average-pooled feature x̄_i^q of x_i on segmentation map G_q by the following formula:
x̄_i^q = (1/n_u^q) Σ_{x_r ∈ A_i^q} x_r
where x̄_i^q denotes the average-pooled feature of x_i on segmentation map G_q, g_u^q is the superpixel block of segmentation map G_q to which x_i belongs, n_u^q denotes the number of pixels contained in the superpixel block g_u^q, and x_r denotes the r-th initial training sample of the set A_i^q;
In the same way, forming the set B_j^q of all test samples that belong to the same superpixel block as the test sample y_j, and average-pooling the set B_j^q to obtain the average-pooled feature ȳ_j^q of y_j on segmentation map G_q;
2c) taking the average-pooled feature x̄_i^q of each initial training sample x_i on each segmentation map G_q as an extended training sample, and forming the extended training set X' = {x̄_i^q}; setting the class label of each extended training sample x̄_i^q equal to the class label l_i of x_i to obtain the corresponding extended class label set L'; taking the average-pooled feature ȳ_j^q of each test sample y_j on each segmentation map G_q as a candidate test sample, and forming the candidate test set Y' = {ȳ_j^q}.
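Steps 2a)-2c) can be sketched as below. This is a minimal numpy illustration, assuming the k segmentation maps are already available as integer label images; here the average is taken over all pixels of the superpixel block containing the sample, matching the block-size denominator described in step 2b). The function name and argument layout are illustrative:

```python
import numpy as np

def superpixel_pooled_features(pixels, seg_maps, coords):
    """For each sample pixel, average the spectral vectors of all pixels
    sharing its superpixel block, once per segmentation scale.

    pixels   : (H, W, ch) hyperspectral cube
    seg_maps : list of (H, W) integer superpixel label maps (one per scale)
    coords   : list of (row, col) positions of the sample pixels
    Returns  : (len(coords), len(seg_maps), ch) pooled features
    """
    H, W, ch = pixels.shape
    flat = pixels.reshape(-1, ch)
    out = np.empty((len(coords), len(seg_maps), ch))
    for q, seg in enumerate(seg_maps):
        labels = seg.reshape(-1)
        for i, (r, c) in enumerate(coords):
            block = labels == seg[r, c]           # pixels in the same superpixel
            out[i, q] = flat[block].mean(axis=0)  # average pooling over the block
    return out
```

Each sample thus yields k pooled features, which is how one initial sample expands into k extended samples.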
Step 3, constructing a generative adversarial network consisting of a generator and a discriminator.
Common generative adversarial networks include the conditional generative adversarial network (CGAN), the deep convolutional generative adversarial network (DCGAN), the auxiliary classifier generative adversarial network (ACGAN), the least squares generative adversarial network (LSGAN), and others. The invention uses the auxiliary classifier generative adversarial network ACGAN, which consists of a generator and a discriminator and is constructed by the following steps:
3a) the generator is constructed and each layer of parameters is set.
The generator comprises two fully connected layers and two stepping convolutional layers, arranged from left to right as: first fully connected layer, second fully connected layer, first stepping convolutional layer, second stepping convolutional layer. The first fully connected layer has 1024 nodes, and the number of nodes of the second fully connected layer is determined from ch using the round-up operation ⌈·⌉, where ch denotes the dimension of the spectral vector h_p. The first stepping convolutional layer has a 1 × 3 convolution kernel, 1 × 2 stride, and 64 channels; the second stepping convolutional layer has a 1 × 3 convolution kernel, 1 × 2 stride, and 1 channel;
3b) the discriminator is constructed and the parameters of each layer are set.
The discriminator comprises ten layers: three convolutional layers, three max-pooling layers, and four fully connected layers, arranged from left to right as follows:
the first layer is a convolutional layer with a 1 × 3 convolution kernel, 1 × 1 stride, and 64 channels;
the second layer is a max-pooling layer with a 1 × 2 pooling window and 1 × 2 stride;
the third layer is a convolutional layer with a 1 × 3 convolution kernel, 1 × 1 stride, and 128 channels;
the fourth layer is a max-pooling layer with a 1 × 2 pooling window and 1 × 2 stride;
the fifth layer is a convolutional layer with a 1 × 3 convolution kernel, 1 × 1 stride, and 512 channels;
the sixth layer is a max-pooling layer with a 1 × 2 pooling window and 1 × 2 stride;
the seventh layer is a fully connected layer with 1024 nodes;
the eighth layer is a fully connected layer with 64 nodes;
the ninth layer is a fully connected layer with 1 node;
the tenth layer is a fully connected layer with c nodes, where c is the number of classes of the hyperspectral image H.
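Under the common assumption of 'same' padding (so the 1 × 3 convolutions with 1 × 1 stride preserve the spectral length, while each 1 × 2 max-pooling halves it, rounding down), the spectral-axis length entering the fully connected layers can be traced as:

```python
def discriminator_feature_length(ch):
    """Trace the spectral-axis length through the three conv + max-pool
    stages above, assuming 'same' padding for the 1x3 convolutions
    (stride 1 preserves length) and floor division for each 1x2 pooling;
    the patent does not state the padding scheme explicitly."""
    length = ch
    for _ in range(3):        # each conv keeps the length; each pool halves it
        length = length // 2
    return length
```

For the 103-band Pavia University data used in the simulation below, this gives 103 → 51 → 25 → 12.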
Step 4, setting the input of the generator.
The input to the generator comprises two parts, a noise vector z and a label vector y.
Randomly sample a 62-dimensional vector from the uniform distribution U(-1, 1) to form the noise vector z;
take class labels from the extended class label set L' to form the label vector y.
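A minimal sketch of step 4 in numpy; the one-hot encoding of the label vector y is an assumption, as the patent only says that labels are taken from L':

```python
import numpy as np

def make_generator_input(labels, c, noise_dim=62, rng=None):
    """Build the generator input: a noise vector z drawn from U(-1, 1)
    and a one-hot label vector y for each class label (the one-hot
    encoding is an assumption of this sketch)."""
    rng = rng or np.random.default_rng()
    z = rng.uniform(-1.0, 1.0, size=(len(labels), noise_dim))
    y = np.zeros((len(labels), c))
    y[np.arange(len(labels)), labels] = 1.0     # one-hot rows
    return z, y
```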
Step 5, generating the fake sample X_fake with the generator.
The noise vector z and the label vector y are input into the generator, which outputs a fake hyperspectral-image sample X_fake through its nonlinear mapping. The implementation steps are as follows:
First, the noise vector z and the label vector y are input into the first fully connected layer of the generator; after a fully connected transformation and a ReLU transformation in sequence, the first feature map g_1 is output.
Next, the first feature map g_1 is input into the second fully connected layer of the generator; after a fully connected transformation, batch normalization, and a ReLU transformation in sequence, the second feature map g_2 is output.
Then, the second feature map g_2 is input into the first stepping convolutional layer of the generator; after a stepping convolution operation, batch normalization, and a ReLU transformation in sequence, the third feature map g_3 is output.
Finally, the third feature map g_3 is input into the second stepping convolutional layer of the generator; after a stepping convolution operation and a tanh transformation in sequence, a fourth feature map of size 1 × d is output, namely the fake sample X_fake, where d denotes the number of channels of the hyperspectral image.
Step 6, outputting the real/fake prediction label x_s and class prediction label x_c of the fake sample X_fake with the discriminator.
The fake sample X_fake is input into the discriminator, which outputs through its nonlinear mapping the real/fake prediction label x_s and class prediction label x_c of the fake sample. The implementation steps are as follows:
step 1, false sample X is processedfakeThe first layer of convolution layer input to the discriminator is sequentially subjected to convolution operation and Leaky-ReLU transformation to obtain a first output characteristic diagram d1
Step 2, outputting the first output characteristic diagram d1Inputting the second layer of maximum pooling layer to the discriminator to obtain a second output feature map d2
Step 3, outputting the second output characteristic diagram d2Inputting the third layer of convolution layer to the discriminator, and obtaining a third output characteristic diagram d through convolution operation, batch standardization and Leaky-ReLU transformation in sequence3
Step 4, outputting the third output characteristic diagram d3Inputting the data into a fourth maximum pooling layer of the discriminator to obtain a fourth output feature map d4
Step 5, outputting the fourth output characteristic diagram d4Inputting the fifth convolution layer to the discriminator, and sequentially performing convolution operation, batch normalization and Leaky-ReLU transformation to obtain a fifth output characteristic diagram d5
Step 6, outputting a fifth output characteristic diagram d5Input to judgeA sixth maximum pooling layer of the discriminator to obtain a sixth output characteristic diagram d6
Step 7, outputting a sixth output characteristic diagram d6Inputting the data into a seventh fully-connected layer of the discriminator, and sequentially carrying out fully-connected transformation, batch standardization and Leaky-ReLU transformation to obtain a seventh output characteristic diagram d with the size of 1 multiplied by 10247
Step 8, outputting a seventh output characteristic diagram d7Inputting the signals into the eighth and ninth full-link layers of the discriminator to obtain an eighth output feature map d of 1 × 64 size8And a ninth output feature map d of size 1 × 19
Step 9, outputting the eighth output characteristic diagram d8Inputting the data into the tenth fully-connected layer of the discriminator to obtain a tenth output feature map d with the size of 1 × c10
Step 10, a ninth output characteristic diagram d9Carrying out sigmoid transformation to obtain a true and false prediction label x with the size of 1 multiplied by 1sThe tenth output feature map d10After the softmax transformation, a class prediction label x with the size of 1 × c is outputc
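The two output heads of Step 10 can be sketched in numpy as follows (the function name is illustrative):

```python
import numpy as np

def discriminator_heads(d9, d10):
    """Apply the output nonlinearities of Step 10: a sigmoid on the 1x1
    map d9 gives the real/fake score x_s, and a softmax on the 1xc map
    d10 gives the class-probability vector x_c."""
    x_s = 1.0 / (1.0 + np.exp(-d9))
    e = np.exp(d10 - np.max(d10))     # shift for numerical stability
    x_c = e / e.sum()
    return x_s, x_c
```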
Step 7, outputting the real/fake prediction label x'_s and class prediction label x'_c of the extended training set X' with the discriminator.
The extended training set X' is input into the discriminator, which outputs through its nonlinear mapping the real/fake prediction label x'_s and class prediction label x'_c of the extended training set X'. The specific implementation steps are the same as in step 6.
Step 8, constructing the loss functions of the generator and the discriminator.
8a) Loss function of the generator:
8a1) generate a vector y_0 whose number of elements equals that of the label vector y and whose every element equals 0;
8a2) compute the cross entropy L_S between the real/fake prediction label x_s of the fake sample X_fake and the vector y_0;
8a3) compute the cross entropy L_C between the class prediction label x_c of the fake sample X_fake and the label vector y;
8a4) from the results of 8a2) and 8a3), the generator loss function of the auxiliary classifier generative adversarial network is L'_G = L_S + L_C;
8a5) define the feature matching loss function L_FM = ||d_7,fake − d_7,true||² + ||d_8,fake − d_8,true||², where d_7,fake and d_7,true denote the seventh output feature maps of the fake samples and the extended training samples respectively, and d_8,fake and d_8,true denote their eighth output feature maps;
8a6) from the results of 8a4) and 8a5), the generator loss function is L_G = L'_G + L_FM.
8b) The discriminator loss function remains the discriminator loss function L_D of the auxiliary classifier generative adversarial network, expressed as:
L_D = L'_S + L'_C + L_S + L_C
where L'_S denotes the cross entropy between the real/fake prediction label x'_s of the extended training samples and y_1, a vector whose every element equals 1 and whose number of elements equals that of the label vector y; L'_C is the cross entropy between the class prediction label x'_c of the extended training samples and their labels; L_S is the cross entropy between the real/fake prediction label x_s of the fake sample X_fake and the vector y_0; and L_C is the cross entropy between the class prediction label x_c of the fake sample X_fake and the label vector y.
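The cross-entropy and feature matching terms above can be sketched in numpy as follows; `bce`, `cce`, and `feature_matching` are illustrative names, and the squared-L2 form of L_FM follows the definition in 8a5):

```python
import numpy as np

def bce(p, t):
    """Binary cross entropy between predicted probabilities p and targets t
    (used for the real/fake terms L_S and L'_S)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)          # avoid log(0)
    return float(-np.mean(t * np.log(p) + (1 - t) * np.log(1 - p)))

def cce(p, onehot):
    """Categorical cross entropy between softmax outputs and one-hot labels
    (used for the class terms L_C and L'_C)."""
    return float(-np.mean(np.sum(onehot * np.log(np.clip(p, 1e-7, 1.0)), axis=-1)))

def feature_matching(d_fake, d_true):
    """L_FM: squared L2 distance summed over the matched feature maps
    (d_7 and d_8 pairs in the patent's definition)."""
    return float(sum(np.sum((a - b) ** 2) for a, b in zip(d_fake, d_true)))
```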
Step 9, alternately training the generator and the discriminator.
9a) initialize the iteration count t to 0 and the maximum number of iterations to 1000;
9b) update the generator parameters with a gradient descent method using the generator loss function L_G;
9c) update the discriminator parameters with a gradient descent method using the discriminator loss function L_D;
9d) let t = t + 1;
9e) judge whether the number of iterations t equals 1000; if so, the trained discriminator is obtained and step 10 is executed; otherwise, return to 9b).
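The alternation of 9a)-9e) reduces to the following loop skeleton, with the actual gradient-descent updates on L_G and L_D abstracted into caller-supplied functions (a sketch of the control flow only, not of the updates themselves):

```python
def alternating_training(step_G, step_D, theta_G, theta_D, max_iter=1000):
    """Skeleton of step 9: alternate one generator update and one
    discriminator update per iteration until max_iter is reached.
    step_G / step_D are caller-supplied update rules standing in for
    the gradient-descent steps on L_G and L_D."""
    t = 0
    while t < max_iter:
        theta_G = step_G(theta_G, theta_D)   # 9b) update the generator
        theta_D = step_D(theta_G, theta_D)   # 9c) update the discriminator
        t += 1                               # 9d)
    return theta_G, theta_D
```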
Step 10, training the support vector machine.
The extended training set X' is input into the trained discriminator to obtain the ninth output feature map, i.e., the discriminative feature; the discriminative feature is then input into a support vector machine with a radial basis function kernel for nonlinear classification, and a grid search method is used to find the optimal parameters of the support vector machine, yielding the trained support vector machine.
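A sketch of this SVM stage using scikit-learn; the library choice and the grid values shown are assumptions, since the patent names neither a library nor a search range:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_svm(features, labels):
    """Fit an RBF-kernel SVM on the discriminative features, selecting C
    and gamma by grid search as described above. The grid values are
    illustrative; the patent does not list the search range."""
    grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.1, 1.0]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=3)
    search.fit(features, labels)
    return search.best_estimator_
```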
Step 11, the candidate test set Y' is input into the trained discriminator and the trained support vector machine to obtain the corresponding candidate class label set C = {c_j^q}. For each test sample y_j, i.e., for each j ∈ {1, 2, …, N}, a maximum-voting algorithm is used to select the most probable label c_j from the subset {c_j^1, …, c_j^k} of the candidate label set C, and c_j is taken as the class label of the test sample y_j.
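The maximum-voting step can be sketched as below, where each row holds the k candidate labels of one test sample; tie-breaking is an implementation choice the patent leaves open:

```python
from collections import Counter

def majority_vote(candidate_labels):
    """Maximum voting of step 11: for each test sample, pick the label
    that occurs most often among its k candidate labels (one per
    segmentation scale). Ties resolve to the first label reaching the
    top count, an implementation choice the patent leaves open."""
    return [Counter(row).most_common(1)[0][0] for row in candidate_labels]
```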
The technical effects of the invention are further illustrated below in combination with simulation tests:
1. Simulation conditions
The data set used in the present invention is the Pavia University data set of Pavia, Italy, acquired by the ROSIS optical sensor. The data set contains 9 terrain categories in total and has a size of 610 × 340 pixels, with a spatial resolution of 1.3 meters per pixel. After removing the noise and water-vapor absorption bands, the remaining 103 bands were used for the classification experiments.
The simulation platform is: Ubuntu 14.04 operating system, TensorFlow deep learning platform.
In the simulation experiments, the following three prior-art methods are used for comparison:
The hyperspectral image classification method proposed by Pan et al. in "R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 10, no. 5, pp. 1975-1986, May 2017, abbreviated as the R-VCANet method.
The hyperspectral image classification method proposed by Pan et al. in "Hierarchical Guidance Filtering-Based Ensemble Classification for Hyperspectral Images," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 7, pp. 4177-4189, July 2017, abbreviated as the HiFi-We method.
The hyperspectral image classification method proposed by Li et al. in "Hyperspectral Image Classification Using Deep Pixel-Pair Features," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 2, pp. 844-853, Feb. 2017, abbreviated as the PPF-CNN method.
2. Simulation content
The simulation experiment classifies the ground features of the Pavia University hyperspectral image data set using the method of the present invention and the three prior-art methods, R-VCANet, HiFi-We and PPF-CNN; the results are shown in FIG. 2, wherein:
FIG. 2(a) shows the ground-truth terrain map of the input Pavia University hyperspectral image data set;
FIG. 2(b) shows the terrain classification result of the R-VCANet method on the Pavia University data set;
FIG. 2(c) shows the terrain classification result of the HiFi-We method on the Pavia University data set;
FIG. 2(d) shows the terrain classification result of the PPF-CNN method on the Pavia University data set;
FIG. 2(e) shows the terrain classification result of the method of the present invention on the Pavia University data set.
3. Analysis of simulation results
As can be seen from FIG. 2, the three prior-art methods misclassify many samples of the classes Meadows and Asphalt. Compared with the prior art, the present invention achieves higher classification accuracy on these two classes, because the superpixel segmentation algorithm effectively extracts the spatial-spectral features of the hyperspectral image and yields a smoother classification result.
In the analysis of the simulation results, three evaluation indices are adopted, as follows:
OA, the overall accuracy, denotes the proportion of correctly classified samples among all samples; the larger the OA value, the better the classification effect.
AA, the average accuracy, denotes the mean of the per-class classification accuracies; the larger the AA value, the better the classification effect.
Kappa, the Kappa coefficient, measures the agreement between the classification result and the ground truth; the higher the coefficient, the higher the classification accuracy achieved by the model.
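The three indices can be computed from a confusion matrix; the sketch below uses the standard formulations (overall accuracy as the trace ratio, average accuracy as the mean of per-class recalls, Kappa as chance-corrected agreement), with an invented 2×2 matrix for illustration:

```python
import numpy as np

def metrics(conf):
    """OA, AA and Kappa from a confusion matrix (rows: truth, cols: prediction)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                       # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))    # mean per-class accuracy
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / total ** 2
    kappa = (oa - pe) / (1 - pe)                      # chance-corrected agreement
    return oa, aa, kappa

# Invented confusion matrix: 100 samples, 2 classes.
oa, aa, kappa = metrics([[40, 10], [5, 45]])
```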
The classification results in FIG. 2 of the present invention and the three prior-art methods on the Pavia University data set are counted as follows: the classification accuracy of each class of ground feature is shown in Table 1.
The evaluation indices over all classes of ground features, namely the overall accuracy OA, the average accuracy AA and the Kappa coefficient, are shown in Table 2.
TABLE 1 Classification accuracy of each type of ground object
TABLE 2 evaluation indexes of all the ground features
As can be seen from Table 1, the classification accuracy of most ground-feature classes, particularly the class Bricks, is improved: compared with R-VCANet, the accuracy of the present invention on the class Bricks is improved by 7.87%; compared with HiFi-We, by 3.64%; and compared with PPF-CNN, by 12.21%.
As can be seen from Table 2, the overall accuracy OA, the average accuracy AA and the Kappa coefficient of the present invention are all greatly improved. Compared with the R-VCANet method, the present invention improves the OA index by 9.57%, the AA index by 3.72% and the Kappa index by 12.22%; compared with the HiFi-We method, it improves the OA index by 4.14%, the AA index by 1.46% and the Kappa index by 5.35%; compared with PPF-CNN, it improves the OA index by 15.55%, the AA index by 7.87% and the Kappa index by 19.69%.
In conclusion, compared with the prior art, the method has the advantage that the classification accuracy is greatly improved.

Claims (9)

1. A hyperspectral image classification method based on superpixel sample expansion and a generative adversarial network, characterized by comprising the following steps:
(1) constructing an initial training set and a test set:
inputting a hyperspectral image H, in which each pixel point p is represented by a spectral vector h_p formed by its reflection values in each band, and T is the total number of pixel points in the hyperspectral image; the hyperspectral image H comprises c classes of pixel points, among which M pixel points are labeled and N pixel points are unlabeled, and each pixel point is a sample;
taking the M labeled pixel points as initial training samples to form an initial training set, with a corresponding class label set; taking the N unlabeled pixel points as test samples to form a test set; wherein x_i denotes the i-th initial training sample of the initial training set, l_i denotes the class label of the i-th initial training sample, and y_j denotes the j-th test sample of the test set;
(2) sample expansion is carried out by using a multi-scale superpixel segmentation method based on entropy rate:
(2a) extracting the first principal component from the hyperspectral image H by a principal component analysis algorithm to obtain a principal-component gray map; setting k different segmentation scales, the number of superpixels at the q-th scale being S_q, q = 1, 2, …, k; performing k entropy-rate-based superpixel segmentations of different scales on the principal-component gray map to obtain k segmentation maps G_q, each containing S_q superpixel blocks;
(2b) for each segmentation map G_q, forming the set of all initial training samples that belong to the same superpixel block as the initial training sample x_i, and performing average pooling on this set to obtain the average pooling feature of x_i on the segmentation map G_q; in the same way, forming the set of all test samples that belong to the same superpixel block as the test sample y_j, and performing average pooling on this set to obtain the average pooling feature of y_j on the segmentation map G_q;
(2c) taking the average pooling feature of each initial training sample x_i on each segmentation map G_q as an extended training sample, and forming the extended training set X' from these extended training samples; taking the class label of each extended training sample to be identical to the class label l_i of x_i, thereby obtaining the corresponding extended class label set L'; taking the average pooling feature of each test sample y_j on each segmentation map G_q as a candidate test sample, and forming the candidate test set Y' from these candidate test samples;
(3) constructing a generative adversarial network consisting of a generator and a discriminator:
(3a) building a generator comprising two full-connection layers and two stepping convolution layers, and setting parameters of each layer;
(3b) building a discriminator comprising three convolution layers, three maximum pooling layers and four full-connection layers, and setting parameters of each layer;
(3c) initializing the generator and the discriminator, wherein the weights of the convolution layers, the step convolution layers and the fully-connected layers are initialized so that each element value follows the normal distribution N(0, 0.02^2), and the biases are initialized to tensors in which each element value is 0;
(4) randomly sampling from the uniform distribution U(-1, 1) to generate a 62-dimensional noise vector z, and taking category labels from the extended class label set L' to form a label vector y;
(5) inputting the noise vector z and the label vector y into the generator, and outputting a false sample X_fake of the hyperspectral image through the nonlinear mapping of the generator;
(6) inputting the false sample X_fake into the discriminator, and outputting the true/false prediction label x_s and the class prediction label x_c of the false sample X_fake through the nonlinear mapping of the discriminator;
(7) inputting the extended training set X' into the discriminator, and outputting the true/false prediction label x'_s and the class prediction label x'_c of the extended training set X' through the nonlinear mapping of the discriminator;
(8) constructing the loss functions of the generative adversarial network:
(8a) adding the generator loss function L'_G of the auxiliary classifier generative adversarial network and the feature matching loss function L_FM to obtain the loss function L_G of the generator;
(8b) the loss function of the discriminator still uses the discriminator loss function L_D of the auxiliary classifier generative adversarial network;
(9) alternately training the generator and the discriminator:
(9a) initializing the iteration count t to 0 and setting the maximum number of iterations to 1000;
(9b) updating the generator parameters with the loss function of the generator by the gradient descent method;
(9c) updating the discriminator parameters with the loss function of the discriminator by the gradient descent method;
(9d) letting t = t + 1;
(9e) judging whether the iteration count t equals 1000; if so, a trained discriminator is obtained and (10) is executed; otherwise, returning to (9b);
(10) inputting the extended training set X' into the trained discriminator to obtain the discrimination feature, and training a support vector machine with the discrimination feature to obtain a trained support vector machine;
(11) obtaining class labels for all test samples in the test set:
inputting the candidate test set Y' into the trained discriminator and the trained support vector machine to obtain a corresponding candidate label set C;
for each j ∈ {1, 2, …, N}, selecting the most probable label c_j from the corresponding subset of the candidate label set C by a maximum voting algorithm, and taking c_j as the class label of the test sample y_j.
2. The method of claim 1, wherein in (2b) the average pooling of the set of initial training samples belonging to the same superpixel block as x_i, which yields the average pooling feature of x_i on the segmentation map G_q, is performed by the following formula:
feature(x_i, G_q) = (1 / n_u) * Σ_r x_r,
wherein feature(x_i, G_q) denotes the average pooling feature of x_i on the segmentation map G_q, the u-th superpixel block of G_q is the block to which x_i belongs, n_u denotes the number of pixels contained in that superpixel block, x_r denotes the r-th initial training sample in the set, and the sum runs over all samples of the set;
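The per-block averaging of claim 2 can be sketched as follows; the function name, the array of superpixel block indices and the toy spectra are all assumptions for illustration, not part of the claimed method:

```python
import numpy as np

def average_pool(samples, block_ids, i):
    """Mean spectral vector of all samples sharing sample i's superpixel block."""
    block_ids = np.asarray(block_ids)
    members = np.asarray(samples)[block_ids == block_ids[i]]
    return members.mean(axis=0)      # (1 / n_u) * sum over the block's samples

# Toy spectra: 3 samples with 2 bands; samples 0 and 1 share one superpixel block.
X = np.array([[1.0, 2.0], [3.0, 4.0], [10.0, 10.0]])
blocks = [0, 0, 1]                   # superpixel block index of each sample
feat = average_pool(X, blocks, 0)    # pooled feature of sample 0 on this map
```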
3. The method according to claim 1, wherein the generator constructed in (3a) has, from left to right, a first fully-connected layer, a second fully-connected layer, a first step convolution layer and a second step convolution layer; the number of nodes of the first fully-connected layer is 1024, and the number of nodes of the second fully-connected layer is determined by a formula involving the rounding-up operation ⌈·⌉ and ch, where ch denotes the dimension of the spectral vector h_p; the convolution kernel size of the first step convolution layer is 1 × 3, with step size 1 × 2 and 64 channels; the convolution kernel size of the second step convolution layer is 1 × 3, with step size 1 × 2 and 1 channel.
4. The method according to claim 1, wherein the discriminator constructed in (3b) has a 10-layer structure from left to right, namely:
The first layer is a convolution layer, the size of a convolution kernel is 1 multiplied by 3, the step length is 1 multiplied by 1, and the number of channels is 64;
the second layer is a maximum pooling layer, the pooling window is 1 multiplied by 2, and the step length is 1 multiplied by 2;
the third layer is a convolution layer, the size of a convolution kernel is 1 multiplied by 3, the step length is 1 multiplied by 1, and the number of channels is 128;
the fourth layer is a maximum pooling layer, the pooling window is 1 multiplied by 2, and the step length is 1 multiplied by 2;
the fifth layer is a convolution layer, the size of the convolution kernel is 1 multiplied by 3, the step length is 1 multiplied by 1, and the number of channels is 512;
the sixth layer is a maximum pooling layer, the pooling window is 1 × 2, and the step length is 1 × 2;
the seventh layer is a full connection layer, and the number of nodes of the full connection layer is 1024;
the eighth layer is a full connection layer, and the number of nodes of the full connection layer is 64;
the ninth layer is a full connection layer, and the number of nodes of the full connection layer is 1;
and the tenth layer is a full connection layer, the node number of the full connection layer is c, and c is the category number of the hyperspectral image H.
5. The method of claim 1 or 3, wherein the generator in (5) outputs the false sample X_fake of the hyperspectral image as follows:
(5a) inputting the noise vector z and the label vector y into the first fully-connected layer of the generator, sequentially performing the fully-connected transformation and the ReLU transformation, and outputting a first feature map g_1;
(5b) inputting the first feature map g_1 into the second fully-connected layer of the generator, sequentially performing the fully-connected transformation, batch normalization and the ReLU transformation, and outputting a second feature map g_2;
(5c) inputting the second feature map g_2 into the first step convolution layer of the generator, sequentially performing the step convolution operation, batch normalization and the ReLU transformation, and outputting a third feature map g_3;
(5d) inputting the third feature map g_3 into the second step convolution layer of the generator, sequentially performing the step convolution operation and the tanh transformation, and finally outputting a fourth feature map of size 1 × d, namely the false sample X_fake, where d denotes the number of channels of the hyperspectral image.
6. The method according to claim 1 or 4, wherein in (6) the false sample X_fake is input into the discriminator and its true/false prediction label x_s and class prediction label x_c are output through the nonlinear mapping of the discriminator, implemented as follows:
(6a) inputting the false sample X_fake into the first layer (convolution) of the discriminator, sequentially performing the convolution operation and the Leaky-ReLU transformation to obtain a first output feature map d_1;
(6b) inputting the first output feature map d_1 into the second layer (maximum pooling) of the discriminator to obtain a second output feature map d_2;
(6c) inputting the second output feature map d_2 into the third layer (convolution) of the discriminator, sequentially performing the convolution operation, batch normalization and the Leaky-ReLU transformation to obtain a third output feature map d_3;
(6d) inputting the third output feature map d_3 into the fourth layer (maximum pooling) of the discriminator to obtain a fourth output feature map d_4;
(6e) inputting the fourth output feature map d_4 into the fifth layer (convolution) of the discriminator, sequentially performing the convolution operation, batch normalization and the Leaky-ReLU transformation to obtain a fifth output feature map d_5;
(6f) inputting the fifth output feature map d_5 into the sixth layer (maximum pooling) of the discriminator to obtain a sixth output feature map d_6;
(6g) inputting the sixth output feature map d_6 into the seventh, fully-connected layer of the discriminator, sequentially performing the fully-connected transformation, batch normalization and the Leaky-ReLU transformation to obtain a seventh output feature map d_7 of size 1 × 1024;
(6h) inputting the seventh output feature map d_7 into the eighth and ninth fully-connected layers of the discriminator to obtain an eighth output feature map d_8 of size 1 × 64 and a ninth output feature map d_9 of size 1 × 1;
(6i) inputting the eighth output feature map d_8 into the tenth fully-connected layer of the discriminator to obtain a tenth output feature map d_10 of size 1 × c, where c is the number of classes of the hyperspectral image;
(6j) performing the sigmoid transformation on the ninth output feature map d_9 to obtain the true/false prediction label x_s of size 1 × 1, and performing the softmax transformation on the tenth output feature map d_10 to output the class prediction label x_c of size 1 × c;
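The two output heads of (6j) can be sketched in a few lines; d_9 and d_10 are given here as invented NumPy stand-ins for the 1 × 1 and 1 × c feature maps:

```python
import numpy as np

def sigmoid(x):
    """Map real values to (0, 1) for the true/false head."""
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    """Map a 1 x c feature map to a class probability vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

d9 = np.array([0.3])             # stand-in ninth output feature map, size 1 x 1
d10 = np.array([2.0, 1.0, 0.1])  # stand-in tenth output feature map, c = 3
x_s = sigmoid(d9)                # true/false prediction label in (0, 1)
x_c = softmax(d10)               # class prediction label, sums to 1
```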
7. The method according to claim 1 or 6, wherein in (8a) the generator loss function L'_G of the auxiliary classifier generative adversarial network and the feature matching loss function L_FM are added to obtain the loss function L_G of the generator, implemented as follows:
(8a1) generating a vector y_0 whose number of elements equals that of the label vector y and each of whose element values equals 0;
(8a2) calculating the cross entropy L_S between the true/false prediction label x_s of the false sample X_fake and the vector y_0;
(8a3) calculating the cross entropy L_C between the class prediction label x_c of the false sample X_fake and the label vector y;
(8a4) obtaining, from the results of (8a2) and (8a3), the generator loss function in the auxiliary classifier generative adversarial network: L'_G = L_S + L_C;
(8a5) defining the feature matching loss function L_FM = ||d_7,fake - d_7,true||_2 + ||d_8,fake - d_8,true||_2, wherein d_7,fake and d_7,true denote the seventh output feature maps of the false samples and the extended training samples respectively, and d_8,fake and d_8,true denote the eighth output feature maps of the false samples and the extended training samples respectively;
(8a6) obtaining, from the results of (8a4) and (8a5), the loss function of the generator: L_G = L'_G + L_FM.
8. The method of claim 1, 6 or 7, wherein the discriminator loss function L_D of the auxiliary classifier generative adversarial network in (8b) is expressed as follows:
L_D = L'_S + L'_C - L_S + L_C,
wherein L'_S represents the cross entropy between the true/false prediction label x'_s of the extended training samples and y_1, y_1 being a vector whose element values all equal 1 and whose number of elements equals that of the label vector y; L'_C is the cross entropy between the class prediction label x'_c of the extended training set and the label vector y; L_S is the cross entropy between the true/false prediction label x_s of the false sample X_fake and the vector y_0; and L_C is the cross entropy between the class prediction label x_c of the false sample X_fake and the label vector y.
9. The method of claim 1, wherein in (10) the support vector machine is trained by inputting the extended training set into the trained discriminator to obtain the ninth output feature map, i.e. the discrimination feature; inputting the discrimination feature into a support vector machine with a radial basis function as its kernel for nonlinear classification; and then searching for the optimal parameters of the support vector machine by a grid search method to obtain the trained support vector machine.
CN201910201106.0A 2019-03-18 2019-03-18 Hyperspectral image classification method based on superpixel sample expansion and generation countermeasure network Active CN109948693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910201106.0A CN109948693B (en) 2019-03-18 2019-03-18 Hyperspectral image classification method based on superpixel sample expansion and generation countermeasure network

Publications (2)

Publication Number Publication Date
CN109948693A true CN109948693A (en) 2019-06-28
CN109948693B CN109948693B (en) 2021-09-28


Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110456355A (en) * 2019-08-19 2019-11-15 河南大学 A kind of Radar Echo Extrapolation method based on long short-term memory and generation confrontation network
CN110688968A (en) * 2019-09-30 2020-01-14 西安电子科技大学 Hyperspectral target detection method based on multi-example deep convolutional memory network
CN110781976A (en) * 2019-10-31 2020-02-11 重庆紫光华山智安科技有限公司 Extension method of training image, training method and related device
CN110826059A (en) * 2019-09-19 2020-02-21 浙江工业大学 Method and device for defending black box attack facing malicious software image format detection model
CN110909814A (en) * 2019-11-29 2020-03-24 华南理工大学 Classification method based on feature separation
CN111079602A (en) * 2019-12-06 2020-04-28 长沙千视通智能科技有限公司 Vehicle fine granularity identification method and device based on multi-scale regional feature constraint
CN111104982A (en) * 2019-12-20 2020-05-05 电子科技大学 Label-independent cross-task confrontation sample generation method
CN111275108A (en) * 2020-01-20 2020-06-12 国网山东省电力公司枣庄供电公司 Method for performing sample expansion on partial discharge data based on generation countermeasure network
CN111310791A (en) * 2020-01-17 2020-06-19 电子科技大学 Dynamic progressive automatic target identification method based on small sample number set
CN111461264A (en) * 2020-05-25 2020-07-28 南京大学 Scalable modular image recognition method based on generation countermeasure network
CN111461168A (en) * 2020-03-02 2020-07-28 平安科技(深圳)有限公司 Training sample expansion method and device, electronic equipment and storage medium
CN111638216A (en) * 2020-06-30 2020-09-08 黑龙江大学 Beet-related disease analysis method for unmanned aerial vehicle system for monitoring plant diseases and insect pests
CN111695467A (en) * 2020-06-01 2020-09-22 西安电子科技大学 Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion
CN111723731A (en) * 2020-06-18 2020-09-29 西安电子科技大学 Hyperspectral image classification method based on spatial spectrum convolution kernel, storage medium and device
CN111832428A (en) * 2020-06-23 2020-10-27 北京科技大学 Data enhancement method applied to strip breakage fault diagnosis of cold rolling mill
CN111860124A (en) * 2020-06-04 2020-10-30 西安电子科技大学 Remote sensing image classification method based on space spectrum capsule generation countermeasure network
CN111861924A (en) * 2020-07-23 2020-10-30 成都信息工程大学 Cardiac magnetic resonance image data enhancement method based on evolved GAN
CN112215296A (en) * 2020-10-21 2021-01-12 红相股份有限公司 Infrared image identification method based on transfer learning and storage medium
CN112307926A (en) * 2020-10-26 2021-02-02 西北工业大学 Acoustic passive ship target classification method based on generation countermeasure network
CN112597702A (en) * 2020-12-21 2021-04-02 电子科技大学 Pneumatic modeling generation type confrontation network model training method based on radial basis function
CN112699899A (en) * 2020-12-31 2021-04-23 杭州电子科技大学 Hyperspectral image feature extraction method based on generation countermeasure network
CN112733769A (en) * 2021-01-18 2021-04-30 西安电子科技大学 Hyperspectral image classification method based on multiband entropy rate superpixel segmentation
CN112784930A (en) * 2021-03-17 2021-05-11 西安电子科技大学 CACGAN-based HRRP identification database sample expansion method
CN112926397A (en) * 2021-01-28 2021-06-08 中国石油大学(华东) SAR image sea ice type classification method based on two-round voting strategy integrated learning
CN113096080A (en) * 2021-03-30 2021-07-09 四川大学华西第二医院 Image analysis method and system
CN113095218A (en) * 2021-04-09 2021-07-09 西北工业大学 Hyperspectral image target detection algorithm
CN113222052A (en) * 2021-05-25 2021-08-06 云南电网有限责任公司电力科学研究院 Method for generating countermeasure neural network for power equipment hyperspectral image classification
CN113435243A (en) * 2021-05-14 2021-09-24 西安电子科技大学 Hyperspectral true downsampling fuzzy kernel estimation method
CN113516656A (en) * 2021-09-14 2021-10-19 浙江双元科技股份有限公司 Defect image data processing simulation method based on ACGAN and Cameralink cameras
CN113572710A (en) * 2021-07-21 2021-10-29 应急管理部四川消防研究所 WVD time-frequency analysis cross item suppression method and system based on generation countermeasure network and storage medium
CN113989100A (en) * 2021-09-18 2022-01-28 西安电子科技大学 Infrared texture sample expansion method based on pattern generation countermeasure network
CN114049567A (en) * 2021-11-22 2022-02-15 齐鲁工业大学 Self-adaptive soft label generation method and application in hyperspectral image classification
CN114419360A (en) * 2021-11-23 2022-04-29 东北电力大学 Photovoltaic panel infrared thermal image classification and hot spot positioning method
CN114460013A (en) * 2022-01-28 2022-05-10 自然资源部第一海洋研究所 Coastal wetland vegetation ground biomass GAN model self-learning remote sensing inversion method
CN114858782A (en) * 2022-07-05 2022-08-05 中国民航大学 Milk powder doping non-directional detection method based on Raman hyperspectral countermeasure discrimination model
CN114863225A (en) * 2022-07-06 2022-08-05 腾讯科技(深圳)有限公司 Image processing model training method, image processing model generation device, image processing equipment and image processing medium
CN114863293A (en) * 2022-05-07 2022-08-05 中国石油大学(华东) Hyperspectral oil spill detection method based on double-branch GAN network
CN115205692A (en) * 2022-09-16 2022-10-18 成都戎星科技有限公司 Typical feature intelligent identification and extraction method based on generation of countermeasure network
CN117077141A (en) * 2023-10-13 2023-11-17 国网山东省电力公司鱼台县供电公司 Smart power grid malicious software detection method and system
CN117612020A (en) * 2024-01-24 2024-02-27 西安宇速防务集团有限公司 SGAN-based detection method for resisting neural network remote sensing image element change
CN117648643A (en) * 2024-01-30 2024-03-05 山东神力索具有限公司 Rigging predictive diagnosis method and device based on artificial intelligence

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7400770B2 (en) * 2002-11-06 2008-07-15 Hrl Laboratories Method and apparatus for automatically extracting geospatial features from multispectral imagery suitable for fast and robust extraction of landmarks
CN101826160A (en) * 2010-03-31 2010-09-08 北京航空航天大学 Hyperspectral image classification method based on immune evolutionary strategy
CN103034863A (en) * 2012-12-24 2013-04-10 重庆市勘测院 Remote-sensing image road acquisition method combined with kernel Fisher and multi-scale extraction
US8515201B1 (en) * 2008-09-18 2013-08-20 Stc.Unm System and methods of amplitude-modulation frequency-modulation (AM-FM) demodulation for image and video processing
CN106503727A (en) * 2016-09-30 2017-03-15 西安电子科技大学 A kind of method and device of classification hyperspectral imagery
CN107563355A (en) * 2017-09-28 2018-01-09 哈尔滨工程大学 Hyperspectral abnormity detection method based on generation confrontation network
CN108764173A (en) * 2018-05-31 2018-11-06 西安电子科技大学 The hyperspectral image classification method of confrontation network is generated based on multiclass
CN109145992A (en) * 2018-08-27 2019-01-04 西安电子科技大学 Cooperation generates confrontation network and sky composes united hyperspectral image classification method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7400770B2 (en) * 2002-11-06 2008-07-15 Hrl Laboratories Method and apparatus for automatically extracting geospatial features from multispectral imagery suitable for fast and robust extraction of landmarks
US8515201B1 (en) * 2008-09-18 2013-08-20 Stc.Unm System and methods of amplitude-modulation frequency-modulation (AM-FM) demodulation for image and video processing
CN101826160A (en) * 2010-03-31 2010-09-08 北京航空航天大学 Hyperspectral image classification method based on immune evolutionary strategy
CN103034863A (en) * 2012-12-24 2013-04-10 重庆市勘测院 Remote-sensing image road acquisition method combined with kernel Fisher and multi-scale extraction
CN106503727A (en) * 2016-09-30 2017-03-15 西安电子科技大学 A kind of method and device of classification hyperspectral imagery
CN107563355A (en) * 2017-09-28 2018-01-09 哈尔滨工程大学 Hyperspectral abnormity detection method based on generation confrontation network
CN108764173A (en) * 2018-05-31 2018-11-06 西安电子科技大学 The hyperspectral image classification method of confrontation network is generated based on multiclass
CN109145992A (en) * 2018-08-27 2019-01-04 西安电子科技大学 Cooperation generates confrontation network and sky composes united hyperspectral image classification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NANJUN HE 等: "Feature Extraction With Multiscale Covariance Maps for Hyperspectral Image Classification", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》 *
宋相法 等: "基于稀疏表示及光谱信息的高光谱遥感图像分类", 《电子与信息学报》 *

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110456355A (en) * 2019-08-19 2019-11-15 河南大学 A kind of Radar Echo Extrapolation method based on long short-term memory and generation confrontation network
CN110456355B (en) * 2019-08-19 2021-12-24 河南大学 Radar echo extrapolation method based on long-time and short-time memory and generation countermeasure network
CN110826059A (en) * 2019-09-19 2020-02-21 浙江工业大学 Method and device for defending black box attack facing malicious software image format detection model
CN110826059B (en) * 2019-09-19 2021-10-15 浙江工业大学 Method and device for defending black box attack facing malicious software image format detection model
CN110688968A (en) * 2019-09-30 2020-01-14 西安电子科技大学 Hyperspectral target detection method based on multi-example deep convolutional memory network
CN110688968B (en) * 2019-09-30 2022-12-02 西安电子科技大学 Hyperspectral target detection method based on multi-instance deep convolutional memory network
CN110781976A (en) * 2019-10-31 2020-02-11 重庆紫光华山智安科技有限公司 Extension method of training image, training method and related device
CN110781976B (en) * 2019-10-31 2021-01-05 重庆紫光华山智安科技有限公司 Extension method of training image, training method and related device
CN110909814A (en) * 2019-11-29 2020-03-24 华南理工大学 Classification method based on feature separation
CN110909814B (en) * 2019-11-29 2023-05-26 华南理工大学 Classification method based on feature separation
CN111079602A (en) * 2019-12-06 2020-04-28 长沙千视通智能科技有限公司 Vehicle fine granularity identification method and device based on multi-scale regional feature constraint
CN111079602B (en) * 2019-12-06 2024-02-09 长沙千视通智能科技有限公司 Vehicle fine granularity identification method and device based on multi-scale regional feature constraint
CN111104982A (en) * 2019-12-20 2020-05-05 电子科技大学 Label-independent cross-task adversarial example generation method
CN111104982B (en) * 2019-12-20 2021-09-24 电子科技大学 Label-independent cross-task adversarial example generation method
CN111310791A (en) * 2020-01-17 2020-06-19 电子科技大学 Dynamic progressive automatic target recognition method based on small-sample datasets
CN111275108A (en) * 2020-01-20 2020-06-12 国网山东省电力公司枣庄供电公司 Method for performing sample expansion on partial discharge data based on generation countermeasure network
CN111461168A (en) * 2020-03-02 2020-07-28 平安科技(深圳)有限公司 Training sample expansion method and device, electronic equipment and storage medium
CN111461264A (en) * 2020-05-25 2020-07-28 南京大学 Scalable modular image recognition method based on generation countermeasure network
CN111695467A (en) * 2020-06-01 2020-09-22 西安电子科技大学 Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion
CN111860124A (en) * 2020-06-04 2020-10-30 西安电子科技大学 Remote sensing image classification method based on space spectrum capsule generation countermeasure network
CN111860124B (en) * 2020-06-04 2024-04-02 西安电子科技大学 Remote sensing image classification method based on space spectrum capsule generation countermeasure network
CN111723731B (en) * 2020-06-18 2023-09-29 西安电子科技大学 Hyperspectral image classification method, storage medium and equipment based on spatial spectrum convolution kernel
CN111723731A (en) * 2020-06-18 2020-09-29 西安电子科技大学 Hyperspectral image classification method based on spatial spectrum convolution kernel, storage medium and device
CN111832428A (en) * 2020-06-23 2020-10-27 北京科技大学 Data enhancement method applied to strip breakage fault diagnosis of cold rolling mill
CN111832428B (en) * 2020-06-23 2024-02-23 北京科技大学 Data enhancement method applied to cold rolling mill broken belt fault diagnosis
CN111638216A (en) * 2020-06-30 2020-09-08 黑龙江大学 Beet disease analysis method for an unmanned aerial vehicle pest and disease monitoring system
CN111861924A (en) * 2020-07-23 2020-10-30 成都信息工程大学 Cardiac magnetic resonance image data enhancement method based on evolved GAN
CN111861924B (en) * 2020-07-23 2023-09-22 成都信息工程大学 Cardiac magnetic resonance image data enhancement method based on evolutionary GAN
CN112215296B (en) * 2020-10-21 2023-05-05 红相股份有限公司 Infrared image recognition method based on transfer learning and storage medium
CN112215296A (en) * 2020-10-21 2021-01-12 红相股份有限公司 Infrared image identification method based on transfer learning and storage medium
CN112307926B (en) * 2020-10-26 2022-12-06 西北工业大学 Acoustic passive ship target classification method based on generation countermeasure network
CN112307926A (en) * 2020-10-26 2021-02-02 西北工业大学 Acoustic passive ship target classification method based on generation countermeasure network
CN112597702B (en) * 2020-12-21 2022-07-19 电子科技大学 Generative adversarial network model training method for aerodynamic modeling based on radial basis functions
CN112597702A (en) * 2020-12-21 2021-04-02 电子科技大学 Generative adversarial network model training method for aerodynamic modeling based on radial basis functions
CN112699899A (en) * 2020-12-31 2021-04-23 杭州电子科技大学 Hyperspectral image feature extraction method based on generation countermeasure network
CN112733769A (en) * 2021-01-18 2021-04-30 西安电子科技大学 Hyperspectral image classification method based on multiband entropy rate superpixel segmentation
CN112926397B (en) * 2021-01-28 2022-03-01 中国石油大学(华东) SAR image sea ice type classification method based on two-round voting strategy ensemble learning
CN112926397A (en) * 2021-01-28 2021-06-08 中国石油大学(华东) SAR image sea ice type classification method based on two-round voting strategy ensemble learning
CN112784930B (en) * 2021-03-17 2022-03-04 西安电子科技大学 CACGAN-based HRRP identification database sample expansion method
CN112784930A (en) * 2021-03-17 2021-05-11 西安电子科技大学 CACGAN-based HRRP identification database sample expansion method
CN113096080A (en) * 2021-03-30 2021-07-09 四川大学华西第二医院 Image analysis method and system
CN113096080B (en) * 2021-03-30 2024-01-16 四川大学华西第二医院 Image analysis method and system
CN113095218A (en) * 2021-04-09 2021-07-09 西北工业大学 Hyperspectral image target detection algorithm
CN113095218B (en) * 2021-04-09 2024-01-26 西北工业大学 Hyperspectral image target detection algorithm
CN113435243A (en) * 2021-05-14 2021-09-24 西安电子科技大学 Hyperspectral true downsampling blur kernel estimation method
CN113435243B (en) * 2021-05-14 2024-06-14 西安电子科技大学 Hyperspectral true downsampling blur kernel estimation method
CN113222052A (en) * 2021-05-25 2021-08-06 云南电网有限责任公司电力科学研究院 Generative adversarial neural network method for hyperspectral image classification of power equipment
CN113572710A (en) * 2021-07-21 2021-10-29 应急管理部四川消防研究所 WVD time-frequency analysis cross-term suppression method and system based on generative adversarial network, and storage medium
CN113516656A (en) * 2021-09-14 2021-10-19 浙江双元科技股份有限公司 Defect image data processing simulation method based on ACGAN and Cameralink cameras
CN113989100A (en) * 2021-09-18 2022-01-28 西安电子科技大学 Infrared texture sample expansion method based on style generation countermeasure network
CN113989100B (en) * 2021-09-18 2024-08-16 西安电子科技大学 Infrared texture sample expansion method based on style generation countermeasure network
CN114049567A (en) * 2021-11-22 2022-02-15 齐鲁工业大学 Self-adaptive soft label generation method and application in hyperspectral image classification
CN114049567B (en) * 2021-11-22 2024-02-23 齐鲁工业大学 Adaptive soft label generation method and application in hyperspectral image classification
US11631237B1 (en) 2021-11-23 2023-04-18 Northeast Electric Power University Infrared thermal image classification and hot spot positioning method of photovoltaic panel
CN114419360A (en) * 2021-11-23 2022-04-29 东北电力大学 Photovoltaic panel infrared thermal image classification and hot spot positioning method
CN114460013A (en) * 2022-01-28 2022-05-10 自然资源部第一海洋研究所 Self-learning remote sensing inversion method for coastal wetland vegetation aboveground biomass based on a GAN model
CN114460013B (en) * 2022-01-28 2023-10-17 自然资源部第一海洋研究所 Self-learning remote sensing inversion method for coastal wetland vegetation aboveground biomass based on a GAN model
CN114863293A (en) * 2022-05-07 2022-08-05 中国石油大学(华东) Hyperspectral oil spill detection method based on double-branch GAN network
CN114858782A (en) * 2022-07-05 2022-08-05 中国民航大学 Non-targeted detection method for milk powder adulteration based on a Raman hyperspectral adversarial discrimination model
CN114863225A (en) * 2022-07-06 2022-08-05 腾讯科技(深圳)有限公司 Image processing model training method, image processing model generation device, image processing equipment and image processing medium
CN115205692B (en) * 2022-09-16 2022-11-29 成都戎星科技有限公司 Typical feature intelligent identification and extraction method based on generation of countermeasure network
CN115205692A (en) * 2022-09-16 2022-10-18 成都戎星科技有限公司 Typical feature intelligent identification and extraction method based on generation of countermeasure network
CN117077141A (en) * 2023-10-13 2023-11-17 国网山东省电力公司鱼台县供电公司 Smart power grid malicious software detection method and system
CN117612020A (en) * 2024-01-24 2024-02-27 西安宇速防务集团有限公司 SGAN-based detection method for resisting neural network remote sensing image element change
CN117612020B (en) * 2024-01-24 2024-07-05 西安宇速防务集团有限公司 SGAN-based detection method for resisting change of remote sensing image element of neural network
CN117648643A (en) * 2024-01-30 2024-03-05 山东神力索具有限公司 Rigging predictive diagnosis method and device based on artificial intelligence
CN117648643B (en) * 2024-01-30 2024-04-16 山东神力索具有限公司 Rigging predictive diagnosis method and device based on artificial intelligence

Also Published As

Publication number Publication date
CN109948693B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN109948693B (en) Hyperspectral image classification method based on superpixel sample expansion and generation countermeasure network
CN109145992B (en) Hyperspectral image classification method for cooperatively generating countermeasure network and spatial spectrum combination
CN113705526B (en) Hyperspectral remote sensing image classification method
CN110084159B (en) Hyperspectral image classification method based on combined multistage spatial spectrum information CNN
CN108764173B (en) Hyperspectral image classification method based on multi-class generation countermeasure network
CN106203523B (en) Hyperspectral image classification method based on gradient boosting decision tree semi-supervised algorithm fusion
CN110298396A (en) Hyperspectral image classification method based on deep learning multi-feature fusion
CN114821164B (en) Hyperspectral image classification method based on twin network
CN109598306B (en) Hyperspectral image classification method based on SRCM and convolutional neural network
CN107145836B (en) Hyperspectral image classification method based on stacked boundary-discriminative autoencoder
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN108734199B (en) Hyperspectral image robust classification method based on segmented depth features and low-rank representation
CN108229551B (en) Hyperspectral remote sensing image classification method based on compact dictionary sparse representation
CN103440505A (en) Spatial neighborhood information weighted hyper-spectral remote sensing image classification method
CN108460391A (en) Unsupervised hyperspectral image feature extraction method based on generative adversarial network
CN111914728A (en) Hyperspectral remote sensing image semi-supervised classification method and device and storage medium
CN114972885B (en) Multi-mode remote sensing image classification method based on model compression
Thirumaladevi et al. Remote sensing image scene classification by transfer learning to augment the accuracy
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
Jain et al. M-ary Random Forest: A new multidimensional partitioning approach to Random Forest
CN114299382A (en) Hyperspectral remote sensing image classification method and system
CN117115675A (en) Cross-time-phase light-weight spatial spectrum feature fusion hyperspectral change detection method, system, equipment and medium
Li et al. Using improved ICA method for hyperspectral data classification
CN109460788B (en) Hyperspectral image classification method based on low-rank-sparse information combination network
CN114998725B (en) Hyperspectral image classification method based on self-adaptive spatial spectrum attention kernel generation network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant