CN109948693B - Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network - Google Patents

Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network

Info

Publication number
CN109948693B
CN109948693B
Authority
CN
China
Prior art keywords
layer
discriminator
sample
generator
inputting
Prior art date
2019-03-18
Legal status
Active
Application number
CN201910201106.0A
Other languages
Chinese (zh)
Other versions
CN109948693A (en)
Inventor
Zhang Xiangrong
Jiao Licheng
Xing Zhenjie
Tang Xu
Liu Fang
Hou Biao
Ma Wenping
Ma Jingjing
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
2019-03-18
Filing date
2019-03-18
Publication date
2021-09-28
Application filed by Xidian University
Priority to CN201910201106.0A
Publication of CN109948693A
Application granted
Publication of CN109948693B

Abstract

The invention provides a hyperspectral image classification method based on superpixel sample expansion and a generative adversarial network (GAN), and aims to solve the problem of low classification accuracy caused by network overfitting when few labeled training samples are available. The implementation is as follows: construct an initial training set and a test set, and expand them to obtain an extended training set and a candidate test set; construct a generative adversarial network consisting of a generator and a discriminator; generate false samples with the generator, and use the discriminator to obtain true/false prediction labels and class prediction labels for the false samples and the extended training set; construct loss functions for the generator and the discriminator, and train the two alternately; train a support vector machine; pass the candidate test set through the trained discriminator and the support vector machine to obtain a candidate label set; and apply a maximum voting algorithm to the candidate label set to determine the class labels of the test set. The method effectively extracts the spatial features of hyperspectral images, alleviates the overfitting problem, improves classification accuracy, and can be used for classifying the ground objects of hyperspectral images.

Description

Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network
Technical Field
The invention belongs to the technical field of image processing, relates to an image classification method, and more particularly to a hyperspectral image classification method based on superpixel sample expansion and a generative adversarial network. The method can be used for classifying the ground objects of hyperspectral images.
Background
Compared with a common sensor, a hyperspectral imaging spectrometer has more spectral channels within a given wavelength range, each covering a narrower band. Because adjacent bands overlap spectrally, hyperspectral images have higher spectral resolution. Combined with the ground-object spatial information acquired by the imaging spectrometer, a hyperspectral image simultaneously contains rich one-dimensional spectral information and two-dimensional spatial information. Hyperspectral image classification is one of the hot topics in the field of hyperspectral data application. To avoid overfitting, training a deep model requires a large number of labeled samples; however, collecting labels consumes considerable manpower and material resources, which challenges the application of deep learning to hyperspectral image classification.
Traditional classifiers, such as support vector machines, decision trees, and logistic regression, have been widely used in hyperspectral image classification. These methods treat each pixel as an independent unit and consider only its spectral information. However, studies have demonstrated that using spatial information can greatly improve classification performance. Superpixel segmentation is an image preprocessing technique for extracting spatial information that provides spatial support for methods computing region features. Compared with traditional pixel-level processing, superpixel-based methods are better suited to extracting local spatial features, eliminating redundancy in the data, and reducing the computational complexity of subsequent processing.
Recently, a new framework called the generative adversarial network (GAN) has been used for hyperspectral image classification. Compared with other deep-learning-based hyperspectral feature extraction methods, GANs can effectively alleviate the overfitting problem through their competitive strategy when training samples are limited.
Yushi Chen et al., in the paper "Generative Adversarial Networks for Hyperspectral Image Classification" (IEEE Transactions on Geoscience and Remote Sensing), propose a hyperspectral image classification method based on generative adversarial networks. The method proposes two classification architectures, 1D-GAN and 3D-GAN. The 3D-GAN framework uses the spectral and spatial information of the hyperspectral image simultaneously, as follows: first, a generative adversarial network for hyperspectral image classification is established; second, three principal components of the hyperspectral image are extracted with a principal component analysis algorithm; finally, the hyperspectral image is divided into a training set and a test set, each sample point is taken as a central pixel, and the 64 × 64 neighborhood of the central pixel is input as a whole to train the generative adversarial network, which is then used for hyperspectral image classification. Although this method fully exploits the advantages of the generative adversarial network and can extract discriminative features, it does not make good use of spatial information, so the accuracy of its classification results is low.
Bin Pan et al., in the paper "R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method" (IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing), propose a PCANet-based hyperspectral image classification method, R-VCANet, which makes better use of the spectral-spatial information of the hyperspectral image. The method is as follows: first, the hyperspectral image is smoothed with an RGF-based method; this process also combines spectral-spatial information and can further mine the structural characteristics of the background. Next, features of the hyperspectral image are extracted with VCANet, a simplified deep learning model that evolved from PCANet. The R-VCANet-based approach shows better performance than some state-of-the-art approaches, especially when the available training samples are not abundant. However, the method still has a drawback: the number of RGF iterations is not easy to determine, so the spectral-spatial information cannot be combined sufficiently and the classification accuracy remains low.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a hyperspectral image classification method based on superpixel sample expansion and a generative adversarial network, with the goals of alleviating network overfitting when training samples are few and improving the classification accuracy of hyperspectral images through the full combination of spectral-spatial information.
In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:
(1) constructing an initial training set and a test set:
inputting a hyperspectral image $H=\{h_p\}_{p=1}^{T}$, where $h_p$ is the spectral vector formed by the reflectance values of the pixel p in each band, and T is the total number of pixels in the hyperspectral image; the hyperspectral image H comprises c classes of pixels, of which M pixels are labeled and N pixels are unlabeled, and each pixel is one sample;
taking the M labeled pixels as initial training samples to form the initial training set $X=\{x_i\}_{i=1}^{M}$, whose corresponding class label set is $L=\{l_i\}_{i=1}^{M}$, and taking the N unlabeled pixels as test samples to form the test set $Y=\{y_j\}_{j=1}^{N}$, where $x_i$ denotes the i-th initial training sample of the initial training set, $l_i$ denotes the class label of the i-th initial training sample, and $y_j$ denotes the j-th test sample of the test set;
(2) sample expansion is carried out by using a multi-scale superpixel segmentation method based on entropy rate:
(2a) extracting the first principal component of the hyperspectral image H with a principal component analysis algorithm to obtain a principal-component gray map; setting k different segmentation scales, with the number of superpixels at each scale being $S_q$, $q=1,2,\dots,k$; and performing k entropy-rate-based superpixel segmentations of different scales on the principal-component gray map to obtain k segmentation maps $G_q=\{g_u^q\}_{u=1}^{S_q}$, each containing $S_q$ superpixel blocks, where $g_u^q$ denotes the u-th superpixel block in the segmentation map $G_q$;
(2b) for the segmentation map $G_q$, forming the set $\Phi_i^q$ of all initial training samples belonging to the same superpixel block as the initial training sample $x_i$, and average-pooling the set $\Phi_i^q$ to obtain the average pooling feature $\bar{x}_i^q$ of $x_i$ on the segmentation map $G_q$; in the same way, forming the set $\Psi_j^q$ of all test samples belonging to the same superpixel block as the test sample $y_j$, and average-pooling the set $\Psi_j^q$ to obtain the average pooling feature $\bar{y}_j^q$ of $y_j$ on the segmentation map $G_q$;
(2c) taking the average pooling feature $\bar{x}_i^q$ of each initial training sample $x_i$ on each segmentation map $G_q$ and using the $\bar{x}_i^q$ as extended training samples to construct the extended training set $X'=\{\bar{x}_i^q \mid i=1,\dots,M;\ q=1,\dots,k\}$; taking the class label $\bar{l}_i^q$ of the extended training sample $\bar{x}_i^q$ to be the same as the class label $l_i$ of $x_i$, obtaining the corresponding extended class label set $L'=\{\bar{l}_i^q \mid i=1,\dots,M;\ q=1,\dots,k\}$; and taking the average pooling feature $\bar{y}_j^q$ of each test sample $y_j$ on each segmentation map $G_q$ and using the $\bar{y}_j^q$ as candidate test samples to compose the candidate test set $Y'=\{\bar{y}_j^q \mid j=1,\dots,N;\ q=1,\dots,k\}$;
(3) Constructing a generative adversarial network consisting of a generator and a discriminator:
(3a) building a generator comprising two fully connected layers and two strided convolutional layers, and setting the parameters of each layer;
(3b) building a discriminator comprising three convolutional layers, three max-pooling layers, and four fully connected layers, and setting the parameters of each layer;
(3c) initializing the generator and the discriminator: the weights of the convolutional layers, strided convolutional layers, and fully connected layers are initialized so that each element value follows the normal distribution $N(0, 0.02^2)$, and the biases are initialized to tensors in which every element value is 0;
(4) randomly sampling from the uniform distribution U(-1,1) to generate a 62-dimensional noise vector z, and taking class labels from the extended class label set L' to form a label vector y;
(5) inputting the noise vector z and the label vector y into the generator, which outputs through nonlinear mapping a false sample $X_{fake}$ of the hyperspectral image;
(6) inputting the false sample $X_{fake}$ into the discriminator, which outputs through nonlinear mapping the true/false prediction label $x_s$ and class prediction label $x_c$ of $X_{fake}$;
(7) inputting the extended training set X' into the discriminator, which outputs through nonlinear mapping the true/false prediction label $x'_s$ and class prediction label $x'_c$ of X';
(8) Constructing the loss functions of the generative adversarial network:
(8a) adding the generator loss function $L'_G$ of the auxiliary classifier generative adversarial network and the feature matching loss function $L_{FM}$ to obtain the loss function $L_G$ of the generator;
(8b) for the discriminator, still using the discriminator loss function $L_D$ of the auxiliary classifier generative adversarial network;
(9) Alternately training the generator and the discriminator:
(9a) initializing the iteration count t = 0, with the maximum number of iterations being 1000;
(9b) updating the generator parameters with the loss function of the generator by the gradient descent method;
(9c) updating the discriminator parameters with the loss function of the discriminator by the gradient descent method;
(9d) letting t = t + 1;
(9e) judging whether the number of iterations t equals 1000; if so, obtaining the trained discriminator and executing (10); if not, returning to (9b);
(10) inputting the extended training set X' into the trained discriminator to obtain discriminative features, and training a support vector machine with these features to obtain a trained support vector machine;
(11) obtaining class labels for all test samples in the test set:
inputting the candidate test set Y' into the trained discriminator and the trained support vector machine to obtain the corresponding candidate label set $C=\{c_j^q \mid j=1,\dots,N;\ q=1,\dots,k\}$; for each $j \in \{1,2,\dots,N\}$, using a maximum voting algorithm to select the most probable label $c_j$ from the subset $C_j=\{c_j^1,c_j^2,\dots,c_j^k\}$ of the candidate label set C, and taking $c_j$ as the class label of the test sample $y_j$.
Compared with the prior art, the invention has the following advantages:
First, the invention fuses the spectral-spatial information of the hyperspectral image using a multi-scale superpixel method. This overcomes the problem in the prior art that only the spectral features of pixels are extracted while their spatial neighborhood features are not, which kept image classification accuracy low; it enhances the feature extraction capability of the network and improves the accuracy of hyperspectral image classification.
Second, the invention adds a feature matching loss constraint to the generator so that it generates more realistic false samples, and uses a multi-scale superpixel segmentation method to generate extended training samples. Adding the false samples and the extended training samples to the sample set increases the number of training samples, which alleviates the low classification accuracy caused in the prior art by network overfitting on few training samples and improves accuracy when samples are scarce.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 shows the results of ground-object classification on the Pavia University dataset by the present invention and three prior-art classification techniques.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
Referring to fig. 1, the specific steps for implementing the present invention are as follows:
step 1, constructing an initial training set and a test set.
Commonly used hyperspectral image datasets include the Pavia University dataset acquired by the ROSIS spectrometer, and the Indian Pines and Salinas datasets acquired by the airborne visible/infrared imaging spectrometer (AVIRIS) of the NASA Jet Propulsion Laboratory.
Input a hyperspectral image $H=\{h_p\}_{p=1}^{T}$, where $h_p$ is the spectral vector formed by the reflectance values of the pixel p in each band, and T is the total number of pixels in the hyperspectral image; the hyperspectral image H comprises c classes of pixels, of which M pixels are labeled and N pixels are unlabeled, and each pixel is one sample.
Take the M labeled pixels as initial training samples to form the initial training set $X=\{x_i\}_{i=1}^{M}$, whose corresponding class label set is $L=\{l_i\}_{i=1}^{M}$, and take the N unlabeled pixels as test samples to form the test set $Y=\{y_j\}_{j=1}^{N}$, where $x_i$ denotes the i-th initial training sample of the initial training set, $l_i$ denotes the class label of the i-th initial training sample, and $y_j$ denotes the j-th test sample of the test set.
the M marked pixels mentioned in this embodiment are pixels that are selected from each type of pixels of the hyperspectral image in a medium number.
Step 2, performing sample expansion with the entropy-rate-based multi-scale superpixel segmentation method.
Existing superpixel segmentation methods fall mainly into two types: graph-based algorithms and gradient-descent-based algorithms. Graph-based methods include the N-cut-based algorithm, the SL-operator-based method, the Graph-based algorithm, the entropy-rate-based algorithm, and the like; gradient-descent-based algorithms include the watershed-based algorithm, the mean-shift algorithm, the SLIC algorithm, and the like. The invention performs superpixel segmentation with an entropy-rate-based superpixel segmentation algorithm, applying superpixel segmentations of different scales to the principal-component gray map of the hyperspectral image for sample expansion. The specific method is as follows:
2a) Extract the first principal component of the hyperspectral image H with a principal component analysis algorithm to obtain a principal-component gray map; set k different segmentation scales, with the number of superpixels at each scale being $S_q$, $q=1,2,\dots,k$; and perform k entropy-rate-based superpixel segmentations of different scales on the principal-component gray map to obtain k segmentation maps $G_q=\{g_u^q\}_{u=1}^{S_q}$, each containing $S_q$ superpixel blocks, where $g_u^q$ denotes the u-th superpixel block in the segmentation map $G_q$;
In the present embodiment, 3 segmentation scales are set, and the numbers of superpixels at the scales are 100, 200, and 300; therefore the number k of segmentation scales described here and below equals 3, and $S_1$, $S_2$, $S_3$ equal 100, 200, and 300 unless otherwise stated. A sketch of this step appears below.
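A minimal sketch of step 2a using scikit-learn for the principal component analysis; entropy-rate superpixel segmentation is not available in common Python libraries, so it is represented by a hypothetical function ers_segment (for example, a wrapper around a published ERS implementation), which is an assumption of this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA

def principal_component_gray_map(cube):
    """Project each pixel's spectrum onto the first principal component
    and rescale it into a gray map (step 2a)."""
    rows, cols, ch = cube.shape
    pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, ch))
    pc1 = (pc1 - pc1.min()) / (pc1.max() - pc1.min())
    return pc1.reshape(rows, cols)

# k = 3 scales with S_q = 100, 200, 300 superpixels, as in this embodiment.
# ers_segment(gray, n_superpixels) -> integer label map is assumed to exist.
def multiscale_segmentations(gray, scales=(100, 200, 300)):
    return [ers_segment(gray, s) for s in scales]   # one label map G_q per scale
```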
2b) For the segmentation map $G_q$, form the set $\Phi_i^q$ of all initial training samples belonging to the same superpixel block as the initial training sample $x_i$, and average-pool the set $\Phi_i^q$ to obtain the average pooling feature $\bar{x}_i^q$ of $x_i$ on the segmentation map $G_q$ by the following formula:

$$\bar{x}_i^q=\frac{1}{\left|g_u^q\right|}\sum_{x_r\in\Phi_i^q}x_r$$

where $\bar{x}_i^q$ denotes the average pooling feature of $x_i$ on the segmentation map $G_q$, $g_u^q$ denotes the u-th superpixel block of the segmentation map $G_q$ to which $x_i$ belongs, $\left|g_u^q\right|$ is the number of pixels contained in the superpixel block $g_u^q$, and $x_r$ denotes the r-th initial training sample in the set $\Phi_i^q$;
In the same way, form the set $\Psi_j^q$ of all test samples belonging to the same superpixel block as the test sample $y_j$, and average-pool the set $\Psi_j^q$ to obtain the average pooling feature $\bar{y}_j^q$ of $y_j$ on the segmentation map $G_q$. A sketch of this pooling follows.
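The superpixel average pooling can be sketched as follows; here segmentation is an integer label map from one scale, and dividing by the block's pixel count follows the formula above as written, which is an interpretation of the patent's notation.

```python
import numpy as np

def superpixel_pooled_features(pixels, sample_idx, segmentation):
    """For each sample (a flat pixel index), average the spectra of all
    samples sharing its superpixel block in this segmentation map (step 2b).
    pixels: T x ch array; segmentation: label map of block ids."""
    seg = segmentation.reshape(-1)
    pooled = np.empty((len(sample_idx), pixels.shape[1]))
    for n, p in enumerate(sample_idx):
        block = seg[p]
        members = np.intersect1d(np.flatnonzero(seg == block), sample_idx)
        # divide by the number of pixels in the block, per the formula above
        pooled[n] = pixels[members].sum(axis=0) / np.count_nonzero(seg == block)
    return pooled
```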
2c) Take the average pooling feature $\bar{x}_i^q$ of each initial training sample $x_i$ on each segmentation map $G_q$ and use the $\bar{x}_i^q$ as extended training samples to construct the extended training set $X'=\{\bar{x}_i^q \mid i=1,\dots,M;\ q=1,\dots,k\}$; take the class label $\bar{l}_i^q$ of the extended training sample $\bar{x}_i^q$ to be the same as the class label $l_i$ of $x_i$, obtaining the corresponding extended class label set $L'=\{\bar{l}_i^q \mid i=1,\dots,M;\ q=1,\dots,k\}$; and take the average pooling feature $\bar{y}_j^q$ of each test sample $y_j$ on each segmentation map $G_q$ and use the $\bar{y}_j^q$ as candidate test samples to compose the candidate test set $Y'=\{\bar{y}_j^q \mid j=1,\dots,N;\ q=1,\dots,k\}$.
Step 3, constructing a generative adversarial network consisting of a generator and a discriminator.
Common generative adversarial networks include the conditional generative adversarial network (CGAN), the deep convolutional generative adversarial network (DCGAN), the auxiliary classifier generative adversarial network (ACGAN), the least squares generative adversarial network (LSGAN), and the like. The invention adopts the auxiliary classifier generative adversarial network (ACGAN), which is composed of a generator and a discriminator and is constructed by the following steps:
3a) Construct the generator and set the parameters of each layer.
The generator comprises two fully connected layers and two strided convolutional layers, arranged from left to right as the first fully connected layer, the second fully connected layer, the first strided convolutional layer, and the second strided convolutional layer. The number of nodes of the first fully connected layer is 1024; the number of nodes of the second fully connected layer is set as a multiple of $\lceil ch/4 \rceil$ so that its output can be reshaped as the input of the strided convolutional layers, where $\lceil\cdot\rceil$ denotes the rounding-up operation and ch denotes the dimension of the spectral vector $h_p$. The convolution kernel size of the first strided convolutional layer is 1 × 3, its stride is 1 × 2, and its number of channels is 64; the convolution kernel size of the second strided convolutional layer is 1 × 3, its stride is 1 × 2, and its number of channels is 1. A sketch of this generator follows.
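A minimal Keras sketch of a generator with this layout; the choice of 128 channels for the second fully connected layer (giving 128 · ⌈ch/4⌉ nodes, reshaped to (⌈ch/4⌉, 128)) is an assumption, not stated in the patent, and the output length is 4⌈ch/4⌉, so it should be cropped to ch when ch is not a multiple of 4.

```python
import math
import tensorflow as tf

def build_generator(ch, c, z_dim=62):
    """Two fully connected layers followed by two strided (transposed)
    convolutions, mapping noise z and label y to a 1 x d false sample."""
    w = math.ceil(ch / 4)
    z = tf.keras.Input(shape=(z_dim,))
    y = tf.keras.Input(shape=(c,))                         # one-hot label vector
    h = tf.keras.layers.Concatenate()([z, y])
    h = tf.keras.layers.Dense(1024, activation="relu")(h)  # FC1 + ReLU
    h = tf.keras.layers.Dense(128 * w)(h)                  # FC2 (assumed width)
    h = tf.keras.layers.BatchNormalization()(h)
    h = tf.keras.layers.ReLU()(h)
    h = tf.keras.layers.Reshape((w, 128))(h)
    h = tf.keras.layers.Conv1DTranspose(64, 3, strides=2, padding="same")(h)
    h = tf.keras.layers.BatchNormalization()(h)
    h = tf.keras.layers.ReLU()(h)
    x = tf.keras.layers.Conv1DTranspose(1, 3, strides=2, padding="same",
                                        activation="tanh")(h)  # false sample
    return tf.keras.Model([z, y], x)
```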
3b) the discriminator is constructed and the parameters of each layer are set.
The discriminator has 10 layers in total: three convolutional layers, three max-pooling layers, and four fully connected layers. In order from left to right (see the sketch after this list):
the first layer is a convolutional layer with kernel size 1 × 3, stride 1 × 1, and 64 channels;
the second layer is a max-pooling layer with pooling window 1 × 2 and stride 1 × 2;
the third layer is a convolutional layer with kernel size 1 × 3, stride 1 × 1, and 128 channels;
the fourth layer is a max-pooling layer with pooling window 1 × 2 and stride 1 × 2;
the fifth layer is a convolutional layer with kernel size 1 × 3, stride 1 × 1, and 512 channels;
the sixth layer is a max-pooling layer with pooling window 1 × 2 and stride 1 × 2;
the seventh layer is a fully connected layer with 1024 nodes;
the eighth layer is a fully connected layer with 64 nodes;
the ninth layer is a fully connected layer with 1 node;
the tenth layer is a fully connected layer with c nodes, where c is the number of classes of the hyperspectral image H.
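A minimal Keras sketch of this 10-layer discriminator. Per steps 6.8-6.10 below, the ninth layer (1 node, real/fake) branches off the seventh layer's output and the tenth layer (c nodes, class) branches off the eighth layer's output; that branching is read from the forward pass described in step 6 rather than from this list, and both heads return logits (sigmoid and softmax are applied afterwards).

```python
import tensorflow as tf

def build_discriminator(ch, c):
    """Three conv + three max-pool + four fully connected layers, with a
    real/fake head (1 node) and a class head (c nodes)."""
    x = tf.keras.Input(shape=(ch, 1))
    h = tf.keras.layers.Conv1D(64, 3, padding="same")(x)
    h = tf.keras.layers.LeakyReLU()(h)
    h = tf.keras.layers.MaxPool1D(2, strides=2)(h)
    h = tf.keras.layers.Conv1D(128, 3, padding="same")(h)
    h = tf.keras.layers.BatchNormalization()(h)
    h = tf.keras.layers.LeakyReLU()(h)
    h = tf.keras.layers.MaxPool1D(2, strides=2)(h)
    h = tf.keras.layers.Conv1D(512, 3, padding="same")(h)
    h = tf.keras.layers.BatchNormalization()(h)
    h = tf.keras.layers.LeakyReLU()(h)
    h = tf.keras.layers.MaxPool1D(2, strides=2)(h)
    h = tf.keras.layers.Flatten()(h)
    d7 = tf.keras.layers.Dense(1024)(h)                 # 7th layer
    d7 = tf.keras.layers.BatchNormalization()(d7)
    d7 = tf.keras.layers.LeakyReLU()(d7)                # feature map d7
    d8 = tf.keras.layers.Dense(64)(d7)                  # 8th layer: d8
    d9 = tf.keras.layers.Dense(1)(d7)                   # 9th layer: real/fake logit
    d10 = tf.keras.layers.Dense(c)(d8)                  # 10th layer: class logits
    return tf.keras.Model(x, [d9, d10, d7, d8])
```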
Step 4, setting the input of the generator.
The input of the generator comprises two parts: a noise vector z and a label vector y.
Randomly sample a 62-dimensional vector from the uniform distribution U(-1,1) to form the noise vector z, and take class labels from the extended class label set L' to form the label vector y, as in the sketch below.
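A short sketch of this sampling step; the one-hot encoding of y is an assumption about how labels are fed to the network.

```python
import numpy as np

def sample_generator_inputs(batch, c, labels_pool, z_dim=62,
                            rng=np.random.default_rng()):
    """Draw z ~ U(-1, 1)^62 and label vectors y taken from the extended
    class label set L' (a pool of integer labels), one-hot encoded."""
    z = rng.uniform(-1.0, 1.0, size=(batch, z_dim)).astype("float32")
    picked = rng.choice(labels_pool, size=batch)
    y = np.eye(c, dtype="float32")[picked - 1]   # labels assumed to be 1..c
    return z, y, picked
```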
Step 5, generating the false sample $X_{fake}$ with the generator.
Input the noise vector z and the label vector y into the generator, which outputs through nonlinear mapping a false sample $X_{fake}$ of the hyperspectral image. The implementation steps are as follows:
First, input the noise vector z and the label vector y into the first fully connected layer of the generator, and output the first feature map $g_1$ after a fully connected transformation and a ReLU transformation in turn.
Next, input the first feature map $g_1$ into the second fully connected layer of the generator, apply a fully connected transformation, batch normalization, and a ReLU transformation in turn, and output the second feature map $g_2$.
Then, input the second feature map $g_2$ into the first strided convolutional layer of the generator, apply a strided convolution, batch normalization, and a ReLU transformation in turn, and output the third feature map $g_3$.
Finally, input the third feature map $g_3$ into the second strided convolutional layer of the generator, apply a strided convolution and a tanh transformation in turn, and output a fourth feature map of size 1 × d, namely the false sample $X_{fake}$, where d denotes the number of channels of the hyperspectral image.
Step 6, outputting the true/false prediction label $x_s$ and class prediction label $x_c$ of the false sample $X_{fake}$ with the discriminator.
Input the false sample $X_{fake}$ into the discriminator, which outputs through nonlinear mapping the true/false prediction label $x_s$ and class prediction label $x_c$ of the false sample. The implementation steps are as follows:
Step 6.1, input the false sample $X_{fake}$ into the first convolutional layer of the discriminator, and apply a convolution and a Leaky-ReLU transformation in turn to obtain the first output feature map $d_1$.
Step 6.2, input the first output feature map $d_1$ into the second-layer max-pooling layer of the discriminator to obtain the second output feature map $d_2$.
Step 6.3, input the second output feature map $d_2$ into the third convolutional layer of the discriminator, and apply a convolution, batch normalization, and a Leaky-ReLU transformation in turn to obtain the third output feature map $d_3$.
Step 6.4, input the third output feature map $d_3$ into the fourth-layer max-pooling layer of the discriminator to obtain the fourth output feature map $d_4$.
Step 6.5, input the fourth output feature map $d_4$ into the fifth convolutional layer of the discriminator, and apply a convolution, batch normalization, and a Leaky-ReLU transformation in turn to obtain the fifth output feature map $d_5$.
Step 6.6, input the fifth output feature map $d_5$ into the sixth-layer max-pooling layer of the discriminator to obtain the sixth output feature map $d_6$.
Step 6.7, input the sixth output feature map $d_6$ into the seventh fully connected layer of the discriminator, and apply a fully connected transformation, batch normalization, and a Leaky-ReLU transformation in turn to obtain the seventh output feature map $d_7$ of size 1 × 1024.
Step 6.8, input the seventh output feature map $d_7$ into the eighth and ninth fully connected layers of the discriminator to obtain the eighth output feature map $d_8$ of size 1 × 64 and the ninth output feature map $d_9$ of size 1 × 1.
Step 6.9, input the eighth output feature map $d_8$ into the tenth fully connected layer of the discriminator to obtain the tenth output feature map $d_{10}$ of size 1 × c.
Step 6.10, apply a sigmoid transformation to the ninth output feature map $d_9$ to obtain the true/false prediction label $x_s$ of size 1 × 1, and apply a softmax transformation to the tenth output feature map $d_{10}$ to output the class prediction label $x_c$ of size 1 × c.
Step 7, outputting the true/false prediction label $x'_s$ and class prediction label $x'_c$ of the extended training set X' with the discriminator.
Input the extended training set X' into the discriminator, which outputs through nonlinear mapping the true/false prediction label $x'_s$ and class prediction label $x'_c$ of the extended training set X'; the concrete implementation steps are the same as in step 6.
Step 8, constructing the loss functions of the generator and the discriminator.
8a) Loss function of the generator:
8a1) generate a vector $y_0$ with the same number of elements as the label vector y, each element value equal to 0;
8a2) calculate the cross entropy $L_S$ between the true/false prediction label $x_s$ of the false sample $X_{fake}$ and the vector $y_0$;
8a3) calculate the cross entropy $L_C$ between the class prediction label $x_c$ of the false sample $X_{fake}$ and the label vector y;
8a4) from the results of 8a2) and 8a3), obtain the generator loss function of the auxiliary classifier generative adversarial network: $L'_G = L_S + L_C$;
8a5) define the feature matching loss function $L_{FM} = \|d_{7,fake} - d_{7,true}\|_2 + \|d_{8,fake} - d_{8,true}\|_2$, where $d_{7,fake}$ and $d_{7,true}$ denote the seventh output feature maps of the false samples and of the extended training samples, respectively, and $d_{8,fake}$ and $d_{8,true}$ denote the eighth output feature maps of the false samples and of the extended training samples, respectively;
8a6) from the results of 8a4) and 8a5), obtain the loss function of the generator: $L_G = L'_G + L_{FM}$.
8b) The loss function of the discriminator still uses the discriminator loss function $L_D$ of the auxiliary classifier generative adversarial network, expressed as follows:

$$L_D = L'_S + L'_C - L_S + L_C$$

where $L'_S$ denotes the cross entropy between the true/false prediction label $x'_s$ of the extended training samples and $y_1$, $y_1$ being a vector with every element value equal to 1 and the same number of elements as the label vector y; $L'_C$ is the cross entropy between the class prediction label $x'_c$ of the extended training samples and the label vector y; $L_S$ is the cross entropy between the true/false prediction label $x_s$ of the false sample $X_{fake}$ and the vector $y_0$; and $L_C$ is the cross entropy between the class prediction label $x_c$ of the false sample $X_{fake}$ and the label vector y. A sketch of both losses follows.
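A sketch of both losses in TensorFlow, assuming the cross entropies are the usual sigmoid/softmax cross entropies, the discriminator returns logits as in the earlier sketches, and the feature matching norm is taken over batch-mean features; the sign convention follows the patent's formula $L_D = L'_S + L'_C - L_S + L_C$ as written.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

def generator_loss(s_fake, c_fake, y, d7_fake, d7_true, d8_fake, d8_true):
    """L_G = L'_G + L_FM, with L'_G = L_S + L_C as written in the patent."""
    L_S = bce(tf.zeros_like(s_fake), s_fake)   # CE between x_s and y_0 (zeros)
    L_C = cce(y, c_fake)                       # CE between x_c and label vector y
    L_FM = (tf.norm(tf.reduce_mean(d7_fake - d7_true, axis=0)) +
            tf.norm(tf.reduce_mean(d8_fake - d8_true, axis=0)))
    return L_S + L_C + L_FM

def discriminator_loss(s_true, c_true, s_fake, c_fake, y_true, y_fake):
    """L_D = L'_S + L'_C - L_S + L_C, as written in the patent."""
    L_Sp = bce(tf.ones_like(s_true), s_true)   # CE between x'_s and y_1 (ones)
    L_Cp = cce(y_true, c_true)                 # CE between x'_c and its labels
    L_S = bce(tf.zeros_like(s_fake), s_fake)   # CE between x_s and y_0
    L_C = cce(y_fake, c_fake)                  # CE between x_c and its labels
    return L_Sp + L_Cp - L_S + L_C
```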
Step 9, training the generator and the discriminator alternately.
9a) Initialize the iteration count t = 0; the maximum number of iterations is 1000;
9b) update the generator parameters with the loss function $L_G$ of the generator by the gradient descent method;
9c) update the discriminator parameters with the loss function $L_D$ of the discriminator by the gradient descent method;
9d) let t = t + 1;
9e) judge whether the number of iterations t equals 1000; if so, the trained discriminator is obtained and step 10 is executed; otherwise, return to step 9b). A sketch of this loop follows.
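The alternating updates can be sketched as follows, assuming the builders and losses above, plain gradient descent (the patent does not name a specific optimizer or learning rate), and a spectral dimension divisible by 4 so that generated and real samples have matching length.

```python
import numpy as np
import tensorflow as tf

def train(gen, disc, x_train, l_train, c, steps=1000, batch=64, lr=2e-4):
    """Alternate one generator and one discriminator update per iteration,
    for 1000 iterations (steps 9a-9e)."""
    g_opt = tf.keras.optimizers.SGD(lr)
    d_opt = tf.keras.optimizers.SGD(lr)
    rng = np.random.default_rng(0)
    eye = np.eye(c, dtype="float32")
    for t in range(steps):
        idx = rng.choice(len(x_train), batch)
        x_true = x_train[idx].astype("float32")[..., None]   # batch x ch x 1
        y_true = eye[l_train[idx] - 1]
        z, y_fake, _ = sample_generator_inputs(batch, c, l_train, rng=rng)
        with tf.GradientTape() as gt, tf.GradientTape() as dt:
            x_fake = gen([z, y_fake])
            s_f, c_f, d7_f, d8_f = disc(x_fake)
            s_t, c_t, d7_t, d8_t = disc(x_true)
            lg = generator_loss(s_f, c_f, y_fake, d7_f, d7_t, d8_f, d8_t)
            ld = discriminator_loss(s_t, c_t, s_f, c_f, y_true, y_fake)
        g_opt.apply_gradients(zip(gt.gradient(lg, gen.trainable_variables),
                                  gen.trainable_variables))
        d_opt.apply_gradients(zip(dt.gradient(ld, disc.trainable_variables),
                                  disc.trainable_variables))
```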
Step 10, training the support vector machine.
Input the extended training set X' into the trained discriminator to obtain the ninth output feature map, namely the discriminative feature; then input the discriminative feature into a support vector machine with a radial basis function kernel for nonlinear classification; and then search the optimal parameters of the support vector machine with a grid search method to obtain the trained support vector machine, as sketched below.
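With scikit-learn, this step might look as follows; the grid values are assumptions, not taken from the patent.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_svm(features, labels):
    """Grid-search an RBF-kernel SVM on the discriminative features
    extracted by the trained discriminator (step 10)."""
    grid = {"C": [1, 10, 100, 1000], "gamma": [1e-3, 1e-2, 1e-1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(features, labels)
    return search.best_estimator_
```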
Step 11, inputting the candidate test set Y' into the trained discriminator and the trained support vector machine to obtain the corresponding candidate class label set $C=\{c_j^q \mid j=1,\dots,N;\ q=1,\dots,k\}$.
For each test sample $y_j$, that is, for each $j \in \{1,2,\dots,N\}$, use a maximum voting algorithm to select the most probable label $c_j$ from the subset $C_j=\{c_j^1,c_j^2,\dots,c_j^k\}$ of the candidate label set C, and take $c_j$ as the class label of the test sample $y_j$.
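The maximum voting over the k candidate labels of each test sample can be sketched as:

```python
import numpy as np

def majority_vote(candidate_labels):
    """candidate_labels: N x k integer array, one predicted label per scale.
    Returns the most frequent label for each test sample (step 11)."""
    return np.array([np.bincount(row).argmax() for row in candidate_labels])
```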
The technical effects of the invention are further explained below in combination with simulation tests:
1. Simulation conditions
The dataset used in the present invention is the Pavia University dataset of Pavia, Italy, acquired by the ROSIS optical sensor. The dataset contains 9 ground-object classes in total and is 610 × 340 in size, with a spatial resolution of 1.3 meters per pixel. After removing the noise and water-vapor absorption bands, the remaining 103 bands were used for the classification experiments.
The simulation platform: Ubuntu 14.04 operating system, TensorFlow deep learning platform.
In the simulation experiments, the following three prior arts are adopted:
the Hyperspectral Image Classification Method proposed by Pan et al in "R-VCANet, A New Deep-Learning-Based Hyperspectral Image Classification Method, IEEE J.Sel.topics appl.Earth observer.Remote Sens., vol.10, No.5, pp.1975-1986, May.2017", is abbreviated as R-VCANet Method.
A Hyperspectral image Classification method, called HiFi-We method for short, was proposed by Pan et al in "historical guiding filtration-Based analysis Classification for Hyperspectral Images, IEEE trans. Geosci. remote Sens., vol.55, No.7, pp.4177-4189, July.2017".
The Hyperspectral Image Classification method, abbreviated as PPF-CNN method, proposed by Li et al in "Hyperspectral Image Classification Using Deep Pixel-Pair Features, IEEE trans. Geosci. remote Sens., vol.55, No.2, pp.844-853, Feb.2017".
2. Simulation content
The simulation experiment classifies the ground objects of the Pavia University hyperspectral dataset with the method of the invention and the three prior arts, the R-VCANet, HiFi-We, and PPF-CNN methods; the results are shown in FIG. 2, in which:
FIG. 2(a) shows the ground-truth map of the input Pavia University hyperspectral dataset;
FIG. 2(b) shows the result of classifying the ground objects of the Pavia University dataset with the R-VCANet method;
FIG. 2(c) shows the result of classifying the ground objects of the Pavia University dataset with the HiFi-We method;
FIG. 2(d) shows the result of classifying the ground objects of the Pavia University dataset with the PPF-CNN method;
FIG. 2(e) shows the result of classifying the ground objects of the Pavia University dataset with the method of the present invention.
3. Analysis of simulation results
As can be seen from FIG. 2, the three prior arts misclassify many samples of the classes Meadows and Asphalt. Compared with the prior art, the invention achieves better classification results on these two classes, because the superpixel segmentation algorithm effectively extracts the spatial-spectral features of the hyperspectral image and yields a smoother classification result.
In simulation result analysis, three evaluation indexes are adopted, specifically as follows:
OA, the overall accuracy, denotes the proportion of correctly classified samples among all samples; the larger the OA value, the better the classification.
AA, the average accuracy, denotes the mean of the per-class classification accuracies; the larger the AA value, the better the classification.
Kappa, the Kappa coefficient; the higher its value, the higher the classification accuracy achieved by the model. A computation sketch for all three indices follows.
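All three indices can be computed from a confusion matrix, for example:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def oa_aa_kappa(y_true, y_pred):
    """Overall accuracy, average (per-class) accuracy, and Kappa coefficient."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))
    kappa = cohen_kappa_score(y_true, y_pred)
    return oa, aa, kappa
```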
The classification results of the invention and the three prior arts on the Pavia University dataset in FIG. 2 are summarized statistically: the classification accuracy of each class of ground object is shown in Table 1, and the evaluation indices over all ground-object classes, namely the overall accuracy OA, the average accuracy AA, and the Kappa coefficient, are shown in Table 2.
TABLE 1 Classification accuracy of each type of ground object
TABLE 2 evaluation indexes of all the ground features
As can be seen from Table 1, the classification accuracy of most ground-object classes is improved, particularly that of the class Bricks: compared with R-VCANet, the invention improves the accuracy on Bricks by 7.87%; compared with HiFi-We, by 3.64%; and compared with PPF-CNN, by 12.21%.
As can be seen from Table 2, the overall accuracy OA, the average accuracy AA, and the Kappa coefficient of the invention are all greatly improved. Compared with the R-VCANet method, the invention improves the OA index by 9.57%, the AA index by 3.72%, and the Kappa index by 12.22%; compared with the HiFi-We method, it improves the OA index by 4.14%, the AA index by 1.46%, and the Kappa index by 5.35%; compared with PPF-CNN, it improves the OA index by 15.55%, the AA index by 7.87%, and the Kappa index by 19.69%.
In conclusion, compared with the prior art, the invention greatly improves classification accuracy.

Claims (9)

1. A hyperspectral image classification method based on superpixel sample expansion and a generative adversarial network, characterized by comprising the following steps:
(1) constructing an initial training set and a test set:
inputting a hyperspectral image $H=\{h_p\}_{p=1}^{T}$, where $h_p$ is the spectral vector formed by the reflectance values of the pixel p in each band, and T is the total number of pixels in the hyperspectral image; the hyperspectral image H comprises c classes of pixels, of which M pixels are labeled and N pixels are unlabeled, and each pixel is one sample;
taking the M labeled pixels as initial training samples to form the initial training set $X=\{x_i\}_{i=1}^{M}$, whose corresponding class label set is $L=\{l_i\}_{i=1}^{M}$, and taking the N unlabeled pixels as test samples to form the test set $Y=\{y_j\}_{j=1}^{N}$, where $x_i$ denotes the i-th initial training sample of the initial training set, $l_i$ denotes the class label of the i-th initial training sample, and $y_j$ denotes the j-th test sample of the test set;
(2) sample expansion is carried out by using a multi-scale superpixel segmentation method based on entropy rate:
(2a) extracting the first principal component of the hyperspectral image H with a principal component analysis algorithm to obtain a principal-component gray map; setting k different segmentation scales, with the number of superpixels at each scale being $S_q$, $q=1,2,\dots,k$; and performing k entropy-rate-based superpixel segmentations of different scales on the principal-component gray map to obtain k segmentation maps $G_q=\{g_u^q\}_{u=1}^{S_q}$, each containing $S_q$ superpixel blocks, where $g_u^q$ denotes the u-th superpixel block in the segmentation map $G_q$;
(2b) for the segmentation map $G_q$, forming the set $\Phi_i^q$ of all initial training samples belonging to the same superpixel block as the initial training sample $x_i$, and average-pooling the set $\Phi_i^q$ to obtain the average pooling feature $\bar{x}_i^q$ of $x_i$ on the segmentation map $G_q$; in the same way, forming the set $\Psi_j^q$ of all test samples belonging to the same superpixel block as the test sample $y_j$, and average-pooling the set $\Psi_j^q$ to obtain the average pooling feature $\bar{y}_j^q$ of $y_j$ on the segmentation map $G_q$;
(2c) taking the average pooling feature $\bar{x}_i^q$ of each initial training sample $x_i$ on each segmentation map $G_q$ and using the $\bar{x}_i^q$ as extended training samples to construct the extended training set $X'=\{\bar{x}_i^q \mid i=1,\dots,M;\ q=1,\dots,k\}$; taking the class label $\bar{l}_i^q$ of the extended training sample $\bar{x}_i^q$ to be the same as the class label $l_i$ of $x_i$, obtaining the corresponding extended class label set $L'=\{\bar{l}_i^q \mid i=1,\dots,M;\ q=1,\dots,k\}$; and taking the average pooling feature $\bar{y}_j^q$ of each test sample $y_j$ on each segmentation map $G_q$ and using the $\bar{y}_j^q$ as candidate test samples to compose the candidate test set $Y'=\{\bar{y}_j^q \mid j=1,\dots,N;\ q=1,\dots,k\}$;
(3) Constructing a generative adversarial network consisting of a generator and a discriminator:
(3a) building a generator comprising two fully connected layers and two strided convolutional layers, and setting the parameters of each layer;
(3b) building a discriminator comprising three convolutional layers, three max-pooling layers, and four fully connected layers, and setting the parameters of each layer;
(3c) initializing the generator and the discriminator: the weights of the convolutional layers, strided convolutional layers, and fully connected layers are initialized so that each element value follows the normal distribution $N(0, 0.02^2)$, and the biases are initialized to tensors in which every element value is 0;
(4) randomly sampling from the uniform distribution U(-1,1) to generate a 62-dimensional noise vector z, and taking class labels from the extended class label set L' to form a label vector y;
(5) inputting the noise vector z and the label vector y into the generator, which outputs through nonlinear mapping a false sample $X_{fake}$ of the hyperspectral image;
(6) inputting the false sample $X_{fake}$ into the discriminator, which outputs through nonlinear mapping the true/false prediction label $x_s$ and class prediction label $x_c$ of $X_{fake}$;
(7) inputting the extended training set X' into the discriminator, which outputs through nonlinear mapping the true/false prediction label $x'_s$ and class prediction label $x'_c$ of X';
(8) Constructing the loss functions of the generative adversarial network:
(8a) adding the generator loss function $L'_G$ of the auxiliary classifier generative adversarial network and the feature matching loss function $L_{FM}$ to obtain the loss function $L_G$ of the generator;
(8b) for the discriminator, still using the discriminator loss function $L_D$ of the auxiliary classifier generative adversarial network;
(9) Alternately training the generator and the discriminator:
(9a) initializing the iteration count t = 0, with the maximum number of iterations being 1000;
(9b) updating the generator parameters with the loss function of the generator by the gradient descent method;
(9c) updating the discriminator parameters with the loss function of the discriminator by the gradient descent method;
(9d) letting t = t + 1;
(9e) judging whether the number of iterations t equals 1000; if so, obtaining the trained discriminator and executing (10); if not, returning to (9b);
(10) inputting the extended training set X' into the trained discriminator to obtain discriminative features, and training a support vector machine with these features to obtain a trained support vector machine;
(11) obtaining class labels for all test samples in the test set:
inputting the candidate test set Y' into the trained discriminator and the trained support vector machine to obtain the corresponding candidate label set $C=\{c_j^q \mid j=1,\dots,N;\ q=1,\dots,k\}$; for each $j \in \{1,2,\dots,N\}$, using a maximum voting algorithm to select the most probable label $c_j$ from the subset $C_j=\{c_j^1,c_j^2,\dots,c_j^k\}$ of the candidate label set C, and taking $c_j$ as the class label of the test sample $y_j$.
2. The method of claim 1, wherein in (2b) the set $\Phi_i^q$ is average-pooled to obtain the average pooling feature $\bar{x}_i^q$ of $x_i$ on the segmentation map $G_q$ by the following formula:

$$\bar{x}_i^q=\frac{1}{\left|g_u^q\right|}\sum_{x_r\in\Phi_i^q}x_r$$

where $\bar{x}_i^q$ denotes the average pooling feature of $x_i$ on the segmentation map $G_q$, $g_u^q$ denotes the u-th superpixel block of the segmentation map $G_q$ to which $x_i$ belongs, $\left|g_u^q\right|$ is the number of pixels contained in the superpixel block $g_u^q$, and $x_r$ denotes the r-th initial training sample in the set $\Phi_i^q$.
3. The method according to claim 1, wherein the generator built in (3a) is structured, from left to right, as a first fully connected layer, a second fully connected layer, a first strided convolutional layer, and a second strided convolutional layer; the number of nodes of the first fully connected layer is 1024, and the number of nodes of the second fully connected layer is set as a multiple of $\lceil ch/4 \rceil$ so that its output can be reshaped as the input of the strided convolutional layers, where $\lceil\cdot\rceil$ denotes the rounding-up operation and ch denotes the dimension of the spectral vector $h_p$; the convolution kernel size of the first strided convolutional layer is 1 × 3, its stride is 1 × 2, and its number of channels is 64; the convolution kernel size of the second strided convolutional layer is 1 × 3, its stride is 1 × 2, and its number of channels is 1.
4. The method according to claim 1, wherein the discriminator built in (3b) has 10 layers, from left to right:
the first layer is a convolutional layer with kernel size 1 × 3, stride 1 × 1, and 64 channels;
the second layer is a max-pooling layer with pooling window 1 × 2 and stride 1 × 2;
the third layer is a convolutional layer with kernel size 1 × 3, stride 1 × 1, and 128 channels;
the fourth layer is a max-pooling layer with pooling window 1 × 2 and stride 1 × 2;
the fifth layer is a convolutional layer with kernel size 1 × 3, stride 1 × 1, and 512 channels;
the sixth layer is a max-pooling layer with pooling window 1 × 2 and stride 1 × 2;
the seventh layer is a fully connected layer with 1024 nodes;
the eighth layer is a fully connected layer with 64 nodes;
the ninth layer is a fully connected layer with 1 node;
the tenth layer is a fully connected layer with c nodes, where c is the number of classes of the hyperspectral image H.
5. The method according to claim 1 or 3, wherein the generator in (5) outputs the false sample $X_{fake}$ of the hyperspectral image, implemented as follows:
(5a) inputting the noise vector z and the label vector y into the first fully connected layer of the generator, applying a fully connected transformation and a ReLU transformation in turn, and outputting the first feature map $g_1$;
(5b) inputting the first feature map $g_1$ into the second fully connected layer of the generator, applying a fully connected transformation, batch normalization, and a ReLU transformation in turn, and outputting the second feature map $g_2$;
(5c) inputting the second feature map $g_2$ into the first strided convolutional layer of the generator, applying a strided convolution, batch normalization, and a ReLU transformation in turn, and outputting the third feature map $g_3$;
(5d) inputting the third feature map $g_3$ into the second strided convolutional layer of the generator, applying a strided convolution and a tanh transformation in turn, and outputting a fourth feature map of size 1 × d, namely the false sample $X_{fake}$, where d denotes the number of channels of the hyperspectral image.
6. The method according to claim 1 or 4, wherein in (6) the false sample $X_{fake}$ is input into the discriminator, which outputs through nonlinear mapping the true/false prediction label $x_s$ and class prediction label $x_c$ of $X_{fake}$, implemented as follows:
(6a) inputting the false sample $X_{fake}$ into the first convolutional layer of the discriminator, and applying a convolution and a Leaky-ReLU transformation in turn to obtain the first output feature map $d_1$;
(6b) inputting the first output feature map $d_1$ into the second-layer max-pooling layer of the discriminator to obtain the second output feature map $d_2$;
(6c) inputting the second output feature map $d_2$ into the third convolutional layer of the discriminator, and applying a convolution, batch normalization, and a Leaky-ReLU transformation in turn to obtain the third output feature map $d_3$;
(6d) inputting the third output feature map $d_3$ into the fourth-layer max-pooling layer of the discriminator to obtain the fourth output feature map $d_4$;
(6e) inputting the fourth output feature map $d_4$ into the fifth convolutional layer of the discriminator, and applying a convolution, batch normalization, and a Leaky-ReLU transformation in turn to obtain the fifth output feature map $d_5$;
(6f) inputting the fifth output feature map $d_5$ into the sixth-layer max-pooling layer of the discriminator to obtain the sixth output feature map $d_6$;
(6g) inputting the sixth output feature map $d_6$ into the seventh fully connected layer of the discriminator, and applying a fully connected transformation, batch normalization, and a Leaky-ReLU transformation in turn to obtain the seventh output feature map $d_7$ of size 1 × 1024;
(6h) inputting the seventh output feature map $d_7$ into the eighth and ninth fully connected layers of the discriminator to obtain the eighth output feature map $d_8$ of size 1 × 64 and the ninth output feature map $d_9$ of size 1 × 1;
(6i) inputting the eighth output feature map $d_8$ into the tenth fully connected layer of the discriminator to obtain the tenth output feature map $d_{10}$ of size 1 × c, where c is the number of classes of the hyperspectral image;
(6j) applying a sigmoid transformation to the ninth output feature map $d_9$ to obtain the true/false prediction label $x_s$ of size 1 × 1, and applying a softmax transformation to the tenth output feature map $d_{10}$ to output the class prediction label $x_c$ of size 1 × c.
7. The method according to claim 6, wherein in (8a) the generator loss function $L'_G$ of the auxiliary classifier generative adversarial network and the feature matching loss function $L_{FM}$ are added to obtain the loss function $L_G$ of the generator, implemented as follows:
(8a1) generating a vector $a_0$ with the same number of elements as the label vector y, each element value equal to 0;
(8a2) calculating the cross entropy $L_S$ between the true/false prediction label $x_s$ of the false sample $X_{fake}$ and the vector $a_0$;
(8a3) calculating the cross entropy $L_C$ between the class prediction label $x_c$ of the false sample $X_{fake}$ and the label vector y;
(8a4) from the results of (8a2) and (8a3), obtaining the generator loss function of the auxiliary classifier generative adversarial network: $L'_G = L_S + L_C$;
(8a5) defining the feature matching loss function $L_{FM} = \|d_{7,fake} - d_{7,true}\|_2 + \|d_{8,fake} - d_{8,true}\|_2$, where $d_{7,fake}$ and $d_{7,true}$ denote the seventh output feature maps of the false samples and of the extended training samples, respectively, and $d_{8,fake}$ and $d_{8,true}$ denote the eighth output feature maps of the false samples and of the extended training samples, respectively;
(8a6) from the results of (8a4) and (8a5), obtaining the loss function of the generator: $L_G = L'_G + L_{FM}$.
8. The method of claim 7, wherein the discriminator loss function $L_D$ of the auxiliary classifier generative adversarial network in (8b) is expressed as follows:

$$L_D = L'_S + L'_C - L_S + L_C$$

where $L'_S$ denotes the cross entropy between the true/false prediction label $x'_s$ of the extended training samples and $a_1$, $a_1$ being a vector with every element value equal to 1 and the same number of elements as the label vector y; $L'_C$ is the cross entropy between the class prediction label $x'_c$ of the extended training samples and the label vector y; $L_S$ is the cross entropy between the true/false prediction label $x_s$ of the false sample $X_{fake}$ and the vector $a_0$; and $L_C$ is the cross entropy between the class prediction label $x_c$ of the false sample $X_{fake}$ and the label vector y.
9. The method of claim 1, wherein the support vector machine in (10) is trained by inputting the extended training set into the trained discriminator to obtain the ninth output feature map, namely the discriminative feature; inputting the discriminative feature into a support vector machine with a radial basis function kernel for nonlinear classification; and then searching the optimal parameters of the support vector machine with a grid search method to obtain the trained support vector machine.
CN201910201106.0A 2019-03-18 2019-03-18 Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network Active CN109948693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910201106.0A CN109948693B (en) 2019-03-18 2019-03-18 Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910201106.0A CN109948693B (en) 2019-03-18 2019-03-18 Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network

Publications (2)

Publication Number Publication Date
CN109948693A CN109948693A (en) 2019-06-28
CN109948693B true CN109948693B (en) 2021-09-28

Family

ID=67009002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910201106.0A Active CN109948693B (en) 2019-03-18 Hyperspectral image classification method based on superpixel sample expansion and generative adversarial network

Country Status (1)

Country Link
CN (1) CN109948693B (en)

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110456355B (en) * 2019-08-19 2021-12-24 河南大学 Radar echo extrapolation method based on long-time and short-time memory and generation countermeasure network
CN110826059B (en) * 2019-09-19 2021-10-15 浙江工业大学 Method and device for defending black box attack facing malicious software image format detection model
CN110688968B (en) * 2019-09-30 2022-12-02 西安电子科技大学 Hyperspectral target detection method based on multi-instance deep convolutional memory network
CN110781976B (en) * 2019-10-31 2021-01-05 重庆紫光华山智安科技有限公司 Extension method of training image, training method and related device
CN110909814B (en) * 2019-11-29 2023-05-26 华南理工大学 Classification method based on feature separation
CN111079602B (en) * 2019-12-06 2024-02-09 长沙千视通智能科技有限公司 Vehicle fine granularity identification method and device based on multi-scale regional feature constraint
CN111104982B (en) * 2019-12-20 2021-09-24 电子科技大学 Label-independent cross-task confrontation sample generation method
CN111310791A (en) * 2020-01-17 2020-06-19 电子科技大学 Dynamic progressive automatic target identification method based on small sample number set
CN111275108A (en) * 2020-01-20 2020-06-12 国网山东省电力公司枣庄供电公司 Method for performing sample expansion on partial discharge data based on generation countermeasure network
CN111461264B (en) * 2020-05-25 2023-06-13 南京大学 Scalable modularized image recognition method based on generation of countermeasure network
CN111695467B (en) * 2020-06-01 2023-05-30 西安电子科技大学 Spatial spectrum full convolution hyperspectral image classification method based on super-pixel sample expansion
CN111860124B (en) * 2020-06-04 2024-04-02 西安电子科技大学 Remote sensing image classification method based on space spectrum capsule generation countermeasure network
CN111723731B (en) * 2020-06-18 2023-09-29 西安电子科技大学 Hyperspectral image classification method, storage medium and equipment based on spatial spectrum convolution kernel
CN111832428B (en) * 2020-06-23 2024-02-23 北京科技大学 Data enhancement method applied to cold rolling mill broken belt fault diagnosis
CN111638216A (en) * 2020-06-30 2020-09-08 黑龙江大学 Beet-related disease analysis method for unmanned aerial vehicle system for monitoring plant diseases and insect pests
CN111861924B (en) * 2020-07-23 2023-09-22 成都信息工程大学 Cardiac magnetic resonance image data enhancement method based on evolutionary GAN
CN112215296B (en) * 2020-10-21 2023-05-05 红相股份有限公司 Infrared image recognition method based on transfer learning and storage medium
CN112307926B (en) * 2020-10-26 2022-12-06 西北工业大学 Acoustic passive ship target classification method based on generation countermeasure network
CN112597702B (en) * 2020-12-21 2022-07-19 电子科技大学 Pneumatic modeling generation type confrontation network model training method based on radial basis function
CN112733769B (en) * 2021-01-18 2023-04-07 西安电子科技大学 Hyperspectral image classification method based on multiband entropy rate superpixel segmentation
CN112926397B (en) * 2021-01-28 2022-03-01 中国石油大学(华东) SAR image sea ice type classification method based on two-round voting strategy integrated learning
CN112784930B (en) * 2021-03-17 2022-03-04 西安电子科技大学 CACGAN-based HRRP identification database sample expansion method
CN113096080B (en) * 2021-03-30 2024-01-16 四川大学华西第二医院 Image analysis method and system
CN113095218B (en) * 2021-04-09 2024-01-26 西北工业大学 Hyperspectral image target detection algorithm
CN113435243A (en) * 2021-05-14 2021-09-24 西安电子科技大学 Hyperspectral true downsampling fuzzy kernel estimation method
CN113222052B (en) * 2021-05-25 2023-06-23 云南电网有限责任公司电力科学研究院 Method for generating antagonistic neural network for hyperspectral image classification of power equipment
CN113572710A (en) * 2021-07-21 2021-10-29 应急管理部四川消防研究所 WVD time-frequency analysis cross item suppression method and system based on generation countermeasure network and storage medium
CN113516656B (en) * 2021-09-14 2021-12-14 浙江双元科技股份有限公司 Defect image data processing simulation method based on ACGAN and Cameralink cameras
CN114049567B (en) * 2021-11-22 2024-02-23 齐鲁工业大学 Adaptive soft label generation method and application in hyperspectral image classification
CN114419360B (en) * 2021-11-23 2022-09-02 东北电力大学 Photovoltaic panel infrared thermal image classification and hot spot positioning method
CN114460013B (en) * 2022-01-28 2023-10-17 自然资源部第一海洋研究所 Coastal wetland vegetation overground biomass GAN model self-learning remote sensing inversion method
CN114863293B (en) * 2022-05-07 2023-07-18 中国石油大学(华东) Hyperspectral oil spill detection method based on double-branch GAN network
CN114858782B (en) * 2022-07-05 2022-09-27 中国民航大学 Milk powder doping non-directional detection method based on Raman hyperspectral countermeasure discriminant model
CN114863225B (en) * 2022-07-06 2022-10-04 腾讯科技(深圳)有限公司 Image processing model training method, image processing model generation device, image processing model equipment and image processing model medium
CN115205692B (en) * 2022-09-16 2022-11-29 成都戎星科技有限公司 Typical feature intelligent identification and extraction method based on generation of countermeasure network
CN117077141A (en) * 2023-10-13 2023-11-17 国网山东省电力公司鱼台县供电公司 Smart power grid malicious software detection method and system
CN117612020A (en) * 2024-01-24 2024-02-27 西安宇速防务集团有限公司 SGAN-based detection method for resisting neural network remote sensing image element change
CN117648643B (en) * 2024-01-30 2024-04-16 山东神力索具有限公司 Rigging predictive diagnosis method and device based on artificial intelligence

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826160A (en) * 2010-03-31 2010-09-08 北京航空航天大学 Hyperspectral image classification method based on immune evolutionary strategy

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7400770B2 (en) * 2002-11-06 2008-07-15 Hrl Laboratories Method and apparatus for automatically extracting geospatial features from multispectral imagery suitable for fast and robust extraction of landmarks
US8515201B1 (en) * 2008-09-18 2013-08-20 Stc.Unm System and methods of amplitude-modulation frequency-modulation (AM-FM) demodulation for image and video processing
CN103034863B (en) * 2012-12-24 2015-08-12 重庆市勘测院 The remote sensing image road acquisition methods of a kind of syncaryon Fisher and multiple dimensioned extraction
CN106503727B (en) * 2016-09-30 2019-09-24 西安电子科技大学 A kind of method and device of classification hyperspectral imagery
CN107563355B (en) * 2017-09-28 2021-04-02 哈尔滨工程大学 Hyperspectral anomaly detection method based on generation of countermeasure network
CN108764173B (en) * 2018-05-31 2021-09-03 西安电子科技大学 Hyperspectral image classification method based on multi-class generation countermeasure network
CN109145992B (en) * 2018-08-27 2021-07-20 西安电子科技大学 Hyperspectral image classification method for cooperatively generating countermeasure network and spatial spectrum combination

Similar Documents

Publication Publication Date Title
CN109948693B (en) Hyperspectral image classification method based on superpixel sample expansion and generation countermeasure network
CN109145992B (en) Hyperspectral image classification method for cooperatively generating countermeasure network and spatial spectrum combination
CN110084159B (en) Hyperspectral image classification method based on combined multistage spatial spectrum information CNN
CN108764173B (en) Hyperspectral image classification method based on multi-class generation countermeasure network
CN113705526B (en) Hyperspectral remote sensing image classification method
Westphal et al. Document image binarization using recurrent neural networks
CN103440505B (en) The Classification of hyperspectral remote sensing image method of space neighborhood information weighting
CN109598306B (en) Hyperspectral image classification method based on SRCM and convolutional neural network
CN107145836B (en) Hyperspectral image classification method based on stacked boundary identification self-encoder
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN108229551B (en) Hyperspectral remote sensing image classification method based on compact dictionary sparse representation
CN112052755A (en) Semantic convolution hyperspectral image classification method based on multi-path attention mechanism
Bai et al. NHL pathological image classification based on hierarchical local information and GoogLeNet-based representations
CN103886342A (en) Hyperspectral image classification method based on spectrums and neighbourhood information dictionary learning
CN114821164A (en) Hyperspectral image classification method based on twin network
CN105069478A (en) Hyperspectral remote sensing surface feature classification method based on superpixel-tensor sparse coding
CN111914728A (en) Hyperspectral remote sensing image semi-supervised classification method and device and storage medium
Jahan et al. Inverse coefficient of variation feature and multilevel fusion technique for hyperspectral and LiDAR data classification
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
Thirumaladevi et al. Remote sensing image scene classification by transfer learning to augment the accuracy
Jain et al. M-ary Random Forest-A new multidimensional partitioning approach to Random Forest
CN111222545A (en) Image classification method based on linear programming incremental learning
CN114863173A (en) Land resource audit-oriented self-interaction high-attention spectrum image classification method
CN117115675A (en) Cross-time-phase light-weight spatial spectrum feature fusion hyperspectral change detection method, system, equipment and medium
Li et al. Using improved ICA method for hyperspectral data classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant