CN112560960A - Hyperspectral image classification method and device and computing equipment - Google Patents

Hyperspectral image classification method and device and computing equipment

Info

Publication number
CN112560960A
CN112560960A (application CN202011498866.1A)
Authority
CN
China
Prior art keywords
training set
training
sample
samples
neural network
Prior art date
Legal status
Granted
Application number
CN202011498866.1A
Other languages
Chinese (zh)
Other versions
CN112560960B (en)
Inventor
樊硕
Current Assignee
Beijing Moviebook Technology Corp ltd
Original Assignee
Beijing Moviebook Technology Corp ltd
Priority date
Filing date
Publication date
Application filed by Beijing Moviebook Technology Corp ltd filed Critical Beijing Moviebook Technology Corp ltd
Priority to CN202011498866.1A priority Critical patent/CN112560960B/en
Publication of CN112560960A publication Critical patent/CN112560960A/en
Application granted granted Critical
Publication of CN112560960B publication Critical patent/CN112560960B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Pattern recognition; classification techniques
    • G06N20/00: Machine learning
    • G06N3/045: Neural networks; combinations of networks


Abstract

The application discloses a hyperspectral image classification method and device and computing equipment. The method comprises the following steps: expanding the original training set by adopting a semi-supervised learning strategy to obtain a first training set D; expanding the first training set by adopting an orthogonal complement subspace projection (OCSP) algorithm to obtain a second training set D'; training a single-convolutional-layer neural network with FD to obtain a trained single-convolutional-layer neural network, where FD = {D, D'}; and classifying the hyperspectral image to be classified by using the trained single-convolutional-layer neural network. The device comprises: a first data expansion module, a second data expansion module, a training module and a classification module. The computing device comprises a memory, a processor, and a computer program stored in the memory and executable by the processor, wherein the processor implements the methods described herein when executing the computer program.

Description

Hyperspectral image classification method and device and computing equipment
Technical Field
The application relates to the field of hyperspectral image classification, and in particular to techniques for expanding training samples.
Background
A hyperspectral sensor can capture hundreds of spectral bands at high spectral resolution. Hyperspectral images (HSI) are applicable in many fields, such as land-cover detection, environmental monitoring, medical diagnostics, and military reconnaissance. Hyperspectral classification, which assigns a label to each pixel in a hyperspectral image, is an important research topic. Traditional classical algorithms such as K-nearest neighbors (KNN), maximum likelihood classification (MLC), support vector machines (SVM) and artificial neural networks (ANN) have been successfully applied to hyperspectral classification with acceptable accuracy.
The detailed and rich spectral information contained in a hyperspectral image allows a more precise distinction between different classes. Owing to practical limitations, the cost of the high spectral resolution of HSI is limited spatial resolution, which leads to the widespread existence of mixed pixels. Each mixed pixel contains more than one class, and its spectral response is effectively a blend of the responses of the various materials present in the instantaneous field of view (IFOV) of the sensor; conventional pixel-level hard classification is therefore not suitable for mixed-pixel classification.
Sub-pixel mapping is a well-known technique for solving this problem. Each mixed pixel is divided into several sub-pixels, each sub-pixel being assigned to a single class. Sub-pixel mapping takes as input the abundance of each class within the mixed pixel and predicts the spatial distribution of the sub-pixels therein. That is, sub-pixel mapping outputs a hard classification map with higher spatial resolution.
On the other hand, the curse of dimensionality of HSI requires a large number of training samples to ensure the accuracy of supervised HSI classification. In practice, however, the available training samples are often very limited.
Deep neural networks are capable of learning high-level features through deep learning. The stacked autoencoder (SAE), the deep belief network (DBN) and the convolutional neural network (CNN) are typical deep architectures applicable to vision-based problems; the CNN in particular, with its distinctive local receptive fields, plays a major role in image classification. The CNN is a typical supervised model that requires a large training data set to realize its potential, whereas hyperspectral images can only provide a limited number of labeled samples. Nonetheless, the CNN is still widely used and provides better performance than the SVM in different implementations of hyperspectral image classification. All the aforementioned algorithms depend on a training set with a balanced distribution.
In most cases, traditional hyperspectral classification algorithms tend to perform better on large classes than on small ones, which means that these algorithms focus only on improving the overall accuracy and neglect class-specific accuracy. It is commonly accepted in hyperspectral image classification that correctly classifying large classes contributes more to the overall accuracy than correctly classifying small classes. In practice, however, the correct classification of small classes is more important, since they are usually the foreground classes of interest, and the number of small classes far exceeds the number of large classes. Recent research has therefore focused on the imbalanced-data problem, with particular attention to small sample sets or categories.
In summary, hyperspectral classification faces the following problems:
1. hyperspectral images can only provide a limited number of labeled samples, which cannot meet the demands imposed by the CNN and by the curse of dimensionality of HSI;
2. the accuracy of sub-pixel mapping is greatly affected by the limited number of hyperspectral training samples.
Disclosure of Invention
It is an object of the present application to overcome the above problems, or at least to partially solve or mitigate them.
According to one aspect of the application, a hyperspectral image classification method is provided, and comprises the following steps:
expanding the original training set by adopting a semi-supervised learning strategy to obtain a first training set D;
expanding the first training set by adopting an orthogonal complement subspace projection (OCSP) algorithm to obtain a second training set D';
training a single-convolutional-layer neural network with FD to obtain a trained single-convolutional-layer neural network, where FD = {D, D'};
and classifying the hyperspectral images to be classified by using the trained single convolutional layer neural network.
Optionally, the expanding the original training set by using the semi-supervised learning strategy to obtain the first training set D includes:
initializing a set D_new^(k) to an empty set, where k denotes the cycle number;
representing each unlabeled training sample X_i^v in the neighborhood of the current training set D^(k) as a linear combination X_i^v ≈ D^(k)·α_i^v of the current m training samples, and calculating the coefficient α_i^v = argmin_α ‖X_i^v − D^(k)·α‖₂² + λ‖α‖₂², where v denotes the direct neighborhood pixels of the training sample X_i in the four directions up, down, left and right, v = 1, 2, 3, 4, i = 1, 2, …, n, D^(k) denotes the set of training samples in the k-th iteration cycle, λ is the global regularization parameter, and m is the number of training samples in D^(k);
calculating, according to the coefficient α_i^v, the fractional abundance p_{i,c}^v with which X_i^v belongs to each class c;
updating D_new^(k) with the samples X_i^v: if max_c p_{i,c}^v > Td, assigning the sample X_i^v the class label argmax_c p_{i,c}^v and adding the sample to D_new^(k), where Td is a preset threshold; in the same way, updating D_new^(k) with the four direct-neighborhood samples of every training sample in D^(k), thereby obtaining the augmented training set D^(k) ∪ D_new^(k) of the k-th cycle;
if the number of training samples of the augmented training set whose pixels are not surrounded by the augmented training set meets the requirement, taking the augmented training set as the first training set D; otherwise, entering the (k+1)-th cycle.
Optionally, the expanding the first training set D by using the OCSP algorithm to obtain a second training set D' includes:
generating an artificial sample set R according to the spectral range of the training samples in the first training set D;
applying a gradient constraint on the artificial sample set R to filter the samples in it, obtaining a synthetic sample set Ru;
computing the corresponding orthogonal complement subspace projection P = I − D·D^#, where I is an identity matrix and D^# denotes the pseudo-inverse of D; calculating P·Ru to obtain the projections of the sample set Ru onto the orthogonal subspace of the first training set D; and selecting the samples of Ru whose projection values are smaller than Ng as the training set finally obtained by OCSP, i.e. the second training set D', where Ng is a predefined parameter.
Optionally, the single-convolutional-layer neural network sequentially includes an input layer, a convolutional layer, a max-pooling layer, a fully-connected layer, and an output layer.
Optionally, the convolutional layer and the fully-connected layer use tanh as the activation function, the max-pooling layer performs max pooling, and the output layer uses softmax as the activation function.
According to the hyperspectral image classification method, the current training set is expanded by an iterative semi-supervised learning strategy, and OCSP is then adopted for further sample expansion, so that a sufficient number of training samples can be obtained. This improves the hyperspectral classification accuracy of the convolutional neural network and enhances the overall hyperspectral image classification performance.
According to another aspect of the present application, there is provided a hyperspectral image classification apparatus including:
a first data expansion module configured to expand an original training set by using a semi-supervised learning strategy to obtain a first training set D;
a second data expansion module configured to expand the first training set by adopting an OCSP algorithm to obtain a second training set D';
a training module configured to train a single-convolutional-layer neural network with FD to obtain a trained single-convolutional-layer neural network, where FD = {D, D'}; and
a classification module configured to classify the hyperspectral image to be classified using the trained single convolutional layer neural network.
Optionally, the first data expansion module includes:
an initialization submodule configured to initialize a set D_new^(k) to an empty set, where k denotes the cycle number;
a coefficient calculation submodule configured to represent each unlabeled training sample X_i^v in the neighborhood of the current training set D^(k) as a linear combination X_i^v ≈ D^(k)·α_i^v of the current m training samples and to calculate the coefficient α_i^v = argmin_α ‖X_i^v − D^(k)·α‖₂² + λ‖α‖₂², where i = 1, 2, …, n, D^(k) denotes the set of training samples in the k-th iteration cycle, λ is the global regularization parameter, and m is the number of training samples in D^(k);
a fractional abundance calculation submodule configured to calculate, according to the coefficient α_i^v, the fractional abundance p_{i,c}^v with which X_i^v belongs to each class c;
an update submodule configured to update D_new^(k) with the samples X_i^v: if max_c p_{i,c}^v > Td, the sample X_i^v is assigned the class label argmax_c p_{i,c}^v and added to D_new^(k), where Td is a preset threshold; D_new^(k) is updated in the same way with the four direct-neighborhood samples of every training sample in D^(k), yielding the augmented training set D^(k) ∪ D_new^(k) of the k-th cycle; and
a judgment submodule configured to take the augmented training set as the first training set D if the number of training samples of the augmented training set whose pixels are not surrounded by the augmented training set meets the requirement, and otherwise to enter the (k+1)-th cycle.
Optionally, the second data expansion module includes:
an artificial sample set generation submodule configured to generate an artificial sample set R according to the spectral range of the training samples in the first training set D;
a filtering submodule configured to apply a gradient constraint to the artificial sample set R to filter the samples in it, obtaining a synthetic sample set Ru; and
a sample selection submodule configured to compute the corresponding orthogonal complement subspace projection P = I − D·D^#, where I is an identity matrix and D^# denotes the pseudo-inverse of D, to calculate P·Ru to obtain the projections of the sample set Ru onto the orthogonal subspace of the first training set D, and to select the samples of Ru whose projection values are smaller than Ng as the training set finally obtained by OCSP, i.e. the second training set D', where Ng is a predefined parameter.
Optionally, the single-convolutional-layer neural network sequentially comprises an input layer, a convolutional layer, a max-pooling layer, a fully-connected layer and an output layer, wherein the convolutional layer and the fully-connected layer use tanh as the activation function, the max-pooling layer performs max pooling, and the output layer uses softmax as the activation function.
The hyperspectral image classification device expands the current training set with an iterative semi-supervised learning strategy and adopts OCSP for further sample expansion, so that a sufficient number of training samples can be obtained, improving the hyperspectral classification accuracy of the convolutional neural network and enhancing the classification performance of the device.
According to a third aspect of the present application, there is provided a computing device comprising a memory, a processor and a computer program stored in the memory and executable by the processor, wherein the processor implements the method of the present application when executing the computer program.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic flow chart of a hyperspectral image classification method according to an embodiment of the application;
FIG. 2 is a schematic flow chart of step S1 in FIG. 1;
FIG. 3 is a schematic diagram of training data set enhancement in a single cycle of a semi-supervised learning strategy according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of step S2 in FIG. 1;
FIG. 5 is a schematic structural diagram of a hyperspectral image classification apparatus according to an embodiment of the application;
FIG. 6 is a schematic diagram of a first data expansion module of FIG. 5;
FIG. 7 is a schematic diagram of a second data expansion module shown in FIG. 5;
FIG. 8 is a schematic block diagram of a computing device according to one embodiment of the present application;
FIG. 9 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
FIG. 1 is a schematic flow chart of a hyperspectral image classification method according to an embodiment of the application. The method may generally include:
step S1, expanding the original training set by adopting a semi-supervised learning strategy to obtain a first training set D;
step S2, expanding the first training set by adopting an OCSP algorithm to obtain a second training set D';
step S3, training a single-convolutional-layer neural network with FD to obtain a trained single-convolutional-layer neural network, where FD = {D, D'};
and step S4, classifying the hyperspectral image to be classified by using the trained single-convolutional-layer neural network. The overall flow of these four steps is sketched in code below.
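The following is a minimal Python sketch of this flow; the helper names expand_semi_supervised, expand_ocsp and build_single_conv_net, as well as the training hyper-parameters, are assumptions of the sketch that stand in for the procedures elaborated below, not the literal implementation of the application.

import numpy as np

def classify_hyperspectral(original_set, original_labels, image_pixels):
    # Step S1: semi-supervised expansion of the original training set -> D
    D, y_D = expand_semi_supervised(original_set, original_labels, image_pixels)
    # Step S2: OCSP-based expansion of the first training set -> D'
    D_prime, y_prime = expand_ocsp(D, y_D)
    # Step S3: train the single-convolutional-layer network on FD = {D, D'}
    FD = np.concatenate([D, D_prime])
    y_FD = np.concatenate([y_D, y_prime])
    model = build_single_conv_net(n1=FD.shape[1], n5=int(y_FD.max()))
    model.fit(FD[..., np.newaxis], y_FD - 1, epochs=50, batch_size=32)
    # Step S4: per-pixel classification of the hyperspectral image
    probs = model.predict(image_pixels[..., np.newaxis])
    return probs.argmax(axis=1) + 1  # class labels 1..C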
The original training set contains C classes in total, where c denotes a class label, c = 1, 2, …, C. The original training set D^(0) = {x_1, x_2, …, x_n} contains n training samples, each of which is a pixel vector x_j drawn from the HSI image X ∈ R^(N×P), where N is the number of pixels and P the number of spectral bands, j = 1, 2, …, n. A sample in the training set and its corresponding classification label are represented as the pair (x_j, y_j), where y_j is the classification label of x_j. A small code sketch of this notation follows.
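The sizes in the sketch are assumed for illustration only and are not taken from the application.

import numpy as np

H, W, P, C, n = 145, 145, 200, 16, 512      # assumed example sizes
cube = np.random.rand(H, W, P)              # stand-in for a hyperspectral image
X = cube.reshape(-1, P)                     # N x P matrix, one row per pixel
N = X.shape[0]                              # N = H * W pixels
train_idx = np.random.choice(N, size=n, replace=False)
D0 = X[train_idx]                           # n training samples x_j
y0 = np.random.randint(1, C + 1, size=n)    # classification label y_j of x_j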
As shown in FIG. 2, the step S1 includes:
step S11, in the k-th iteration, initializing the set D_new^(k) to an empty set containing no content, k denoting the cycle number;
step S12, for any training sample X_i in the current training set D^(k), i = 1, 2, …, n, its four directly adjacent nodes being X_i^v, v = 1, 2, 3, 4 (up, down, left and right): if X_i^v is not labeled, it can be collaboratively represented as a linear combination of the current m training samples of D^(k),
X_i^v ≈ D^(k)·α_i^v    (1)
and the coefficient α_i^v is calculated as
α_i^v = argmin_α ‖X_i^v − D^(k)·α‖₂² + λ‖α‖₂²    (2)
whose closed-form solution is
α_i^v = ((D^(k))ᵀD^(k) + λI)⁻¹(D^(k))ᵀX_i^v    (3)
where D^(k) denotes the set of training samples in the k-th iteration cycle, λ is the global regularization parameter, and m is the number of training samples in D^(k);
step S13, calculating, according to the coefficient α_i^v, the fractional abundance p_{i,c}^v with which X_i^v belongs to each class c. Because the sample space is small, an expanded training sample set is obtained by brute-force selection from the original training sample set and, in order to integrate spatial information, the K training samples x̃_1, …, x̃_K spatially closest to X_i^v are selected from the expanded training sample set. The corresponding coefficients of these samples are obtained through formulas (1), (2) and (3) and normalized to β_i = [β_{i,1}, β_{i,2}, …, β_{i,j}, …, β_{i,K}], where β_{i,j} is the regularization coefficient of x̃_j. Formula (4) then calculates the fractional abundance p_{i,c}^v of the sample X_i^v for each class c by accumulating, over the neighbors x̃_j belonging to class c, the normalized coefficients β_{i,j}, weighted by α_j, whose value may be fixed or adjusted according to the training result, and by the spatial distances
λ_j = √((a_i − a_j)² + (b_i − b_j)²)
where λ_j is the distance between the sample X_i^v and the sample x̃_j, a_i and a_j denote their coordinates on the X-axis, and b_i and b_j denote their coordinates on the Y-axis; N_c in formula (4) denotes the number of training samples belonging to class c among the eight training samples adjacent to X_i^v, so that the obtained quantities can be used to calculate the fractional abundance p_{i,c}^v for each class c;
step S14, updating D_new^(k) with the samples X_i^v: if max_c p_{i,c}^v > Td, the sample X_i^v is assigned the class label argmax_c p_{i,c}^v and added to D_new^(k), Td being a preset threshold; in the same way, D_new^(k) is updated with the four direct-neighborhood samples of every training sample in D^(k), yielding the augmented training set D^(k) ∪ D_new^(k) of the k-th cycle;
step S15, if the number of training samples of the augmented training set whose pixels are not surrounded by the augmented training set meets the requirement, taking the augmented training set as the first training set D; otherwise, entering the (k+1)-th cycle.
FIG. 3 depicts the training-set expansion process in a single cycle. Therein, N(D^(k)) denotes the neighborhood set of all current training samples, and A, B, C and D in N(D^(k)) denote four samples, respectively. The transition from D^(k) to N(D^(k)) means that the four sample points directly adjacent to each sample are screened out, and the transition from N(D^(k)) to D_new^(k) shows the calculation result after formula (1). A sketch of one expansion cycle in code is given below.
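The sketch applies formulas (1)-(3) as reconstructed above; the values of the regularization parameter lam and the threshold Td are assumptions, and only the spectral part of the fractional abundance of formula (4) is computed, its spatial term being omitted for brevity.

import numpy as np

def ridge_coefficients(D, x, lam):
    # Formula (3): alpha = (D^T D + lam*I)^(-1) D^T x, representing the
    # pixel x as a linear combination of the columns (samples) of D.
    m = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(m), D.T @ x)

def expand_cycle(D, labels, neighbor_pixels, C, lam=1e-3, Td=0.8):
    # One cycle k: D is P x m (one training sample per column), labels is an
    # integer array with the class labels 1..C of the columns, and
    # neighbor_pixels holds the unlabeled four-direction neighbors X_i^v.
    D_new, new_labels = [], []
    for x in neighbor_pixels:
        alpha = ridge_coefficients(D, x, lam)
        beta = np.abs(alpha) / (np.abs(alpha).sum() + 1e-12)  # normalized beta_i
        p = np.zeros(C)                   # fractional abundance per class
        np.add.at(p, labels - 1, beta)    # spectral part of formula (4) only
        if p.max() > Td:                  # admit confidently labeled samples
            D_new.append(x)
            new_labels.append(int(p.argmax()) + 1)
    if D_new:                             # augmented training set of cycle k
        D = np.column_stack([D] + D_new)
        labels = np.concatenate([labels, np.array(new_labels)])
    return D, labels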
As shown in FIG. 4, the step S2 includes:
step S21, generating an artificially and randomly screened sample set R, i.e. the artificial sample set, on the basis of a subclass of the original training data set (i.e. the first training set D, D = {d_1, d_2, … d_j, …, d_p}):
R = {d_1, d_2, … d_j, …, d_q}
where p ≠ q is allowed. The spectral range of each band of d_j is defined as [db_min : db_max], db_min and db_max respectively denoting the minimum and maximum values of the spectral range of the band; the spectral value of the h-th band of an artificial sample can therefore be randomly selected within [db_min : db_max], where h ∈ {1, 2, 3, …, mn} and mn denotes the number of bands in the hyperspectral data set;
step S22, applying a gradient constraint to filter out the synthetic samples in R that deviate severely from the actual training samples, obtaining the synthetic sample set Ru.
Let d̄ = [d̄_1, d̄_2, …, d̄_mn] be the average sample over the true sample bands; its gradient vector is calculated as follows:
∇d̄ = [d̄_2 − d̄_1, d̄_3 − d̄_2, …, d̄_mn − d̄_(mn−1)]    (5)
and its indicative expression can be written as:
r = [r_1, r_2, …, r_s, …, r_(mn−1)]    (6)
where r_s indicates the sign of the s-th gradient component d̄_(s+1) − d̄_s; r is used to ensure that the randomly synthesized instances have the same trend of variation as the original actual training samples.
According to the above steps, instances can be further selected from R to form the new sample set Ru; the pseudo-code for generating the sample set Ru follows this gradient-screening logic, and a sketch is given below.
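Since the pseudo-code itself is given only as a figure in the original, the following Python sketch reconstructs its apparent logic under stated assumptions: candidates are drawn uniformly within each band's range, and the gradient constraint of formulas (5)-(6) is applied by keeping the q candidates whose band-to-band trend agrees best with the mean sample, because demanding exact agreement over all bands would reject nearly every uniform draw.

import numpy as np

def generate_Ru(D_class, q, n_candidates=5000, seed=0):
    # D_class: p x mn array of real training samples of one class.
    rng = np.random.default_rng(seed)
    lo = D_class.min(axis=0)             # db_min of each band
    hi = D_class.max(axis=0)             # db_max of each band
    d_bar = D_class.mean(axis=0)         # average sample over the bands
    r = np.sign(np.diff(d_bar))          # indicative vector r, formula (6)
    cand = rng.uniform(lo, hi, size=(n_candidates, D_class.shape[1]))
    # Fraction of bands on which each candidate follows the trend of r.
    agree = (np.sign(np.diff(cand, axis=1)) == r).mean(axis=1)
    keep = np.argsort(agree)[::-1][:q]   # the q best-matching candidates
    return cand[keep]                    # synthetic sample set Ru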
step S23, computing the corresponding orthogonal complement subspace projection P = I − D·D^#, where I is an identity matrix and D^# denotes the pseudo-inverse of D; calculating P·Ru to obtain the projections of the sample set Ru onto the orthogonal subspace of the first training set D; and selecting the samples of Ru whose projection values are smaller than Ng as the training set finally obtained by OCSP, i.e. the second training set D', where Ng is a predefined parameter.
From a global perspective, samples exhibiting similar spectral features are likely to belong to the same class; from a local perspective, spatially neighboring pixels are more likely to share the same class label. The training-set expansion strategy of step S2 is therefore feasible; a code sketch of step S23 follows.
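In the sketch, Ng is read as a threshold on the norm of the projection, which matches the wording above, although it could also be read as a sample count; the interface is an assumption.

import numpy as np

def ocsp_select(D, Ru, Ng):
    # D: P x m matrix of training samples; Ru: q x P synthetic samples.
    # P_perp = I - D D^# projects onto the orthogonal complement of span(D).
    P_perp = np.eye(D.shape[0]) - D @ np.linalg.pinv(D)
    proj = np.linalg.norm(P_perp @ Ru.T, axis=0)  # projection value per sample
    return Ru[proj < Ng]                          # keep samples close to span(D)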
In step S3, the data set input to the single-convolutional-layer neural network is FD = {D, D'}. The convolutional neural network is composed of an input layer, a convolutional layer, a max-pooling layer, a fully-connected layer and an output layer; the convolutional layer and the fully-connected layer use tanh as the activation function, the max-pooling layer performs max pooling, and the output layer uses softmax. Each sample pixel is used as an input to the input layer.
The size of the input layer is (n_1, 1), where n_1 denotes the number of bands of the hyperspectral image. The convolutional layer filters the n_1 × 1 input vector with t kernels of size k_1 × 1; the number of nodes in the convolutional layer then becomes t × n_2 × 1, with n_2 = n_1 − k_1 + 1, and there are t × (k_1 + 1) trainable parameters between the input layer and the convolutional layer. The max-pooling layer adopts a kernel of size k_2 × 1 and comprises t × n_3 × 1 nodes, where n_3 = n_2 ÷ k_2. The fully-connected layer comprises n_4 nodes, with t × (n_3 + 1) × n_4 trainable parameters between this layer and the previous one. The final output layer has n_5 nodes, n_5 denoting the number of classes, and has (n_4 + 1) × n_5 trainable parameters.
In the single-convolutional-layer neural network, the convolutional layer and the max-pooling layer serve as feature extractors for the input hyperspectral data set, and the fully-connected layer acts as a trainable classifier.
In this embodiment, the values of the parameters for one network are given, but image sizes in actual tasks may differ, which changes the network parameters; the application is not limited to the given set of parameters. The values of the parameters are determined according to the input data, and a group of optimal solutions is selected after training. This set of parameters is:
n_1 = 200, k_1 = 28, t = 20, k_2 = 5, n_4 = 100, n_5 = 16
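With this parameter set, the network can be realized, for example, with the TensorFlow/Keras API as in the sketch below; the optimizer, the loss, and the flooring of n_3 = n_2 ÷ k_2 to an integer by the pooling layer are assumptions of the sketch rather than specifications of the application.

import tensorflow as tf

def build_single_conv_net(n1=200, k1=28, t=20, k2=5, n4=100, n5=16):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n1, 1)),              # one pixel spectrum
        tf.keras.layers.Conv1D(t, k1, activation="tanh"),  # n2 = n1 - k1 + 1 = 173
        tf.keras.layers.MaxPooling1D(pool_size=k2),        # n3 = 173 // 5 = 34
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n4, activation="tanh"),      # fully-connected layer
        tf.keras.layers.Dense(n5, activation="softmax"),   # one node per class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model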
FIG. 5 is a schematic structural diagram of a hyperspectral image classification apparatus according to an embodiment of the application. As shown in FIG. 5, the hyperspectral image classification apparatus includes:
a first data expansion module 1 configured to expand an original training set by using a semi-supervised learning strategy to obtain a first training set D;
a second data expansion module 2 configured to expand the first training set by using an OCSP algorithm to obtain a second training set D';
a training module 3 configured to train a single-convolutional-layer neural network with FD to obtain a trained single-convolutional-layer neural network, where FD = {D, D'}; and
a classification module 4 configured to classify the hyperspectral image to be classified using the trained single convolutional layer neural network.
As shown in FIG. 6, the first data expansion module 1 includes:
an initialization submodule 11 configured to initialize a set D_new^(k) to an empty set, where k denotes the cycle number;
a coefficient calculation submodule 12 configured to represent each unlabeled training sample X_i^v in the neighborhood of the current training set D^(k) as a linear combination X_i^v ≈ D^(k)·α_i^v of the current m training samples and to calculate the coefficient α_i^v = argmin_α ‖X_i^v − D^(k)·α‖₂² + λ‖α‖₂², where i = 1, 2, …, n, D^(k) denotes the set of training samples in the k-th iteration cycle, λ is the global regularization parameter, and m is the number of training samples in D^(k);
a fractional abundance calculation submodule 13 configured to calculate, according to the coefficient α_i^v, the fractional abundance p_{i,c}^v with which X_i^v belongs to each class c;
an update submodule 14 configured to update D_new^(k) with the samples X_i^v: if max_c p_{i,c}^v > Td, the sample X_i^v is assigned the class label argmax_c p_{i,c}^v and added to D_new^(k), where Td is a preset threshold; D_new^(k) is updated in the same way with the four direct-neighborhood samples of every training sample in D^(k), yielding the augmented training set D^(k) ∪ D_new^(k) of the k-th cycle; and
a judgment submodule 15 configured to take the augmented training set as the first training set D if the number of training samples of the augmented training set whose pixels are not surrounded by the augmented training set meets the requirement, and otherwise to enter the (k+1)-th cycle.
As shown in FIG. 7, the second data expansion module 2 includes:
an artificial sample set generation submodule 21 configured to generate an artificial sample set R according to the spectral range of the training samples in the first training set D;
a filtering submodule 22 configured to apply a gradient constraint to the artificial sample set R to filter the samples in it, obtaining a synthetic sample set Ru; and
a sample selection submodule 23 configured to compute the corresponding orthogonal complement subspace projection P = I − D·D^#, where I is an identity matrix and D^# denotes the pseudo-inverse of D, to calculate P·Ru to obtain the projections of the sample set Ru onto the orthogonal subspace of the first training set D, and to select the samples of Ru whose projection values are smaller than Ng as the training set finally obtained by OCSP, i.e. the second training set D', where Ng is a predefined parameter.
The single-convolutional-layer neural network sequentially comprises an input layer, a convolutional layer, a max-pooling layer, a fully-connected layer and an output layer, wherein the convolutional layer and the fully-connected layer use tanh as the activation function, the max-pooling layer performs max pooling, and the output layer uses softmax as the activation function.
The working principle and effect of the hyperspectral image classification device of this embodiment are the same as those of the hyperspectral image classification method of the embodiments above, and are not repeated here.
A computing device is further provided. Referring to FIG. 8, the computing device comprises a memory 1120, a processor 1110 and a computer program stored in the memory 1120 and executable by the processor 1110; the computer program is stored in a space 1130 for program code in the memory 1120 and, when executed by the processor 1110, implements the method steps 1131 for performing any of the methods according to the present application.
An embodiment of the application also provides a computer-readable storage medium. Referring to FIG. 9, the computer-readable storage medium comprises a storage unit for program code, the storage unit being provided with a program 1131' for performing the steps of the method according to the present application, the program being executed by a processor.
An embodiment of the application also provides a computer program product containing instructions which, when run on a computer, cause the computer to carry out the steps of the method according to the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions which, when loaded and executed by a computer, produce, in whole or in part, the procedures or functions described in accordance with the embodiments of the application. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server or data center to another website, computer, server or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk (SSD)), among others.
Those of skill will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both; to clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program stored in a computer-readable storage medium, the storage medium being a non-transitory medium such as a random access memory, a read-only memory, a flash memory, a hard disk, a solid-state disk, a magnetic tape, a floppy disk, an optical disk, or any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A hyperspectral image classification method comprises the following steps:
expanding the original training set by adopting a semi-supervised learning strategy to obtain a first training set D;
expanding the first training set by adopting an OCSP algorithm to obtain a second training set D';
training a single-convolutional-layer neural network with FD to obtain a trained single-convolutional-layer neural network, where FD = {D, D'};
and classifying the hyperspectral images to be classified by using the trained single convolutional layer neural network.
2. The method of claim 1, wherein the expanding the original training set using the semi-supervised learning strategy to obtain the first training set D comprises:
initializing a set D_new^(k) to an empty set, where k denotes the cycle number;
representing each unlabeled training sample X_i^v in the neighborhood of the current training set D^(k) as a linear combination X_i^v ≈ D^(k)·α_i^v of the current m training samples, and calculating the coefficient α_i^v = argmin_α ‖X_i^v − D^(k)·α‖₂² + λ‖α‖₂², where v denotes the direct neighborhood pixels of the training sample X_i in the four directions up, down, left and right, v = 1, 2, 3, 4, i = 1, 2, …, n, D^(k) denotes the set of training samples in the k-th iteration cycle, λ is the global regularization parameter, and m is the number of training samples in D^(k);
calculating, according to the coefficient α_i^v, the fractional abundance p_{i,c}^v with which X_i^v belongs to each class c;
updating D_new^(k) with the samples X_i^v: if max_c p_{i,c}^v > Td, assigning the sample X_i^v the class label argmax_c p_{i,c}^v and adding the sample to D_new^(k), where Td is a preset threshold; in the same way, updating D_new^(k) with the four direct-neighborhood samples of every training sample in D^(k), thereby obtaining the augmented training set D^(k) ∪ D_new^(k) of the k-th cycle; and
if the number of training samples of the augmented training set whose pixels are not surrounded by the augmented training set meets the requirement, taking the augmented training set as the first training set D; otherwise, entering the (k+1)-th cycle.
3. The method of claim 1 or 2, wherein the expanding the first training set D by using the OCSP algorithm to obtain a second training set D' comprises:
generating an artificial sample set R according to the spectral range of the training samples in the first training set D;
applying a gradient constraint on the artificial sample set R to filter the samples in it, obtaining a synthetic sample set Ru; and
computing the corresponding orthogonal complement subspace projection P = I − D·D^#, where I is an identity matrix and D^# denotes the pseudo-inverse of D; calculating P·Ru to obtain the projections of the sample set Ru onto the orthogonal subspace of the first training set D; and selecting the samples of Ru whose projection values are smaller than Ng as the training set finally obtained by OCSP, i.e. the second training set D', where Ng is a predefined parameter.
4. The method of any one of claims 1-3, wherein the single convolutional layer neural network comprises an input layer, a convolutional layer, a max-pooling layer, a fully-connected layer, and an output layer in that order.
5. The method of claim 4, wherein the convolutional layer and the fully-connected layer use tanh as the activation function, the max-pooling layer performs max pooling, and the output layer uses softmax as the activation function.
6. A hyperspectral image classification apparatus comprising:
a first data expansion module configured to expand an original training set by using a semi-supervised learning strategy to obtain a first training set D;
a second data expansion module configured to expand the first training set by adopting an OCSP algorithm to obtain a second training set D';
a training module configured to train a single-convolutional-layer neural network with FD to obtain a trained single-convolutional-layer neural network, where FD = {D, D'}; and
a classification module configured to classify the hyperspectral image to be classified using the trained single convolutional layer neural network.
7. The apparatus of claim 6, wherein the first data expansion module comprises:
an initialization submodule configured to initialize a set D_new^(k) to an empty set, where k denotes the cycle number;
a coefficient calculation submodule configured to represent each unlabeled training sample X_i^v in the neighborhood of the current training set D^(k) as a linear combination X_i^v ≈ D^(k)·α_i^v of the current m training samples and to calculate the coefficient α_i^v = argmin_α ‖X_i^v − D^(k)·α‖₂² + λ‖α‖₂², where i = 1, 2, …, n, D^(k) denotes the set of training samples in the k-th iteration cycle, λ is the global regularization parameter, and m is the number of training samples in D^(k);
a fractional abundance calculation submodule configured to calculate, according to the coefficient α_i^v, the fractional abundance p_{i,c}^v with which X_i^v belongs to each class c;
an update submodule configured to update D_new^(k) with the samples X_i^v: if max_c p_{i,c}^v > Td, the sample X_i^v is assigned the class label argmax_c p_{i,c}^v and added to D_new^(k), where Td is a preset threshold; D_new^(k) is updated in the same way with the four direct-neighborhood samples of every training sample in D^(k), yielding the augmented training set D^(k) ∪ D_new^(k) of the k-th cycle; and
a judgment submodule configured to take the augmented training set as the first training set D if the number of training samples of the augmented training set whose pixels are not surrounded by the augmented training set meets the requirement, and otherwise to enter the (k+1)-th cycle.
8. The apparatus of claim 6 or 7, wherein the second data expansion module comprises:
an artificial sample set generation submodule configured to generate an artificial sample set R according to the spectral range of the training samples in the first training set D;
a filtering submodule configured to apply a gradient constraint to the artificial sample set R to filter the samples in it, obtaining a synthetic sample set Ru; and
a sample selection submodule configured to compute the corresponding orthogonal complement subspace projection P = I − D·D^#, where I is an identity matrix and D^# denotes the pseudo-inverse of D, to calculate P·Ru to obtain the projections of the sample set Ru onto the orthogonal subspace of the first training set D, and to select the samples of Ru whose projection values are smaller than Ng as the training set finally obtained by OCSP, i.e. the second training set D', where Ng is a predefined parameter.
9. The apparatus of any one of claims 6-8, wherein the single-convolutional-layer neural network comprises an input layer, a convolutional layer, a max-pooling layer, a fully-connected layer and an output layer in this order, the convolutional layer and the fully-connected layer using tanh as the activation function, the max-pooling layer performing max pooling, and the output layer using softmax as the activation function.
10. A computing device comprising a memory, a processor, and a computer program stored in the memory and executable by the processor, wherein the processor implements the method of any of claims 1-5 when executing the computer program.
CN202011498866.1A 2020-12-16 2020-12-16 Hyperspectral image classification method and device and computing equipment Active CN112560960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011498866.1A CN112560960B (en) 2020-12-16 2020-12-16 Hyperspectral image classification method and device and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011498866.1A CN112560960B (en) 2020-12-16 2020-12-16 Hyperspectral image classification method and device and computing equipment

Publications (2)

Publication Number Publication Date
CN112560960A true CN112560960A (en) 2021-03-26
CN112560960B CN112560960B (en) 2024-08-13

Family

ID=75063191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011498866.1A Active CN112560960B (en) 2020-12-16 2020-12-16 Hyperspectral image classification method and device and computing equipment

Country Status (1)

Country Link
CN (1) CN112560960B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886342A (en) * 2014-03-27 2014-06-25 西安电子科技大学 Hyperspectral image classification method based on spectrums and neighbourhood information dictionary learning
CN106780367A (en) * 2016-11-28 2017-05-31 上海大学 HDR photo style transfer methods based on dictionary learning
CN107451616A (en) * 2017-08-01 2017-12-08 西安电子科技大学 Multi-spectral remote sensing image terrain classification method based on the semi-supervised transfer learning of depth
CN108596213A (en) * 2018-04-03 2018-09-28 中国地质大学(武汉) A kind of Classification of hyperspectral remote sensing image method and system based on convolutional neural networks
CN109784392A (en) * 2019-01-07 2019-05-21 华南理工大学 A kind of high spectrum image semisupervised classification method based on comprehensive confidence
CN110298396A (en) * 2019-06-25 2019-10-01 北京工业大学 Hyperspectral image classification method based on deep learning multiple features fusion
CN110766655A (en) * 2019-09-19 2020-02-07 北京航空航天大学 Hyperspectral image significance analysis method based on abundance
CN110852227A (en) * 2019-11-04 2020-02-28 中国科学院遥感与数字地球研究所 Hyperspectral image deep learning classification method, device, equipment and storage medium
CN111414942A (en) * 2020-03-06 2020-07-14 重庆邮电大学 Remote sensing image classification method based on active learning and convolutional neural network
CN111862289A (en) * 2020-08-04 2020-10-30 天津大学 Point cloud up-sampling method based on GAN network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
INMACULADA DÓPIDO et al.: "Semisupervised Self-Learning for Hyperspectral Image Classification", IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 7, 31 July 2013, pages 4032-4044, XP011515823, DOI: 10.1109/TGRS.2012.2228275 *
LE SUN et al.: "Supervised Spectral-Spatial Hyperspectral Image Classification With Weighted Markov Random Fields", IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, 31 March 2015, pages 1490-1503 *
YIFAN ZHANG et al.: "Super-Resolution Classification of Hyperspectral Images with a Small Training Set Using Semi-Supervised Learning", WHISPERS, 27 June 2019, pages 1-5 *
LIU Ying et al.: "Hyperspectral image classification algorithm based on joint spatial-spectral collaborative representation" (in Chinese), Computer Engineering and Design, vol. 41, no. 3, 31 March 2020, pages 815-820 *
LI Sujing: "Semi-supervised ground-object classification method for large-scale hyperspectral data" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology, no. 2017, 15 March 2017, pages 140-1707 *
JIANG Mengying: "Research on hyperspectral image classification algorithms based on random subspace ensembles" (in Chinese), China Master's Theses Full-text Database, Engineering Science and Technology II, no. 2020, 15 February 2020, pages 028-230 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113116363A (en) * 2021-04-15 2021-07-16 西北工业大学 Method for judging hand fatigue degree based on surface electromyographic signals
CN117523345A (en) * 2024-01-08 2024-02-06 武汉理工大学 Target detection data balancing method and device
CN117523345B (en) * 2024-01-08 2024-04-23 武汉理工大学 Target detection data balancing method and device

Also Published As

Publication number Publication date
CN112560960B (en) 2024-08-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant