CN109492593B - Hyperspectral image classification method based on principal component analysis network and space coordinates - Google Patents

Hyperspectral image classification method based on principal component analysis network and space coordinates

Info

Publication number
CN109492593B
Authority
CN
China
Prior art keywords
pixel point
principal component
component analysis
spectral
hyperspectral image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811366518.1A
Other languages
Chinese (zh)
Other versions
CN109492593A (en)
Inventor
慕彩红
刘逸
刁许玲
刘若辰
熊涛
李阳阳
刘敬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201811366518.1A priority Critical patent/CN109492593B/en
Publication of CN109492593A publication Critical patent/CN109492593A/en
Application granted granted Critical
Publication of CN109492593B publication Critical patent/CN109492593B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image classification method based on a principal component analysis network and spatial coordinates, which mainly solves the prior-art problems that the fusion of spatial and spectral information is complex or insufficient and that the computational complexity is high when a principal component analysis network is used for hyperspectral classification. The implementation scheme is as follows: read the dataset of a hyperspectral image; randomly select a training set and a test set from spatial blocks of the dataset; perform dimensionality reduction, normalization and edge-preserving filtering on the spectral information; expand the spatial coordinates and fuse them with the spectral features; train a principal component analysis network to obtain a trained principal component analysis network; input the test set data into the trained principal component analysis network to obtain a feature vector for each pixel point in the test set; and finally obtain the classification result with a Support Vector Machine (SVM). The method reduces the computational complexity, improves the classification effect, and can be applied to target identification in resource exploration, forest-cover monitoring and disaster monitoring.

Description

Hyperspectral image classification method based on principal component analysis network and space coordinates
Technical Field
The invention belongs to the technical field of image processing, and further relates to a hyperspectral image classification method that can be applied to target identification in resource exploration, forest-cover monitoring and disaster monitoring.
Background
The key to hyperspectral image classification is to obtain high classification accuracy with a small number of training samples. Early work classified hyperspectral images mainly using spectral information; in recent years researchers have found that the spatial information of hyperspectral images is also very important, so how to fully exploit the spectral and spatial information at the same time has become the key to improving hyperspectral image classification accuracy.
The paper "R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method" (IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017, 10(5): 1975-1986) proposes an improved principal component analysis network, VCANet, for hyperspectral image classification. After each spectral band of the hyperspectral image is filtered, the spectral information of each pixel point is input into the improved VCANet for feature extraction, and a Support Vector Machine (SVM) is then used for classification. The disadvantages of this method are that the spatial information is not fully utilized, i.e. the spatial information of the hyperspectral image is used only during filtering, and that all spectral dimensions are input into the improved VCANet for training, which results in high computational complexity.
The patent document "Hyperspectral classification method based on fusion of space coordinates and spatial-spectral features" (patent application No. 201710644479.6, application publication No. CN 107451614 A), filed by Xidian University, proposes a hyperspectral image classification method based on the fusion of spatial coordinates and spatial-spectral features. The disadvantage of this method is that the spatial information of the hyperspectral image is not fully utilized: only the spatial coordinates are used, and spatial coordinates classify poorly the ground-object classes whose samples are not concentrated or whose sample amount is small.
Disclosure of Invention
The invention aims to provide a hyperspectral image classification method based on a principal component analysis network and a space coordinate to overcome the defects of the prior art, so that the utilization rate of spectral information and space information is improved, and the computational complexity is reduced.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
(1) inputting a data set corresponding to a hyperspectral image to be classified;
(2) uniformly dividing the input hyperspectral image dataset into 100 small datasets according to the spatial positions of the pixel points in the image, randomly selecting training samples in a fixed proportion for each ground-object class in each small dataset, combining the selected training samples and randomly shuffling them to serve as the training set, and forming the test set from the remaining pixel points;
(3) sequentially performing dimensionality reduction, normalization and filtering processing on an input hyperspectral image to be classified to obtain preprocessed spectral features;
(4) acquiring a spatial coordinate value of each pixel point in the hyperspectral image to be classified, and expanding the spatial coordinates;
(5) fusing the expanded spatial coordinates of each pixel point with the preprocessed spectral characteristics to obtain the fused characteristics of each pixel point;
(6) after 3 × 3 neighborhood blocks are selected for each pixel point of the hyperspectral image, the fusion characteristics of all the pixel points in the neighborhood blocks are arranged according to columns to obtain a two-dimensional characteristic matrix corresponding to each pixel point;
(7) training a principal component analysis network by using the two-dimensional feature matrix corresponding to each pixel point in the training set to obtain a trained principal component analysis network;
(8) inputting the two-dimensional feature matrix corresponding to each pixel point in the test set into a trained principal component analysis network to obtain a feature vector corresponding to each pixel point in the test set;
(9) and inputting the feature vector corresponding to each pixel point in the test set into a Support Vector Machine (SVM) for classification to obtain a classification result of each pixel point in the test set.
Compared with the prior art, the invention has the following advantages:
first, the invention overcomes the problems of deep learning networks that require empirical knowledge, have complex network structures and have high training complexity, because the invention uses a principal component analysis network, which improves the training efficiency.
Secondly, the invention fuses the spectral information with the spatial coordinates after dimensionality reduction, which solves the problem of high computational complexity caused by directly inputting all spectral dimensions in the existing technique that uses a principal component analysis network for hyperspectral classification; in addition, while fusing the spatial coordinates, the 3 × 3 neighborhood of each pixel point is selected as its input information, so that the spatial information is fully utilized and the loss of classification accuracy caused by reducing the spectral dimensionality is compensated.
Thirdly, the invention uses edge-preserving filtering and spatial coordinates at the same time, so that it achieves a good classification effect whether or not the samples to be classified are spatially concentrated: spatial coordinates classify well the ground-object classes whose samples are concentrated, but for classes with few samples or with scattered distributions there is still room for improvement, and edge-preserving filtering preserves the information of such small-sample or scattered classes well, thereby overcoming the problem of low classification accuracy for those samples.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
fig. 2 is a block diagram of a conventional principal component analysis network.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
Referring to fig. 1, the present invention is embodied as follows.
Step 1, inputting a data set corresponding to a hyperspectral image to be classified.
Step 2, acquiring a training set and a test set.
(2a) uniformly dividing the input hyperspectral image dataset into 100 small datasets according to the spatial positions of the pixel points in the image;
(2b) randomly selecting training samples in the same proportion for each ground-object class in each small dataset;
(2c) combining the selected training samples and randomly shuffling them to serve as the training set, with the remaining pixel points forming the test set (a code sketch of this step is given below).
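The block-wise sampling of step 2 could be implemented as in the following minimal NumPy sketch; the function and variable names (split_train_test, labels, train_ratio) are ours and not from the patent, and the sketch assumes the ground-truth labels are given as an H × W integer array in which 0 marks unlabeled pixels.

```python
import numpy as np

def split_train_test(labels, train_ratio=0.1, grid=10, seed=0):
    """Split labeled pixels into training/test sets per Step 2.

    labels: (H, W) integer array, 0 = unlabeled, 1..C = ground-object class.
    Returns flat (row-major) pixel indices for the training and test sets.
    """
    rng = np.random.default_rng(seed)
    H, W = labels.shape
    rows = np.array_split(np.arange(H), grid)
    cols = np.array_split(np.arange(W), grid)
    train_idx = []
    for r in rows:                           # grid x grid = 100 spatial blocks
        for c in cols:
            block = labels[np.ix_(r, c)]
            for cls in np.unique(block):
                if cls == 0:
                    continue                 # skip unlabeled pixels
                rr, cc = np.nonzero(block == cls)
                flat = r[rr] * W + c[cc]     # flat indices in the full image
                n_train = max(1, int(round(train_ratio * flat.size)))
                train_idx.extend(rng.choice(flat, n_train, replace=False))
    train_idx = rng.permutation(np.array(train_idx))       # random shuffling
    labeled = np.flatnonzero(labels.ravel() > 0)
    test_idx = np.setdiff1d(labeled, train_idx)
    return train_idx, test_idx
```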
Step 3, preprocessing the input image.
(3a) Reducing the dimensionality of the input image.
Commonly used dimensionality-reduction methods in image processing include Principal Component Analysis (PCA), Locally Linear Embedding (LLE) and Linear Discriminant Analysis (LDA). This embodiment adopts Principal Component Analysis (PCA) to reduce the dimensionality of the input hyperspectral image, with the following specific steps:
(3a1) expanding all dimensional spectrums of each pixel point in the hyperspectral image matrix into a spectrum characteristic vector, and arranging the spectrum characteristic vectors of all the pixel points according to rows to form a spectrum characteristic matrix;
(3a2) averaging the elements in the spectral feature matrix according to columns, and subtracting the average value of the corresponding column of the elements in the spectral feature matrix from each element in the spectral feature matrix respectively to obtain a mean-removed spectral feature matrix;
(3a3) solving the covariance of each two columns of elements in the spectrum characteristic matrix after mean value removal, and constructing a covariance matrix of the spectrum characteristic matrix;
(3a4) solving the characteristic equation of the covariance matrix to obtain all of its eigenvalues and the eigenvectors corresponding to them one-to-one;
(3a5) sorting all eigenvalues from largest to smallest, selecting the first 3 eigenvalues, and arranging the eigenvectors corresponding to these 3 eigenvalues column by column to form a principal eigenvector matrix;
(3a6) projecting the input hyperspectral image matrix onto the principal eigenvector matrix to obtain the hyperspectral image after dimension reduction (a code sketch of steps (3a1)-(3a6) follows);
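A minimal NumPy sketch of steps (3a1)-(3a6), assuming the hyperspectral image is stored as an (H, W, B) cube; the function name pca_reduce and the other identifiers are illustrative, and the default of 3 components follows the step text (the experiments later use 35 or 20 components).

```python
import numpy as np

def pca_reduce(cube, n_components=3):
    """Steps (3a1)-(3a6): reduce the spectral dimension of an (H, W, B) cube."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)   # (3a1) one spectral vector per pixel, stacked row-wise
    X_centered = X - X.mean(axis=0)              # (3a2) remove the column-wise mean
    cov = np.cov(X_centered, rowvar=False)       # (3a3) B x B covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # (3a4) eigenvalues and eigenvectors
    order = np.argsort(eigvals)[::-1][:n_components]   # (3a5) keep the largest eigenvalues
    P = eigvecs[:, order]                        # principal eigenvector matrix (B x n_components)
    reduced = X_centered @ P                     # (3a6) project onto the principal axes
    return reduced.reshape(H, W, n_components)
```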
(3b) Normalizing the hyperspectral image to be classified after dimension reduction.
Common normalization methods include min-max normalization and Z-score normalization; this example uses min-max normalization, expressed as (a short code sketch follows the formula):

x'_{ij} = (x_{ij} - x_i^{min}) / (x_i^{max} - x_i^{min})

where x_{ij} is the j-th pixel value in the i-th dimension of the spectral image after dimension reduction, x_i^{min} is the minimum pixel value in the i-th dimension of the spectral image after dimension reduction, and x_i^{max} is the maximum pixel value in the i-th dimension of the spectral image after dimension reduction;
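Step (3b) can be realized band by band as in the following short sketch (the names are illustrative, and the small epsilon guarding against a constant band is our addition, not part of the patent formula):

```python
import numpy as np

def minmax_normalize(cube):
    """Step (3b): min-max normalize each spectral dimension to [0, 1]."""
    mins = cube.min(axis=(0, 1), keepdims=True)
    maxs = cube.max(axis=(0, 1), keepdims=True)
    return (cube - mins) / (maxs - mins + 1e-12)   # epsilon avoids division by zero on constant bands
```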
(3c) filtering the normalized hyperspectral image to be classified:
the filter commonly used in the image processing field includes mean value filter, gaussian filter, median filter, bilateral filter, edge preserving filter, etc., the RGF edge preserving filter is one of the edge preserving filters, it can preserve the edge information well while processing the image smoothly, this example adopts the RGF edge preserving filter to process the filter, its formula is as follows:
Figure BDA0001868698570000045
wherein the content of the first and second substances,
Figure BDA0001868698570000046
p is a pixel, q is a pixel in the neighborhood of pixel p, σsIs the spatial weight, σrIs the range weight, t is the number of iterations, I (q) is the pixel value of pixel point q in the input image, N (p) is the neighborhood of pixel point p,
Figure BDA0001868698570000047
Jt(p) is the value of pixel point p after the t-th iteration, Jt(q) is the value of pixel point q after the t-th iteration, Jt+1(p) is the value of pixel point p for iteration t +1 times.
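A direct, unoptimized sketch of the above rolling-guidance iteration for one spectral band is given below. The parameter values, the constant initialization of J (which makes the first pass a plain Gaussian-weighted average) and the wrap-around border handling via np.roll are simplifying assumptions of ours, not specifications of the patent.

```python
import numpy as np

def rolling_guidance_filter(band, sigma_s=3.0, sigma_r=0.1, iters=4, radius=6):
    """Apply the RGF iteration above to one 2-D band with values in [0, 1]."""
    H, W = band.shape
    I = band.astype(np.float64)
    J = np.zeros_like(I)                      # J^1: constant guidance, so the first pass is Gaussian-like
    # spatial weights of the (2*radius+1)^2 neighbourhood N(p)
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_s ** 2))
    for _ in range(iters):
        num = np.zeros_like(I)
        den = np.zeros_like(I)
        for k in range(dy.size):
            oy, ox = int(dy.flat[k]), int(dx.flat[k])
            Jq = np.roll(J, (oy, ox), axis=(0, 1))      # guidance value J^t(q) at the shifted neighbour
            Iq = np.roll(I, (oy, ox), axis=(0, 1))      # input value I(q) at the shifted neighbour
            w = spatial.flat[k] * np.exp(-(J - Jq) ** 2 / (2 * sigma_r ** 2))
            num += w * Iq
            den += w                                    # accumulates the normalization factor K_p
        J = num / den                                   # J^{t+1}(p)
    return J
```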
Step 4, expanding the spatial coordinates.
(4a) Acquiring a spatial coordinate value of each pixel point in an input hyperspectral image;
(4b) expanding the spatial coordinates: for a pixel point p, let its spectral feature after dimension reduction by principal component analysis be p = (p_1, p_2, ..., p_m) and its coordinate value be (x, y); the expanded spatial coordinate value is then (x, x, ..., x, y, y, ..., y), where m is the spectral dimension of the input hyperspectral image after dimension reduction, and the number of copies of x and y in the expanded spatial coordinate value is an optimal value selected according to the input hyperspectral image to be classified.
Step 5, fusing the spatial coordinates and the spectral features.
Fusing the expanded spatial coordinates with the preprocessed spectral features to obtain the fusion feature of each pixel point, i.e. concatenating the expanded spatial coordinates of the pixel point p with its preprocessed spectral features to obtain the fusion feature q of the pixel point p (a code sketch of steps 4-5 is given below):
q = (p_1, p_2, ..., p_m, x, x, ..., x, y, y, ..., y).
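Steps 4-5 could look as follows in NumPy. Scaling the coordinates to [0, 1] before concatenation is our assumption (the patent only states that the coordinates are expanded and concatenated), and n_repeat = 2 reproduces the 4-dimensional coordinate expansion used in the experiments.

```python
import numpy as np

def fuse_spectral_and_coordinates(reduced_cube, n_repeat=2):
    """Steps 4-5: append expanded (x, y) coordinates to each pixel's spectral features.

    reduced_cube: (H, W, m) preprocessed spectral features.
    n_repeat: number of copies of x and of y to append (2 + 2 = 4 dimensions here).
    """
    H, W, m = reduced_cube.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    xs /= xs.max()                       # assumption: scale coordinates to the spectra's [0, 1] range
    ys /= ys.max()
    coord = np.concatenate([np.repeat(xs[..., None], n_repeat, axis=-1),
                            np.repeat(ys[..., None], n_repeat, axis=-1)], axis=-1)
    return np.concatenate([reduced_cube, coord], axis=-1)   # q = (p1..pm, x..x, y..y)
```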
Step 6, neighborhood block extraction.
(6a) Selecting 3 × 3 neighborhood blocks for each pixel point of the input hyperspectral image;
(6b) arranging the fusion features of all the pixel points in the 3 x 3 neighborhood blocks according to columns to obtain a two-dimensional feature matrix corresponding to each pixel point.
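A sketch of step 6, assuming the fused features from step 5 are stored as an (H, W, d) array; edge pixels are handled here by replicate padding, which the patent does not specify.

```python
import numpy as np

def neighborhood_matrices(fused, size=3):
    """Step 6: for every pixel, stack the fused features of its size x size
    neighbourhood column by column into a 2-D matrix of shape (d, size*size)."""
    H, W, d = fused.shape
    pad = size // 2
    padded = np.pad(fused, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    mats = np.empty((H, W, d, size * size), dtype=fused.dtype)
    for i in range(size):
        for j in range(size):
            mats[..., i * size + j] = padded[i:i + H, j:j + W, :]
    return mats            # mats[r, c] is the (d, 9) matrix for pixel (r, c)
```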
Step 7, training a principal component analysis network.
The principal component analysis network has a structure shown in fig. 2, and comprises two layers of PCA convolutional layers and a feature extraction layer, namely, a first layer of PCA convolutional layers → a second layer of PCA convolutional layers → a third layer of feature extraction layer, wherein the convolutional layers are used for performing mean value removal and convolutional filtering on an input image, and the feature extraction layer is used for performing binarization and histogram operations on the image processed by the convolutional layers and obtaining a feature vector of the input image.
The principal component analysis network is trained in this step as follows:
(7a) inputting a two-dimensional feature matrix corresponding to each pixel point in a training set into a first layer of a principal component analysis network, performing block taking and mean value removing operation on the input matrices, obtaining 8 filters of the principal component analysis network by using a principal component analysis method, and performing convolution on each input matrix and the 8 filters respectively to obtain a first layer feature matrix of an updated training set;
(7b) inputting the first layer feature matrix of the updated training set into the second layer of the principal component analysis network, and repeating the step (7a) to obtain a second layer feature matrix of the updated training set;
(7c) and carrying out binarization and block histogram operation on the second-layer feature matrix of the updated training set to obtain a trained principal component analysis network.
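The following is a compact, simplified sketch of the PCANet training and feature extraction of steps 7-8, not the authors' exact implementation: filters are learned as the leading eigenvectors of mean-removed patch covariances, the stage-2 maps are binarized and hashed into a decimal code, and, because each input here is only a small (d, 9) matrix, a single histogram per stage-1 branch replaces PCANet's usual non-overlapping block histograms.

```python
import numpy as np
from scipy.signal import convolve2d

def extract_patches(img, k):
    """All k x k patches of a 2-D matrix, one flattened patch per column, patch mean removed."""
    H, W = img.shape
    cols = np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1)
                     for j in range(W - k + 1)]).T          # (k*k, n_patches)
    return cols - cols.mean(axis=0, keepdims=True)

def learn_pca_filters(images, k=3, n_filters=8):
    """One PCANet stage: PCA over all mean-removed patches of the training inputs."""
    cols = np.hstack([extract_patches(im, k) for im in images])
    cov = cols @ cols.T / cols.shape[1]
    _, vecs = np.linalg.eigh(cov)                            # ascending eigenvalues
    return [vecs[:, -(i + 1)].reshape(k, k) for i in range(n_filters)]

def pcanet_features(images, f1, f2, n_bins=None):
    """Binarize the stage-2 maps of each stage-1 branch, hash them into a decimal
    code image, histogram it, and concatenate the histograms over branches."""
    n_bins = n_bins or 2 ** len(f2)
    feats = []
    for im in images:
        branch_hists = []
        for m1 in (convolve2d(im, f, mode='same') for f in f1):        # stage-1 maps
            maps2 = [convolve2d(m1, g, mode='same') for g in f2]       # stage-2 maps
            code = sum((m > 0).astype(np.int64) << i                   # binarize and weight
                       for i, m in enumerate(maps2))
            hist, _ = np.histogram(code, bins=n_bins, range=(0, n_bins))
            branch_hists.append(hist)
        feats.append(np.concatenate(branch_hists).astype(np.float64))
    return np.vstack(feats)                                            # one feature vector per pixel

# Training (step 7), with `train_mats` the list of (d, 9) matrices from step 6:
# f1 = learn_pca_filters(train_mats, k=3, n_filters=8)
# maps1 = [convolve2d(m, f, mode='same') for m in train_mats for f in f1]
# f2 = learn_pca_filters(maps1, k=3, n_filters=8)
# train_feats = pcanet_features(train_mats, f1, f2)        # step 8 uses the same call on the test set
```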
Step 8, inputting the two-dimensional feature matrix corresponding to each pixel point in the test set into the trained principal component analysis network to obtain the feature vector corresponding to each pixel point in the test set.
Step 9, inputting the feature vector corresponding to each pixel point in the test set into a Support Vector Machine (SVM) for classification to obtain the classification result of each pixel point in the test set.
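Steps 8-9 then reduce to feeding the feature vectors to an SVM. The sketch below uses scikit-learn; the kernel and C value are illustrative choices, and the variable names are ours, not parameters given in the patent.

```python
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

# train_feats / test_feats: PCANet feature vectors from steps 7-8 (hypothetical names)
svm = SVC(kernel='rbf', C=100, gamma='scale')   # hyperparameters are illustrative only
svm.fit(train_feats, train_labels)
pred = svm.predict(test_feats)
print('OA:', accuracy_score(test_labels, pred))          # overall classification accuracy
print('kappa:', cohen_kappa_score(test_labels, pred))    # consistency coefficient k
```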
The effect of the invention is further explained by combining the following simulation experiments:
1. simulation experiment conditions are as follows:
the hardware test platform adopted by the simulation experiment of the invention is as follows: the processor is InterXeon E5-2630M, the main frequency is 2.20GHz, and the memory is 64 GB; the software platform is as follows: windows 10 Enterprise edition 64 bit operating system and Matlab R2018a were subjected to simulation testing.
The hyperspectral image datasets used in the experiments are the Indian Pines dataset and the Pavia University dataset. The Indian Pines image has a size of 145 × 145, 200 spectral bands and 16 classes of ground objects; the class and number of samples of each ground-object class are shown in Table 1. The Pavia University image has a size of 610 × 340, 103 spectral bands and 9 classes of ground objects; the class and number of samples of each ground-object class are shown in Table 2.
TABLE 1 Indian Pines sample categories and quantities
Class label Ground-object class Number of samples
1 Alfalfa 46
2 Corn-notill 1428
3 Corn-mintill 830
4 Corn 237
5 Grass-pasture 483
6 Grass-trees 730
7 Grass-pasture-mowed 28
8 Hay-windrowed 478
9 Oats 20
10 Soybean-notill 972
11 Soybean-mintill 2455
12 Soybean-clean 593
13 Wheat 205
14 Woods 1265
15 Buildings-grass-trees-drives 386
16 Stone-steel-towers 93
TABLE 2 Pavia University sample categories and quantities
Class label Ground-object class Number of samples
1 Asphalt 6631
2 Meadows 18649
3 Gravel 2099
4 Trees 3064
5 Sheets 1345
6 Bare soil 5029
7 Bitumen 1330
8 Bricks 3682
9 Shadows 947
On the Indian Pines dataset, the method selects the first 35 dimensions obtained by principal component analysis as the spectral features and expands the spatial coordinates to 4 dimensions; on the Pavia University dataset, the method selects the first 20 dimensions obtained by Principal Component Analysis (PCA) as the spectral features and expands the spatial coordinates to 4 dimensions.
2. Simulation experiment contents:
experiment one:
To verify the effectiveness of the proposed method, the classification results of this example are compared with those of three existing hyperspectral classification methods on the two hyperspectral datasets; the results are shown in Tables 3 and 4.
These three existing methods are:
1) the classical Support Vector Machine (SVM) method for hyperspectral image classification, which directly classifies the spectral information with an SVM;
2) NSSNet, a nonlinear spectral-spatial network method, in which the spectral information is converted into an image, i.e. a matrix, and input into a PCANet to obtain spectral features; at the same time, the spectral information of each pixel point and of the pixel points in its neighborhood forms a matrix that is input into a principal component analysis network PCANet to extract spatial features; the spectral and spatial features are then fused and input into a support vector machine to obtain the classification result;
3) R-VCANet, a deep-learning-based hyperspectral image classification method proposed in 2017 by Pan Bin et al., in which all spectral dimensions are filtered, the filtered spectral information is converted into an image, i.e. a matrix, and input into an improved principal component analysis network VCANet to extract features, and finally the extracted features are classified with a support vector machine.
The overall classification accuracy OA, the average classification accuracy AA and the kappa coefficient k of the three existing methods and of the present invention on the two hyperspectral datasets are compared in Table 3.
TABLE 3 comparison of the Classification accuracy of the prior art and the present invention
(The per-method accuracy values of Table 3 are given as an image in the original document and are not reproduced here.)
In table 3, OA is the overall classification accuracy of all the classification results in the test set, AA is the average classification accuracy of each classification result in the test set, and k is a coefficient for measuring consistency.
As can be seen from Table 3, on both the Indian Pines dataset and the Pavia University dataset, the classification results of the proposed method are significantly better than those of the three existing methods in all 3 classification-accuracy indexes.
The time required for the classification of the present invention is compared to the time required for the classification of the prior R-VCANet method, as shown in Table 4.
TABLE 4 comparison of R-VCANet with the invention in terms of time(s) required for operation
Data set R-VCANet The invention
Indian Pines 2390.25 186.92
Pavia University 14855.22 625.25
As can be seen from Table 4, the present invention greatly shortens the time required for classification compared to the prior art R-VCANet.
In conclusion, NSSNet, R-VCANet and the proposed method are all improvements based on the principal component analysis network, but the former two methods have extremely long running times because they classify all spectral dimensions, whereas the proposed method uses Principal Component Analysis (PCA) to reduce the dimensionality of the spectral information, which largely preserves the spectral information while greatly reducing the computational complexity. In addition, by introducing the spatial coordinates and the neighborhood information of the pixel points, the spatial information of the image is increased and fully utilized, which compensates for the loss of classification accuracy caused by reducing the spectral dimensionality.
Experiment two:
This experiment compares the present invention with the existing hyperspectral classification method based on the fusion of spatial coordinates and spatial-spectral features (patent application No. 201710644479.6, application publication No. CN 107451614 A), denoted SPE-SPA-SVM below. In the comparison experiment, the experimental conditions are kept consistent with those of that patent: the dataset is the Indian Pines hyperspectral dataset and the proportion of training samples is 10%. The specific comparison results are shown in Table 5:
TABLE 5 comparison of the classification accuracy of the present invention and SPE-SPA-SVM
Evaluation index Comparison method The invention
OA(%) 98.71 98.54
AA(%) 96.05 97.63
k 0.9853 0.9833
The Indian Pines image contains several classes with very few samples; for example, as shown in Table 1, class 9 (Oats) has only 20 samples, so after selecting 10% of them as training samples only 2 training samples remain. How to correctly predict classes with so few samples is therefore a difficult problem in Indian Pines image classification.
As can be seen from Table 5, although the evaluation indexes OA and k of the present invention are slightly lower than those of SPE-SPA-SVM, the average accuracy AA of the present invention is clearly better, i.e. the classes with few samples are classified better. Compared with SPE-SPA-SVM, the present invention uses edge-preserving filtering, so more pixel information of the classes with few samples is preserved, which improves the average classification accuracy AA.
By combining the result analysis of the first experiment and the second experiment, the method provided by the invention can effectively solve the problem of high calculation complexity when the principal component analysis network is used for hyperspectral classification, and can solve the problem of low average classification accuracy AA when spatial coordinates are used for hyperspectral classification.

Claims (3)

1. A hyperspectral image classification method based on principal component analysis network and space coordinates is characterized by comprising the following steps:
(1) inputting a data set corresponding to a hyperspectral image to be classified;
(2) uniformly dividing the input hyperspectral image dataset into 100 small datasets according to the spatial positions of the pixel points in the image, randomly selecting training samples in a fixed proportion for each ground-object class in each small dataset, combining the selected training samples and randomly shuffling them to serve as the training set, and forming the test set from the remaining pixel points;
(3) sequentially performing dimensionality reduction, normalization and filtering processing on an input hyperspectral image to be classified to obtain preprocessed spectral features; the dimensionality reduction of the hyperspectral image to be classified is realized by using a Principal Component Analysis (PCA), and the method comprises the following steps of:
(3a) expanding all dimensional spectrums of each pixel point in the hyperspectral image matrix into a spectrum characteristic vector, and arranging the spectrum characteristic vectors of all the pixel points according to rows to form a spectrum characteristic matrix;
(3b) averaging the elements in the spectral feature matrix according to columns, and subtracting the average value of the corresponding column of the elements in the spectral feature matrix from each element in the spectral feature matrix;
(3c) solving the covariance of each two columns of elements in the spectral feature matrix to construct a covariance matrix of the spectral feature matrix;
(3d) solving the characteristic equation of the covariance matrix to obtain all of its eigenvalues and the eigenvectors corresponding to them one-to-one;
(3e) sorting all eigenvalues from largest to smallest, selecting the first 3 eigenvalues, and arranging the eigenvectors corresponding to these 3 eigenvalues column by column to form a principal eigenvector matrix;
(3f) projecting the hyperspectral image matrix onto the principal eigenvector matrix to obtain the hyperspectral image after dimension reduction;
filtering each dimension of the hyperspectral image to be classified after dimension reduction and normalization by using RGF edge-preserving filtering, the formula of which is as follows:

J^{t+1}(p) = (1 / K_p) * Σ_{q ∈ N(p)} exp( -‖p − q‖² / (2σ_s²) − (J^t(p) − J^t(q))² / (2σ_r²) ) · I(q),

with the normalization factor

K_p = Σ_{q ∈ N(p)} exp( -‖p − q‖² / (2σ_s²) − (J^t(p) − J^t(q))² / (2σ_r²) ),

where p is a pixel point, q is a pixel point in the neighborhood of p, σ_s is the spatial weight, σ_r is the range weight, t is the iteration number, I(q) is the pixel value of pixel point q in the input image, N(p) is the neighborhood of pixel point p, J^t(p) and J^t(q) are the values of pixel points p and q after the t-th iteration, and J^{t+1}(p) is the value of pixel point p after (t+1) iterations;
(4) acquiring a spatial coordinate value of each pixel point in the hyperspectral image to be classified, and expanding the spatial coordinates;
(5) fusing the expanded spatial coordinates of each pixel point with the preprocessed spectral characteristics to obtain the fused characteristics of each pixel point; the implementation is as follows:
for a pixel point p, let the spectral feature of the pixel point p after dimension reduction by principal component analysis be p = (p_1, p_2, ..., p_m) and its coordinate value be (x, y); the expanded spatial coordinate value is then (x, x, ..., x, y, y, ..., y);
connecting the expanded space coordinate of the pixel point p and the processed spectrum characteristic in series to obtain a fusion characteristic q of the pixel point p as follows:
q = (p_1, p_2, ..., p_m, x, x, ..., x, y, y, ..., y),
wherein m is the spectral dimension of the input hyperspectral image after dimension reduction, and the number of x and y in the expanded spatial coordinate values is an optimal value selected according to the input hyperspectral image to be classified;
(6) after 3 × 3 neighborhood blocks are selected for each pixel point of the hyperspectral image, the fusion characteristics of all the pixel points in the neighborhood blocks are arranged according to columns to obtain a two-dimensional characteristic matrix corresponding to each pixel point;
(7) training a principal component analysis network by using the two-dimensional feature matrix corresponding to each pixel point in the training set to obtain a trained principal component analysis network;
(8) inputting the two-dimensional feature matrix corresponding to each pixel point in the test set into a trained principal component analysis network to obtain a feature vector corresponding to each pixel point in the test set;
(9) and inputting the feature vector corresponding to each pixel point in the test set into a Support Vector Machine (SVM) for classification to obtain a classification result of each pixel point in the test set.
2. The method according to claim 1, wherein the hyperspectral image to be classified after the dimensionality reduction is normalized in (3) by the following formula:
x'_{ij} = (x_{ij} - x_i^{min}) / (x_i^{max} - x_i^{min})

where x_{ij} is the j-th pixel value in the i-th dimension of the spectral image after dimension reduction, x_i^{min} is the minimum pixel value in the i-th dimension of the spectral image after dimension reduction, and x_i^{max} is the maximum pixel value in the i-th dimension of the spectral image after dimension reduction.
3. The method of claim 1, wherein the principal component analysis network is trained in (7) by using the two-dimensional feature matrix corresponding to each pixel point in the training set, and the method is implemented as follows:
(7a) inputting a two-dimensional feature matrix corresponding to each pixel point in a training set into a first layer of a principal component analysis network, performing block taking and mean value removing operation on the input matrices, obtaining 8 filters of the principal component analysis network by using a principal component analysis method, and performing convolution on each input matrix and the 8 filters respectively to obtain an updated first layer feature matrix of the training set;
(7b) inputting the updated first-layer feature matrix of the training set into a second layer of the principal component analysis network, and repeating the step (7a) to obtain the updated second-layer feature matrix of the training set;
(7c) and carrying out binarization and block histogram operation on the second-layer feature matrix of the updated training set to obtain a trained principal component analysis network.
CN201811366518.1A 2018-11-16 2018-11-16 Hyperspectral image classification method based on principal component analysis network and space coordinates Active CN109492593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811366518.1A CN109492593B (en) 2018-11-16 2018-11-16 Hyperspectral image classification method based on principal component analysis network and space coordinates

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811366518.1A CN109492593B (en) 2018-11-16 2018-11-16 Hyperspectral image classification method based on principal component analysis network and space coordinates

Publications (2)

Publication Number Publication Date
CN109492593A CN109492593A (en) 2019-03-19
CN109492593B true CN109492593B (en) 2021-09-10

Family

ID=65695078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811366518.1A Active CN109492593B (en) 2018-11-16 2018-11-16 Hyperspectral image classification method based on principal component analysis network and space coordinates

Country Status (1)

Country Link
CN (1) CN109492593B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084239B (en) * 2019-04-10 2022-09-06 中国科学技术大学 Method for reducing overfitting of network training during off-line handwritten mathematical formula recognition
CN109977030B (en) * 2019-04-26 2022-04-19 北京信息科技大学 Method and device for testing deep random forest program
CN111881933B (en) * 2019-06-29 2024-04-09 浙江大学 Hyperspectral image classification method and system
CN110298414B (en) * 2019-07-09 2022-12-06 西安电子科技大学 Hyperspectral image classification method based on denoising combination dimensionality reduction and guided filtering
CN110596017B (en) * 2019-09-12 2022-03-08 生态环境部南京环境科学研究所 Hyperspectral image soil heavy metal concentration assessment method based on space weight constraint and variational self-coding feature extraction
CN110781974A (en) * 2019-10-31 2020-02-11 上海融军科技有限公司 Dimension reduction method and system for hyperspectral image
CN111639697B (en) * 2020-05-27 2023-03-24 西安电子科技大学 Hyperspectral image classification method based on non-repeated sampling and prototype network
CN112784777B (en) * 2021-01-28 2023-06-02 西安电子科技大学 Unsupervised hyperspectral image change detection method based on countermeasure learning
CN113361407A (en) * 2021-06-07 2021-09-07 上海海洋大学 PCANet-based space spectrum feature and hyperspectral sea ice image combined classification method
CN113095305B (en) * 2021-06-08 2021-09-07 湖南大学 Hyperspectral classification detection method for medical foreign matters

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095864A (en) * 2015-07-16 2015-11-25 西安电子科技大学 Aurora image detection method based on deep learning two-dimensional principal component analysis network
CN105184309A (en) * 2015-08-12 2015-12-23 西安电子科技大学 Polarization SAR image classification based on CNN and SVM
CN105868760A (en) * 2016-03-11 2016-08-17 信阳农林学院 Pattern recognition method and system
CN105913081A (en) * 2016-04-08 2016-08-31 西安电子科技大学 Improved PCAnet-based SAR image classification method
CN106469316A (en) * 2016-09-07 2017-03-01 深圳大学 The sorting technique of the high spectrum image based on super-pixel level information fusion and system
CN107451614A (en) * 2017-08-01 2017-12-08 西安电子科技大学 The hyperspectral classification method merged based on space coordinates with empty spectrum signature
CN107798286A (en) * 2017-07-13 2018-03-13 西安电子科技大学 High spectrum image evolution sorting technique based on marker samples position
CN107871132A (en) * 2017-10-31 2018-04-03 广东交通职业技术学院 A kind of hyperspectral image classification method of the adaptive optimizing of space characteristics
CN108171122A (en) * 2017-12-11 2018-06-15 南京理工大学 The sorting technique of high-spectrum remote sensing based on full convolutional network
CN108460342A (en) * 2018-02-05 2018-08-28 西安电子科技大学 Hyperspectral image classification method based on convolution net and Recognition with Recurrent Neural Network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198463B (en) * 2013-04-07 2014-08-27 北京航空航天大学 Spectrum image panchromatic sharpening method based on fusion of whole structure and space detail information

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095864A (en) * 2015-07-16 2015-11-25 西安电子科技大学 Aurora image detection method based on deep learning two-dimensional principal component analysis network
CN105184309A (en) * 2015-08-12 2015-12-23 西安电子科技大学 Polarization SAR image classification based on CNN and SVM
CN105868760A (en) * 2016-03-11 2016-08-17 信阳农林学院 Pattern recognition method and system
CN105913081A (en) * 2016-04-08 2016-08-31 西安电子科技大学 Improved PCAnet-based SAR image classification method
CN106469316A (en) * 2016-09-07 2017-03-01 深圳大学 The sorting technique of the high spectrum image based on super-pixel level information fusion and system
CN107798286A (en) * 2017-07-13 2018-03-13 西安电子科技大学 High spectrum image evolution sorting technique based on marker samples position
CN107451614A (en) * 2017-08-01 2017-12-08 西安电子科技大学 The hyperspectral classification method merged based on space coordinates with empty spectrum signature
CN107871132A (en) * 2017-10-31 2018-04-03 广东交通职业技术学院 A kind of hyperspectral image classification method of the adaptive optimizing of space characteristics
CN108171122A (en) * 2017-12-11 2018-06-15 南京理工大学 The sorting technique of high-spectrum remote sensing based on full convolutional network
CN108460342A (en) * 2018-02-05 2018-08-28 西安电子科技大学 Hyperspectral image classification method based on convolution net and Recognition with Recurrent Neural Network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Hyperspectral Image Classification Based on Nonlinear Spectral-Spatial Network; Bin Pan et al.; IEEE Geoscience and Remote Sensing Letters; 30 Sep. 2016; vol. 13, no. 12; pp. 1782-1786 *
R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method; Bin Pan et al.; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing; 14 Feb. 2017; vol. 10, no. 5; pp. 1975-1986 *
SAR Image Change Detection Using PCANet Guided by Saliency Detection; Mengke Li et al.; IEEE Geoscience and Remote Sensing Letters; 5 Nov. 2018; vol. 16, no. 3; pp. 402-406 *
Spectral and Spatial Classification of Hyperspectral Data Using SVMs and Morphological Profiles; Mathieu Fauvel et al.; IEEE Transactions on Geoscience and Remote Sensing; 21 Nov. 2008; vol. 46, no. 11; pp. 3804-3814 *
Hyperspectral image classification method based on PCANet; 刁许玲; China Master's Theses Full-text Database, Engineering Science and Technology II; 15 May 2021; no. 05; pp. C028-159 *
Research on multi-source remote sensing image fusion and its applications; 刘金梅; China Doctoral Dissertations Full-text Database, Information Science and Technology; 15 Nov. 2014; no. 11; pp. I140-31 *

Also Published As

Publication number Publication date
CN109492593A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109492593B (en) Hyperspectral image classification method based on principal component analysis network and space coordinates
CN107451614B (en) Hyperspectral classification method based on fusion of space coordinates and space spectrum features
Tirandaz et al. PolSAR image segmentation based on feature extraction and data compression using weighted neighborhood filter bank and hidden Markov random field-expectation maximization
CN110298414B (en) Hyperspectral image classification method based on denoising combination dimensionality reduction and guided filtering
CN106503739A (en) The target in hyperspectral remotely sensed image svm classifier method and system of combined spectral and textural characteristics
Liu et al. Multiscale Dense Cross‐Attention Mechanism with Covariance Pooling for Hyperspectral Image Scene Classification
Chauhan et al. An efficient data mining classification approach for detecting lung cancer disease
Bhargava et al. Machine learning–based detection and sorting of multiple vegetables and fruits
CN107563442B (en) Hyperspectral image classification method based on sparse low-rank regular graph tensor embedding
Safdar et al. Intelligent microscopic approach for identification and recognition of citrus deformities
CN107239759B (en) High-spatial-resolution remote sensing image transfer learning method based on depth features
CN105718942B (en) High spectrum image imbalance classification method based on average drifting and over-sampling
CN103208011B (en) Based on average drifting and the hyperspectral image space-spectral domain classification method organizing sparse coding
Rathod et al. Leaf disease detection using image processing and neural network
Boggavarapu et al. A new framework for hyperspectral image classification using Gabor embedded patch based convolution neural network
Liu et al. Multimorphological superpixel model for hyperspectral image classification
CN107871132B (en) Hyperspectral image classification method for spatial feature adaptive optimization
CN108399355B (en) Hyperspectral image classification method based on spatial information adaptive fusion
CN102521605A (en) Wave band selection method for hyperspectral remote-sensing image
CN111310571B (en) Hyperspectral image classification method and device based on spatial-spectral-dimensional filtering
Su A filter-based post-processing technique for improving homogeneity of pixel-wise classification data
Li et al. Ensemble EMD-based spectral-spatial feature extraction for hyperspectral image classification
CN106778885A (en) Hyperspectral image classification method based on local manifolds insertion
Valliammal et al. A novel approach for plant leaf image segmentation using fuzzy clustering
CN103390170A (en) Surface feature type texture classification method based on multispectral remote sensing image texture elements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant