CN113205143A - Multi-scale superpixel hyperspectral remote sensing image classification method based on space-spectrum coupling characteristics - Google Patents

Multi-scale superpixel hyperspectral remote sensing image classification method based on space-spectrum coupling characteristics

Info

Publication number
CN113205143A
CN113205143A (application CN202110507336.7A)
Authority
CN
China
Prior art keywords
pixel
pixels
training set
matrix
kernel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110507336.7A
Other languages
Chinese (zh)
Inventor
王�华
陈梦奇
黄伟
殷君茹
李志刚
陈启强
吴庆岗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University of Light Industry
Original Assignee
Zhengzhou University of Light Industry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University of Light Industry
Priority to CN202110507336.7A
Publication of CN113205143A
Legal status: Pending

Classifications

    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/25 Fusion techniques
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-scale superpixel hyperspectral remote sensing image classification method based on coupled spatial-spectral features. The method comprises the following steps: dividing a hyperspectral remote sensing image data set into a training set and a test set, and performing dimensionality reduction on the training set with principal component analysis (PCA) to obtain the effective spectral bands; performing superpixel segmentation of the effective spectral bands at several scales with the entropy-rate superpixel segmentation algorithm (ERS); computing the similarity between any two superpixels with an RBF kernel function to obtain the spatial-spectrum kernel matrix K_pp of the training set; computing the similarity between any two pixels in the training set with a polynomial kernel function to obtain the original spectral kernel matrix K_yp of the training set; fusing the spatial-spectrum kernel matrix K_pp with the original spectral kernel matrix K_yp to obtain a multi-scale superpixel spatial-spectrum synthesis kernel matrix, and training an SVM classifier model on it; and classifying the test set with the trained SVM classifier model and outputting the corresponding ground-feature classification image.

Description

Multi-scale superpixel hyperspectral remote sensing image classification method based on space-spectrum coupling characteristics
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a multi-scale superpixel hyperspectral remote sensing image classification method based on coupled spatial spectral features.
Background
Owing to their high timeliness, remote sensing images have long served as a primary data source for information acquisition and updating in tasks such as land-use classification, medical image processing, and target detection and identification. Among them, hyperspectral remote sensing images can markedly improve the separability and recognizability of different objects by virtue of their rich spectral information, and have therefore attracted strong interest from researchers. A hyperspectral remote sensing image contains not only the spatial information of ground objects but also their rich spectral information, so exploring hyperspectral image classification methods is of great significance for distinguishing ground objects and grasping regional land-cover information in real time. Existing hyperspectral remote sensing image classification research mostly performs image segmentation with a superpixel method at a single scale: the optimal number of superpixels cannot be determined, image detail is easily overlooked, and a single kernel matrix cannot represent multi-feature information, all of which reduce classification accuracy.
Current algorithms for classifying hyperspectral remote sensing images include decision trees, support vector machines (SVMs), deep learning, and the like. Compared with ordinary remote sensing images, hyperspectral remote sensing images contain both ground-object spatial features and a large number of fine spectral features, so each image carries a huge number of pixel-level features. If a traditional pixel-by-pixel method is used to extract these features, the influence of noise is amplified and the importance of clustering similar features is often ignored, which reduces classification accuracy. Superpixel segmentation is an image feature clustering method that groups pixels with similar features into small regions, thereby abstracting a pixel-level image into region-level high-dimensional data.
The superpixel method has already been applied in many fields: Rajalakshmi C et al. performed moving-object detection based on superpixels; Liu Lijun et al. performed medical image segmentation based on superpixels; Kisang Kim et al. achieved indoor space recognition based on superpixels; and so on. Applying superpixel segmentation to HSI classification or target detection makes it possible to extract more effective spatial features of the samples and to improve classification performance or target-detection efficiency. These studies show that superpixel-based improvements can effectively raise target classification accuracy, but applying superpixel segmentation to HSI classification still has the following shortcomings: (1) the uncertainty of the initial superpixel number easily leads to image features that are not fine or comprehensive enough, and if the extracted features contain excessive interference information or lose key information, classification accuracy suffers greatly; (2) most superpixel-related classification methods study classification with a single kernel function and easily overlook the positive effect that kernel-function fusion has on classification accuracy.
Disclosure of Invention
Aiming at the low classification accuracy of existing hyperspectral remote sensing image classification methods, the invention provides a multi-scale superpixel hyperspectral remote sensing image classification method with coupled spatial-spectral features, which can effectively improve the classification accuracy of hyperspectral remote sensing images.
The invention provides a multi-scale superpixel hyperspectral remote sensing image classification method with coupled spatial-spectral features, which comprises the following steps:
Step 1: dividing a hyperspectral remote sensing image data set into a training set and a test set, and performing dimensionality reduction on the training set by using a principal component analysis (PCA) method to obtain the effective spectral bands;
Step 2: at different scales, using the entropy-rate-based superpixel segmentation algorithm (ERS) to perform superpixel segmentation of the effective spectral bands;
Step 3: calculating the similarity between any two superpixels through the RBF kernel function to obtain the spatial-spectrum kernel matrix K_pp of the training set; calculating the similarity between any two pixels in the training set through a polynomial kernel function to obtain the original spectral kernel matrix K_yp of the training set;
Step 4: fusing the spatial-spectrum kernel matrix K_pp with the original spectral kernel matrix K_yp to obtain a multi-scale superpixel spatial-spectrum synthesis kernel matrix, and then training an SVM classifier model;
Step 5: classifying the test set with the trained SVM classifier model, and outputting the corresponding ground-feature classification image.
Further, the step 1 specifically comprises:
Step 1.1: suppose the training set has n pixels {X_1, X_2, ..., X_n} and pixel X_i has the label value Y_i; calculate the covariance cov(X_i, X_j) between any two pixels X_i, X_j in the training set, and thereby obtain the covariance matrix C_{n×n} of the training set;
Step 1.2: compute the eigenvalues Λ = {λ_1, λ_2, λ_3, ..., λ_n} and eigenvectors E = {ξ_1, ξ_2, ξ_3, ..., ξ_n} of the covariance matrix C_{n×n}; then select from E the p column vectors corresponding to the p largest eigenvalues to construct the pattern matrix E_p = [ξ_1, ξ_2, ξ_3, ..., ξ_p];
Step 1.3: perform decentralization on each pixel in the training set (subtract the pixel mean X̄) to obtain the intermediate matrix
Q = [X_1 - X̄, X_2 - X̄, ..., X_n - X̄];
the image matrix after dimensionality reduction is [E_p^T × Q^T]^T, where T denotes transposition.
Further, the step 2 specifically includes:
Step 2.1: construct a graph G = (V, E) over the effective spectral bands, and set the objective function for superpixel segmentation:
max_A  H(A) + λ·B(A),  subject to A ⊆ E
wherein V is the vertex set corresponding to the pixels in the training set, E is the edge set corresponding to the similarity between adjacent pixels, H(A) denotes the entropy rate, B(A) denotes the balance term, A denotes the selected edge set, and λ ≥ 0 is the weight of the balance term;
Step 2.2: calculate, through the objective function, the function value of the edge between each pixel vertex and its adjacent pixels;
Step 2.3: delete the edge with the maximum function value, so that the two pixels on the deleted edge belong to the same superpixel;
Step 2.4: repeat step 2.2 and step 2.3 until the number of superpixels reaches the set superpixel value L.
Further, the step 3 specifically includes:
Step 3.1: traverse each superpixel position in the training set and, using the RBF kernel function K(·,·), calculate the similarity <Φ(S_i), Φ(S_j)> between any two superpixels S_i and S_j across the different scales, thereby obtaining the spatial-spectrum kernel matrix K_pp composed of all the similarity values between superpixels; wherein:
<Φ(S_i), Φ(S_j)> = (1/m) · Σ_{s=1}^{m} exp( -‖S̄_i - S̄_j‖² / (2δ²) )
where S̄_i = (1/k) · Σ_{e=1}^{k} X_ie denotes the spectral mean of the set of pixels in the superpixel S_i = {X_i1, X_i2, X_i3, ..., X_ik}, with X_ie the e-th pixel of S_i; S̄_j = (1/k) · Σ_{e=1}^{k} X_je denotes the spectral mean of the set of pixels in the superpixel S_j = {X_j1, X_j2, X_j3, ..., X_jk}, with X_je the e-th pixel of S_j; k is the total number of pixels in the superpixels S_i and S_j; m denotes the set number of scales; s denotes the scale number; δ denotes the width parameter;
Step 3.2: using a polynomial kernel function, calculate the similarity <Φ(X_i), Φ(X_j)> between any two pixels X_i, X_j in the training set, thereby obtaining the original spectral kernel matrix K_yp composed of all the similarity values between pixels; wherein:
<Φ(X_i), Φ(X_j)> = ( Σ_{f=1}^{c} (l_if - L̄_i)(l_jf - L̄_j) + 1 )^d
where d denotes the order of the polynomial function and is a positive integer; L_i = (l_i1, l_i2, ..., l_ic) denotes the spectral values of pixel X_i over the c bands, with l_if the spectral value of X_i in the f-th band; L̄_i = (1/c) · Σ_{f=1}^{c} l_if denotes the spectral mean of pixel X_i over the c bands; L_j = (l_j1, l_j2, ..., l_jc) denotes the spectral values of pixel X_j over the c bands, with l_jf the spectral value of X_j in the f-th band; L̄_j = (1/c) · Σ_{f=1}^{c} l_jf denotes the spectral mean of pixel X_j over the c bands.
Further, the step 4 specifically includes:
Step 4.1: fuse the spatial-spectrum kernel matrix K_pp with the original spectral kernel matrix K_yp by weight assignment to obtain the multi-scale superpixel spatial-spectrum synthesis kernel matrix K_Ms-RPSK:
K_Ms-RPSK = μ·K_pp + (1 - μ)·K_yp
wherein μ is the weight-balance parameter;
Step 4.2: pair each pixel X_i in the training set with its label value Y_i to form a data pair, thereby obtaining the training sample set S = {(X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n)} of the SVM classifier, where n denotes the number of pixels in the training set;
Step 4.3: take the multi-scale superpixel spatial-spectrum synthesis kernel matrix as the kernel function of the SVM classifier, and train with the training sample set S to obtain the SVM classifier model.
The invention has the following beneficial effects:
1. Aiming at the high-dimensional spectral feature information of hyperspectral images, the invention proposes the Ms-RPSK model, which can effectively solve the problems of imprecise image feature extraction and an inaccurate initial superpixel number, can significantly improve HSI classification accuracy, and helps to grasp current land-resource utilization accurately, which is of great significance for the compilation and implementation of future territorial spatial planning.
2. When the spatial-spectrum information and the original spectral information are fused, they are combined by weight assignment, and a better classification model is obtained by testing different weight values; the regional classification prediction maps obtained with the Ms-RPSK model are more aggregated at the spatial level, small-area ground objects are classified accurately, and ground objects with similar spectral characteristics as well as ground objects scattered around large areas can be distinguished.
3. Compared with existing hyperspectral remote sensing image classification methods based on a single kernel function, the fused kernel obtained by the invention combines the spatial and spectral characteristics of the objects, makes full use of the spatial autocorrelation of ground objects, and improves the classification accuracy of HSI.
Drawings
FIG. 1 is a schematic flow chart of a multi-scale superpixel hyperspectral remote sensing image classification method of coupled spatial spectral features according to an embodiment of the invention;
FIG. 2 is a schematic frame diagram of a multi-scale superpixel hyperspectral remote sensing image classification method of coupled spatial spectral features according to an embodiment of the present invention;
fig. 3 is a data set surface feature type and sample labeling diagram provided by the embodiment of the present invention: a (1) -a (3) represent stereoscopic display images; b (1) -b (3) represent actual ground object images; c (1) -c (3) represent sample marker templates;
fig. 4 is a diagram of a test set classification result provided in the embodiment of the present invention: (4-1) representing the result of classification on the Pavia University dataset; (4-2) represents the result of classification on the Pavia Center dataset; (4-3) represents the results of classification on Washington DC Mall data set;
fig. 5 is a diagram of a data set prediction classification result provided in the embodiment of the present invention: (5-1) representing the result of classification on the Pavia University dataset; (5-2) represents the results of classification on the Pavia Center dataset; (5-3) results of classification on Washington DC Mall data set are shown;
fig. 6 is a diagram of relative error results of the models provided by the embodiment of the present invention: (a) a graph representing the relative error results on the Pavia University dataset; (b) a graph showing the relative error results on the Pavia Center dataset; (c) relative error results are shown on Washington DC Mall data set.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With reference to fig. 1 and fig. 2, an embodiment of the present invention provides a method for classifying multi-scale superpixel hyperspectral remote sensing images with coupled spatial-spectral features (the Ms-RPSK method for short), including the following steps:
S101: dividing a hyperspectral remote sensing image data set into a training set and a test set, and performing dimensionality reduction on the training set by using a principal component analysis (PCA) method to obtain the effective spectral bands;
S102: at different scales, using the entropy-rate-based superpixel segmentation algorithm (ERS) to perform superpixel segmentation of the effective spectral bands;
S103: calculating the similarity between any two superpixels through the RBF kernel function to obtain the spatial-spectrum kernel matrix K_pp of the training set; calculating the similarity between any two pixels in the training set through a polynomial kernel function to obtain the original spectral kernel matrix K_yp of the training set;
S104: fusing the spatial-spectrum kernel matrix K_pp with the original spectral kernel matrix K_yp to obtain a multi-scale superpixel spatial-spectrum synthesis kernel matrix, and then training an SVM classifier model;
S105: classifying the test set with the trained SVM classifier model, and outputting the corresponding ground-feature classification image.
As an implementation manner, the step S101 specifically includes:
S1011: suppose the training set has n pixels {X_1, X_2, ..., X_n} and pixel X_i has the label value Y_i (the label value of a pixel indicates the ground-object class of that pixel); calculate the covariance cov(X_i, X_j) between any two pixels X_i, X_j in the training set, and thereby obtain the covariance matrix C_{n×n} of the training set.
Specifically, first calculate the pixel mean
X̄ = (1/n) · Σ_{i=1}^{n} X_i,
then calculate the covariance between different pixels X_i, X_j,
cov(X_i, X_j) = E[(X_i - X̄)(X_j - X̄)],
and finally obtain the covariance matrix of the training set
C_{n×n} = [ cov(X_i, X_j) ], i, j = 1, ..., n.
S1012: compute the eigenvalues Λ = {λ_1, λ_2, λ_3, ..., λ_n} and eigenvectors E = {ξ_1, ξ_2, ξ_3, ..., ξ_n} of the covariance matrix C_{n×n}; then select from E the p column vectors corresponding to the p largest eigenvalues to construct the pattern matrix E_p = [ξ_1, ξ_2, ξ_3, ..., ξ_p].
S1013: perform decentralization on each pixel in the training set to obtain the intermediate matrix
Q = [X_1 - X̄, X_2 - X̄, ..., X_n - X̄].
The image matrix after dimensionality reduction is [E_p^T × Q^T]^T, where T denotes transposition.
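For illustration only, a minimal NumPy sketch of this dimensionality-reduction step is given below. It uses the conventional band-covariance formulation of PCA (the c × c covariance over spectral bands) rather than the n × n pixel-covariance matrix written above; the two formulations share their leading eigen-structure. The function and variable names (`pca_reduce`, `pixels`, `n_components`) are ours, not the patent's.

```python
import numpy as np

def pca_reduce(pixels, n_components=1):
    """Rough sketch of S1011-S1013: reduce an (n, c) matrix of pixel
    spectra to n_components principal components."""
    mean = pixels.mean(axis=0)                  # mean spectrum of the training pixels
    Q = pixels - mean                           # decentred matrix Q (S1013)
    cov = np.cov(Q, rowvar=False)               # c x c band covariance (band form of S1011)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigen-decomposition (S1012)
    order = np.argsort(eigvals)[::-1][:n_components]
    E_p = eigvecs[:, order]                     # pattern matrix E_p
    return Q @ E_p                              # reduced image matrix, shape (n, p)

# Usage (hypothetical data): rows are pixels, columns are spectral bands.
# pixels = np.random.rand(500, 115)
# reduced = pca_reduce(pixels, n_components=1)
```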
As an implementation manner, the step S102 specifically includes:
S1021: construct a graph G = (V, E) over the effective spectral bands (i.e., over the first p PCA components), and set the objective function for superpixel segmentation:
max_A  H(A) + λ·B(A),  subject to A ⊆ E
wherein V is the vertex set corresponding to the pixels in the training set, E is the edge set corresponding to the similarity between adjacent pixels, H(A) denotes the entropy rate, B(A) denotes the balance term, A denotes the selected edge set, and λ ≥ 0 is the weight of the balance term; B(A) mainly serves to promote clusters of similar size and to reduce the number of unbalanced superpixels;
S1022: calculate, through the objective function, the function value of the edge between each pixel vertex and its adjacent pixels;
S1023: delete the edge with the maximum function value, so that the two pixels on the deleted edge belong to the same superpixel;
S1024: repeat step S1022 and step S1023 until the number of superpixels reaches the set superpixel value L.
As an implementation manner, the step S103 specifically includes:
S1031: traverse each superpixel position in the training set and, using the RBF kernel function K(·,·), calculate the similarity <Φ(S_i), Φ(S_j)> between any two superpixels S_i and S_j across the different scales, thereby obtaining the spatial-spectrum kernel matrix K_pp composed of all the similarity values between superpixels; wherein:
<Φ(S_i), Φ(S_j)> = (1/m) · Σ_{s=1}^{m} exp( -‖S̄_i - S̄_j‖² / (2δ²) )
where S̄_i = (1/k) · Σ_{e=1}^{k} X_ie denotes the spectral mean of the set of pixels in the superpixel S_i = {X_i1, X_i2, X_i3, ..., X_ik}, with X_ie the e-th pixel of S_i; S̄_j = (1/k) · Σ_{e=1}^{k} X_je denotes the spectral mean of the set of pixels in the superpixel S_j = {X_j1, X_j2, X_j3, ..., X_jk}, with X_je the e-th pixel of S_j; k is the total number of pixels in the superpixels S_i and S_j; m denotes the set number of scales; s denotes the scale number; δ denotes the width parameter, which has a large influence on classification accuracy.
Specifically, the RBF kernel function is calculated as K(F_1, F_2) = exp( -‖F_1 - F_2‖² / (2δ²) ). From this, the similarity <Φ(S_i), Φ(S_j)>* between the superpixels S_i and S_j at a single scale can be calculated:
<Φ(S_i), Φ(S_j)>* = exp( -‖S̄_i - S̄_j‖² / (2δ²) ).
Furthermore, the similarity <Φ(S_i), Φ(S_j)> between any two superpixels S_i and S_j across the different scales is obtained by averaging the single-scale similarities over the m scales:
<Φ(S_i), Φ(S_j)> = (1/m) · Σ_{s=1}^{m} <Φ(S_i), Φ(S_j)>*_s.
The spatial-spectrum kernel matrix is then
K_pp = [ <Φ(S_i), Φ(S_j)> ]_{n×n},
whose (i, j)-th entry is the multi-scale similarity between the superpixels containing the i-th and j-th training pixels.
S1032: using a polynomial kernel function, calculate the similarity <Φ(X_i), Φ(X_j)> between any two pixels X_i, X_j in the training set, thereby obtaining the original spectral kernel matrix K_yp composed of all the similarity values between pixels; wherein:
<Φ(X_i), Φ(X_j)> = ( Σ_{f=1}^{c} (l_if - L̄_i)(l_jf - L̄_j) + 1 )^d
where d denotes the order of the polynomial function and is a positive integer; L_i = (l_i1, l_i2, ..., l_ic) denotes the spectral values of pixel X_i over the c bands, with l_if the spectral value of X_i in the f-th band; L̄_i = (1/c) · Σ_{f=1}^{c} l_if denotes the spectral mean of pixel X_i over the c bands; L_j = (l_j1, l_j2, ..., l_jc) denotes the spectral values of pixel X_j over the c bands, with l_jf the spectral value of X_j in the f-th band; L̄_j = (1/c) · Σ_{f=1}^{c} l_jf denotes the spectral mean of pixel X_j over the c bands.
Specifically, the original spectral kernel matrix is
K_yp = [ <Φ(X_i), Φ(X_j)> ]_{n×n}.
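A sketch of how the two kernel matrices of step S103 could be assembled in NumPy is given below. The multi-scale K_pp averages an RBF similarity between the mean spectra of the superpixels containing each pair of training pixels, and K_yp applies a polynomial kernel to the band-mean-centred pixel spectra. The +1 offset, the width `delta`, and the helper names are illustrative assumptions rather than the patent's exact implementation.

```python
import numpy as np

def superpixel_means(cube, label_map):
    """Mean spectrum of every superpixel; cube has shape (rows, cols, bands)."""
    flat = cube.reshape(-1, cube.shape[-1])
    labels = label_map.ravel()
    sums = np.zeros((labels.max() + 1, flat.shape[1]))
    np.add.at(sums, labels, flat)                      # accumulate spectra per superpixel
    counts = np.bincount(labels, minlength=labels.max() + 1)
    return sums / counts[:, None]

def spatial_spectrum_kernel(cube, label_maps, pixel_rc, delta=1.0):
    """K_pp: average over scales of RBF similarities between the mean spectra
    of the superpixels that contain each pair of training pixels."""
    K = np.zeros((len(pixel_rc), len(pixel_rc)))
    for lm in label_maps:                              # one label map per scale
        means = superpixel_means(cube, lm)
        feats = np.array([means[lm[r, c]] for r, c in pixel_rc])
        d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)
        K += np.exp(-d2 / (2.0 * delta ** 2))          # RBF similarity at this scale
    return K / len(label_maps)                         # multi-scale average

def polynomial_pixel_kernel(spectra, d=3):
    """K_yp: polynomial kernel over band-mean-centred pixel spectra, order d."""
    centred = spectra - spectra.mean(axis=1, keepdims=True)
    return (centred @ centred.T + 1.0) ** d

# Usage (hypothetical): pixel_rc is a list of (row, col) training-pixel positions,
# spectra the corresponding (n_train, bands) array of raw spectra.
```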
as an implementation manner, the step S104 specifically includes:
s1041: subjecting the spatial spectrum to a kernel matrix KppWith said original spectral kernel matrix KypObtaining a multi-scale superpixel space spectrum synthesis kernel matrix K by fusion in a weight distribution modeMs-RPSK
KMs-RPSK=μKpp+(1-μ)Kyp
Wherein mu is a weight balance parameter;
s1042: pixel X in training setiWith its label value YiForming a data pair, and further obtaining a training sample set S { (X) of the SVM classifier1,Y1),(X2,Y2),…,(Xn,Yn) }; n represents the number of pixels in the training set;
s1043: and taking the multi-scale superpixel space spectrum synthesis kernel matrix as a kernel function of the SVM classifier, and training by adopting a training sample set S to obtain an SVM classifier model.
The effectiveness and the applicability of the present invention will be described in detail through experiments with reference to fig. 3 to 6.
The experimental data processing was performed on the MATLAB R2018a platform, a support vector machine algorithm was used to train the classification model, and the computing environment was a PC with an AMD Ryzen 4800H CPU at 2.90 GHz and 16 GB of memory. The comparison algorithms in the experiments include: a multi-scale superpixel spatial-spectrum synthesis kernel (Ms-SSSK) method, a single-scale superpixel spatial-spectrum synthesis kernel (Ss-SSSK) method, a synthesis kernel with watershed segmentation (WSCSVM) method, an original spatial-spectrum kernel (SSK) method, and a segmented-wavelength synthesis kernel (CK) method.
In order to verify the effectiveness and practicability of the invention, image classification experiments were carried out on data acquired by the German airborne Reflective Optics System Imaging Spectrometer (ROSIS). The specific experiments are as follows:
ROSIS-3 acquires images of 610 × 340 pixels with 115 spectral bands (0.43–0.86 μm) and a spatial resolution of up to 1.3 m. To quantitatively evaluate the fusion results, simulation experiments were performed on these data. First, the first principal component of the HSI is extracted by PCA, and superpixel segmentation is performed on it with the ERS algorithm at four scales (400, 800, 1600 and 3200 superpixels); at each scale, the similarity between any two superpixels is calculated with the RBF kernel function and used to represent the similarity between all pixels in the two superpixels, forming one kernel matrix per scale. The kernel matrices of all scales are then accumulated and averaged to form the final Ms-SSK kernel matrix. The SK kernel matrix is obtained as follows: for any pixel in the HSI, the mean value over all bands is computed, and the similarity between the mean values of any two pixels is calculated with the RBF kernel function, forming the SK kernel matrix. The Ms-SSK kernel matrix and the SK kernel matrix are then combined through the weight to form Ms-SSSK and realize HSI classification. Finally, the given hyperspectral image data set is taken as the reference image, compared with the other classification methods, and the corresponding quantitative performance indices are calculated.
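For orientation, the weight-based fusion and the precomputed-kernel SVM training used throughout these experiments might look as follows in outline. The weight mu and the penalty C are free parameters (C = 16.9873 is the grid-search value reported in the next paragraph), and the function name and argument layout are ours, not the patent's.

```python
import numpy as np
from sklearn.svm import SVC

def train_and_classify(K_pp, K_yp, y_train, K_pp_test, K_yp_test,
                       mu=0.5, C=16.9873):
    """Fuse the two kernels with weight mu (K = mu*K_pp + (1-mu)*K_yp),
    train an SVM on the precomputed composite kernel, then classify.
    K_pp / K_yp are (n_train, n_train); the *_test matrices hold the same
    similarities computed between test pixels and training pixels."""
    K_train = mu * K_pp + (1.0 - mu) * K_yp            # composite kernel K_Ms-RPSK
    clf = SVC(kernel='precomputed', C=C)
    clf.fit(K_train, y_train)
    K_test = mu * K_pp_test + (1.0 - mu) * K_yp_test
    return clf.predict(K_test)
```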
The effectiveness and feasibility of the Ms-RPSK method were verified on three HSI data sets, Pavia University, Pavia Center and Washington DC Mall, and the performance of the classification model was verified by 7-fold cross-validation. The optimal parameter values in the experiments were obtained by grid search: the RBF kernel parameter g is 4.5639, the penalty factor c is 16.9873, and the maximum order d of the polynomial kernel function is set to 3. The results of the comparative analysis between the conventional image classification methods and the image classification method of the invention are shown in fig. 4, where (4-1) is the classification result on the Pavia University data set, (4-2) is the classification result on the Pavia Center data set, and (4-3) is the classification result on the Washington DC Mall data set; fig. 3 shows the original hyperspectral images. As can be seen from fig. 3, in the Pavia University data set the bare soil and grassland are relatively concentrated while the gravel and bricks are relatively dispersed and close to each other, so bricks are easily misclassified as gravel when the gravel area is large; the trees in the study area are not sufficiently aggregated in the spatial pattern and are spread over the whole area, and they are distributed around the asphalt pavement, so the asphalt pavement is easily misclassified as trees. As shown in (4-1) of fig. 4, the classification accuracy of asphalt pavement, gravel and bricks obtained with the Ms-RPSK method is greatly improved, the error is small, and the HSI classification accuracy is high. In the Pavia Center data there are many small areas of grassland, bare soil and trees with a wide distribution range, and the spectral information of grassland and trees is similar, so they are easily confused in classification; in addition, bare soil and grassland lie close to each other, and when the bare-soil area is large the grassland is easily classified as bare soil; the asphalt in the study area is densely distributed in the spatial pattern and the bricks are distributed around it, so bricks are easily misclassified as asphalt. From the classification result in (4-2) of fig. 4 it can be seen that the red points in the gray and brown regions of the Ms-RPSK method are clearly fewer than those of the other classification methods, and the classification accuracy of bricks and tiles obtained with the Ms-RPSK method is greatly improved, indicating that the classification accuracy of the Ms-RPSK method is indeed higher than that of the other methods. As can be seen from (4-3) of fig. 4, the area occupied by shadow in this data set is small and scattered, and when the house area is large, houses and shadows are easily classified into one class; forests and grasslands in the study area are not sufficiently aggregated in the spatial configuration, are spread throughout the area, and have relatively similar spectral characteristics, so they are easily confused in classification; the highway occupies a large area in the whole scene and is adjacent to pixels of grassland, forest, houses and the like, so it is easily misclassified into those land types. From the classification result of (4-3) in fig. 4 it can be seen that the classification accuracy of houses and shadows obtained by Ms-RPSK classification is greatly improved.
The original images were classified on the test set with the trained model, and the predicted classification results over the whole regions of the three data sets are shown in (5-1), (5-2) and (5-3) of fig. 5. From the areas marked by boxes in (5-1) and (5-2) of fig. 5, it can be seen that the regional classification prediction maps obtained with the Ms-RPSK model are more aggregated at the spatial level and that small-area ground objects are classified accurately; as shown by the boxed area in (5-3) of fig. 5, the regional classification prediction map obtained with the Ms-RPSK model can better distinguish ground objects with similar spectral characteristics and ground objects scattered around large areas, such as grassland and forest that lie close together, shadows around the houses, and the like.
Meanwhile, the invention examines how the classification accuracy of the six methods changes on sample sets of different sizes. The number of training samples for the Pavia University data set is set to 200, 400, 600, 800 and 1000, and the number of training samples for the Pavia Center and Washington DC Mall data sets is set to 200, 400, 600 and 800; samples of the set sizes are randomly selected from the sample sets to train the classification models, and the remaining samples are used as test sets to verify model performance. The relative error of each model on the test sets of different sizes is shown in fig. 6. As can be seen from fig. 6, when the number of samples of the three data sets increases from 200 to 800, the classification error of each model keeps decreasing. When the number of samples is 200, the relative error of the Ms-RPSK model is within 8–9.5%; the classification error of the model is then at its largest but still lower than that of the other five models. As the number of samples increases, the difference in classification accuracy gradually decreases to 0.3–1.3%. These results show that the spatial-spectrum synthesis kernel method of the Ms-RPSK model can learn the similarity characteristics of the samples in the kernel space and integrate the multi-dimensional features of the image to obtain finer and fuller image information, and that a satisfactory HSI classification accuracy can still be obtained when the training sample set is small.
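The sample-size experiment above can be sketched as follows with a precomputed composite kernel over all labelled pixels. The paper uses 7-fold cross-validation and class-wise sampling, whereas this simplified version draws a single random split per size and reports overall accuracy, so it is illustrative only; the function and variable names are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def accuracy_vs_train_size(K_full, labels, sizes=(200, 400, 600, 800),
                           C=16.9873, seed=0):
    """For each training-set size, draw that many labelled pixels at random,
    train on the matching block of the precomputed kernel, and score the rest."""
    rng = np.random.default_rng(seed)
    results = {}
    for n_train in sizes:
        idx = rng.permutation(len(labels))
        tr, te = idx[:n_train], idx[n_train:]
        clf = SVC(kernel='precomputed', C=C)
        clf.fit(K_full[np.ix_(tr, tr)], labels[tr])
        pred = clf.predict(K_full[np.ix_(te, tr)])
        results[n_train] = (pred == labels[te]).mean()   # overall accuracy (OA)
    return results
```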
Table 1 shows the performance index profiles of the method of the invention and the comparison methods. The following performance indices were used in this experiment: the classification accuracy of each ground-object class and the overall classification accuracy (OA) on the test set.
In table 1, bold numbers indicate the best values in each index, and suboptimal values in each index are indicated by underlined numbers. From the point of view of various objective evaluation indexes of image classification, various indexes of the method provided by the invention are superior to those of other methods.
TABLE 1 results of image classification quantitative evaluation of data sets by different methods
[Table 1 is reproduced as images in the original publication; the per-class accuracies and OA values are not transcribed here.]
The experimental results show that the proposed coupled multi-scale superpixel synthesis kernel method can effectively improve the classification accuracy of hyperspectral remote sensing images: the regional classification prediction maps obtained with the model are more aggregated at the spatial level, small-area ground objects are classified accurately, and ground objects with similar spectral characteristics as well as ground objects scattered around large areas are distinguished better; meanwhile, a satisfactory HSI classification accuracy can still be obtained when the training sample set is small.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (5)

1. A method for classifying multi-scale superpixel hyperspectral remote sensing images with coupled spatial-spectral features, characterized by comprising the following steps:
Step 1: dividing a hyperspectral remote sensing image data set into a training set and a test set, and performing dimensionality reduction on the training set by using a principal component analysis (PCA) method to obtain the effective spectral bands;
Step 2: at different scales, using the entropy-rate-based superpixel segmentation algorithm (ERS) to perform superpixel segmentation of the effective spectral bands;
Step 3: calculating the similarity between any two superpixels through the RBF kernel function to obtain the spatial-spectrum kernel matrix K_pp of the training set; calculating the similarity between any two pixels in the training set through a polynomial kernel function to obtain the original spectral kernel matrix K_yp of the training set;
Step 4: fusing the spatial-spectrum kernel matrix K_pp with the original spectral kernel matrix K_yp to obtain a multi-scale superpixel spatial-spectrum synthesis kernel matrix, and then training an SVM classifier model;
Step 5: classifying the test set with the trained SVM classifier model, and outputting the corresponding ground-feature classification image.
2. The method according to claim 1, wherein step 1 specifically comprises:
Step 1.1: suppose the training set has n pixels {X_1, X_2, ..., X_n} and pixel X_i has the label value Y_i; calculate the covariance cov(X_i, X_j) between any two pixels X_i, X_j in the training set, and thereby obtain the covariance matrix C_{n×n} of the training set;
Step 1.2: compute the eigenvalues Λ = {λ_1, λ_2, λ_3, ..., λ_n} and eigenvectors E = {ξ_1, ξ_2, ξ_3, ..., ξ_n} of the covariance matrix C_{n×n}; then select from E the p column vectors corresponding to the p largest eigenvalues to construct the pattern matrix E_p = [ξ_1, ξ_2, ξ_3, ..., ξ_p];
Step 1.3: perform decentralization on each pixel in the training set (subtract the pixel mean X̄) to obtain the intermediate matrix Q = [X_1 - X̄, X_2 - X̄, ..., X_n - X̄]; the image matrix after dimensionality reduction is [E_p^T × Q^T]^T, where T denotes transposition.
3. The method according to claim 1, wherein step 2 specifically comprises:
Step 2.1: construct a graph G = (V, E) over the effective spectral bands, and set the objective function for superpixel segmentation:
max_A  H(A) + λ·B(A),  subject to A ⊆ E
wherein V is the vertex set corresponding to the pixels in the training set, E is the edge set corresponding to the similarity between adjacent pixels, H(A) denotes the entropy rate, B(A) denotes the balance term, A denotes the selected edge set, and λ ≥ 0 is the weight of the balance term;
Step 2.2: calculate, through the objective function, the function value of the edge between each pixel vertex and its adjacent pixels;
Step 2.3: delete the edge with the maximum function value, so that the two pixels on the deleted edge belong to the same superpixel;
Step 2.4: repeat step 2.2 and step 2.3 until the number of superpixels reaches the set superpixel value L.
4. The method according to claim 1, wherein step 3 specifically comprises:
Step 3.1: traverse each superpixel position in the training set and, using the RBF kernel function K(·,·), calculate the similarity <Φ(S_i), Φ(S_j)> between any two superpixels S_i and S_j across the different scales, thereby obtaining the spatial-spectrum kernel matrix K_pp composed of all the similarity values between superpixels; wherein:
<Φ(S_i), Φ(S_j)> = (1/m) · Σ_{s=1}^{m} exp( -‖S̄_i - S̄_j‖² / (2δ²) )
where S̄_i = (1/k) · Σ_{e=1}^{k} X_ie denotes the spectral mean of the set of pixels in the superpixel S_i = {X_i1, X_i2, X_i3, ..., X_ik}, with X_ie the e-th pixel of S_i; S̄_j = (1/k) · Σ_{e=1}^{k} X_je denotes the spectral mean of the set of pixels in the superpixel S_j = {X_j1, X_j2, X_j3, ..., X_jk}, with X_je the e-th pixel of S_j; k is the total number of pixels in the superpixels S_i and S_j; m denotes the set number of scales; s denotes the scale number; δ denotes the width parameter;
Step 3.2: using a polynomial kernel function, calculate the similarity <Φ(X_i), Φ(X_j)> between any two pixels X_i, X_j in the training set, thereby obtaining the original spectral kernel matrix K_yp composed of all the similarity values between pixels; wherein:
<Φ(X_i), Φ(X_j)> = ( Σ_{f=1}^{c} (l_if - L̄_i)(l_jf - L̄_j) + 1 )^d
where d denotes the order of the polynomial function and is a positive integer; L_i = (l_i1, l_i2, ..., l_ic) denotes the spectral values of pixel X_i over the c bands, with l_if the spectral value of X_i in the f-th band; L̄_i = (1/c) · Σ_{f=1}^{c} l_if denotes the spectral mean of pixel X_i over the c bands; L_j = (l_j1, l_j2, ..., l_jc) denotes the spectral values of pixel X_j over the c bands, with l_jf the spectral value of X_j in the f-th band; L̄_j = (1/c) · Σ_{f=1}^{c} l_jf denotes the spectral mean of pixel X_j over the c bands.
5. The method according to claim 1, wherein step 4 specifically comprises:
Step 4.1: fuse the spatial-spectrum kernel matrix K_pp with the original spectral kernel matrix K_yp by weight assignment to obtain the multi-scale superpixel spatial-spectrum synthesis kernel matrix K_Ms-RPSK:
K_Ms-RPSK = μ·K_pp + (1 - μ)·K_yp
wherein μ is the weight-balance parameter;
Step 4.2: pair each pixel X_i in the training set with its label value Y_i to form a data pair, thereby obtaining the training sample set S = {(X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n)} of the SVM classifier, where n denotes the number of pixels in the training set;
Step 4.3: take the multi-scale superpixel spatial-spectrum synthesis kernel matrix as the kernel function of the SVM classifier, and train with the training sample set S to obtain the SVM classifier model.
CN202110507336.7A 2021-05-10 2021-05-10 Multi-scale superpixel hyperspectral remote sensing image classification method based on space-spectrum coupling characteristics Pending CN113205143A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110507336.7A CN113205143A (en) 2021-05-10 2021-05-10 Multi-scale superpixel hyperspectral remote sensing image classification method based on space-spectrum coupling characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110507336.7A CN113205143A (en) 2021-05-10 2021-05-10 Multi-scale superpixel hyperspectral remote sensing image classification method based on space-spectrum coupling characteristics

Publications (1)

Publication Number Publication Date
CN113205143A true CN113205143A (en) 2021-08-03

Family

ID=77030643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110507336.7A Pending CN113205143A (en) 2021-05-10 2021-05-10 Multi-scale superpixel hyperspectral remote sensing image classification method based on space-spectrum coupling characteristics

Country Status (1)

Country Link
CN (1) CN113205143A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200217A (en) * 2014-08-07 2014-12-10 哈尔滨工程大学 Hyperspectrum classification method based on composite kernel function
WO2018045626A1 (en) * 2016-09-07 2018-03-15 深圳大学 Super-pixel level information fusion-based hyperspectral image classification method and system
CN106503739A (en) * 2016-10-31 2017-03-15 中国地质大学(武汉) The target in hyperspectral remotely sensed image svm classifier method and system of combined spectral and textural characteristics
CN108446582A (en) * 2018-01-25 2018-08-24 西安电子科技大学 Hyperspectral image classification method based on textural characteristics and affine propagation clustering algorithm
CN112116017A (en) * 2020-09-25 2020-12-22 西安电子科技大学 Data dimension reduction method based on kernel maintenance

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUA WANG et al.: "Research on land use classification of hyperspectral images based on multiscale superpixels", Mathematical Biosciences and Engineering *
WU Xinxiao et al.: "Analysis and Recognition of Human Actions in Video" (《视频中人的动作分析与识别》), Beijing: Beijing Institute of Technology Press, 30 September 2019 *
CHEN Yushi, Harbin: Harbin Engineering University Press *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569641A (en) * 2021-06-28 2021-10-29 遥聚信息服务(上海)有限公司 Feature data extraction method and device based on remote sensing image

Similar Documents

Publication Publication Date Title
Kumar et al. Image based leaf segmentation and counting in rosette plants
Sun et al. SLIC_SVM based leaf diseases saliency map extraction of tea plant
CN110321963B (en) Hyperspectral image classification method based on fusion of multi-scale and multi-dimensional space spectrum features
Yuan et al. Remote sensing image segmentation by combining spectral and texture features
CN106339674B (en) The Hyperspectral Image Classification method that model is cut with figure is kept based on edge
CN102622607B (en) Remote sensing image classification method based on multi-feature fusion
CN107909039B (en) High-resolution remote sensing image earth surface coverage classification method based on parallel algorithm
CN106503739A (en) The target in hyperspectral remotely sensed image svm classifier method and system of combined spectral and textural characteristics
CN109325431B (en) Method and device for detecting vegetation coverage in feeding path of grassland grazing sheep
CN110399909A (en) A kind of hyperspectral image classification method based on label constraint elastic network(s) graph model
CN107067405B (en) Remote sensing image segmentation method based on scale optimization
CN104182767B (en) The hyperspectral image classification method that Active Learning and neighborhood information are combined
Huang et al. Local binary patterns and superpixel-based multiple kernels for hyperspectral image classification
CN104820841B (en) Hyperspectral classification method based on low order mutual information and spectrum context waveband selection
CN108427913A (en) The Hyperspectral Image Classification method of combined spectral, space and hierarchy information
CN108960276B (en) Sample expansion and consistency discrimination method for improving spectral image supervision classification performance
CN112949738A (en) Multi-class unbalanced hyperspectral image classification method based on EECNN algorithm
CN112733736A (en) Class imbalance hyperspectral image classification method based on enhanced oversampling
Xia et al. Land resource use classification using deep learning in ecological remote sensing images
Talasila et al. PLRSNet: a semantic segmentation network for segmenting plant leaf region under complex background
CN113205143A (en) Multi-scale superpixel hyperspectral remote sensing image classification method based on space-spectrum coupling characteristics
CN113343900A (en) Combined nuclear remote sensing image target detection method based on combination of CNN and superpixel
CN102622345B (en) High-precision land-utilization remote sensing updating technology with synergistic multisource spatio-temporal data
CN111882573A (en) Cultivated land plot extraction method and system based on high-resolution image data
Dogrusoz et al. Modeling urban structures using graph-based spatial patterns

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210803)