CN113850316A - Hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron - Google Patents

Hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron

Info

Publication number
CN113850316A
Authority
CN
China
Prior art keywords
spatial
image
features
hyperspectral image
spectral
Prior art date
Legal status
Pending
Application number
CN202111109093.8A
Other languages
Chinese (zh)
Inventor
谭熊
薛志祥
刘冰
魏祥坡
余旭初
张鹏强
张艳
高奎亮
左溪冰
孙一帆
Current Assignee
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force
Priority to CN202111109093.8A
Publication of CN113850316A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G06F18/254 - Fusion techniques of classification results, e.g. of results related to same input data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention belongs to the technical field of hyperspectral image classification, and specifically relates to a hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron. The method first extracts global spectral features from the hyperspectral image with a spectral multilayer perceptron; it then extracts local spatial features from the hyperspectral image with a spatial multilayer perceptron; finally, a multilayer perceptron fuses the global spectral features and local spatial features to jointly classify the hyperspectral image. The method extracts both spectral and spatial features from hyperspectral images, fuses these features effectively for joint classification, and achieves high classification accuracy.

Description

Hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron
Technical Field
The invention belongs to the technical field of hyperspectral image classification, and specifically relates to a hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron.
Background
Remote sensing technology is an important component of Earth observation: it can identify an observed scene from the characteristic reflectance of ground objects without physical contact. An imaging spectrometer acquires approximately continuous spectral information over wavelengths from the visible to the infrared, and the resulting hyperspectral images (HSIs) contain hundreds of diagnostic spectral bands that can be used for subsequent information extraction. Hyperspectral image classification, which aims to assign each pixel to a specific category, is the most active research direction in the field of hyperspectral remote sensing. It is currently widely applied in land survey, resource management, urban development and other fields.
The diversity and complexity of ground objects pose a major challenge for land cover classification of hyperspectral images. To meet this challenge, many researchers have in recent years studied the application of various deep learning models to this field. Convolutional neural networks (CNNs), which can extract several abstract discriminative features simultaneously, have attracted particular attention in hyperspectral land cover classification; CNN classification models can be divided into one-, two- and three-dimensional variants according to the type of input feature. To model the sequence relationships in hyperspectral images, Mou et al. proposed a spectral-spatial contextual classification model based on a recurrent neural network (RNN). However, CNN classification models are inefficient at exploring spatial relationships between known instantiation parameters (e.g., perspective, size and orientation). To better handle the spectral and spatial features of the spectral-spatial domain, Arun and Paoletti et al. proposed capsule network (CapsNet) models based on spectral-spatial capsules for hyperspectral image classification.
In fact, using the full spectral features of the spectral domain together with the local spatial features of the spatial domain facilitates the interpretation of remote sensing images. However, existing classification methods are limited in simultaneously modeling long-range correlations along the spectral dimension and extracting local spatial features from the spatial domain, and these diverse features are crucial for characterizing hyperspectral images. Moreover, because these features carry complementary information for land cover classification, spectral-spatial collaborative classification can significantly improve classification performance.
Disclosure of Invention
To address the problem that existing deep learning methods are limited in representing the spectral and spatial features of hyperspectral images, which in turn limits classification performance, the invention provides a hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a hyperspectral image classification method combined with a spectral space multilayer perceptron, which comprises the following steps:
extracting global spectral features from the hyperspectral image by using a spectral multilayer sensor;
extracting local spatial features from the hyperspectral image by using a spatial multilayer sensor;
and (4) fusing global spectral features and local spatial features by using a multilayer perceptron to perform combined classification on the hyperspectral images.
Further, the hyperspectral image patch ∈ R^(H×W×C) input to the spectral multilayer perceptron is expanded along the spatial dimension to obtain a two-dimensional input X ∈ R^(S×C), where S = HW, H and W denote the image height and width respectively, and C is the spectral dimension.
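As a minimal sketch of this expansion (the patch size and band count below are illustrative assumptions, not values fixed by the patent):

```python
import numpy as np

H, W, C = 9, 9, 270          # hypothetical patch size and band count
patch = np.random.rand(H, W, C)
X = patch.reshape(H * W, C)  # X in R^(S x C): rows are pixels, columns are bands
assert X.shape == (81, 270)  # S = HW = 81
```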
Further, the spectral multilayer perceptron includes two MLP blocks: a spatial-mixing MLP block and a channel-mixing MLP block;
in the spatial hybrid MLP block, the input X is transposed first, then the spatial hybrid multi-layer perceptron is applied to the columns of X and parameters are shared among all columns to realize
Figure BDA0003273444340000031
Mapping;
after the spatial mixing multi-layer perceptron, the channel mixing multi-layer perceptron acts on the X rows and shares parameters among all the rows to realize
Figure BDA0003273444340000032
And (6) mapping.
Further, each MLP block includes two fully-connected layers and a nonlinear activation function GELU, so the structure of the two MLP blocks is expressed as:

U_{*,i} = X_{*,i} + W_2 σ(W_1 LayerNorm(X)_{*,i}),  i = 1, …, C
Y_{j,*} = U_{j,*} + W_4 σ(W_3 LayerNorm(U)_{j,*}),  j = 1, …, S      (1)

where σ is the nonlinear activation function, D_S is the total number of pixels in the patch image, D_C is the spectral dimension, U_{*,i} is the output of the spatial-mixing MLP, Y_{j,*} is the output of the channel-mixing MLP, X_{*,i} is the i-th column vector of the patch image, U_{j,*} is the j-th row vector of the patch image, W_1, W_2, W_3 and W_4 are the weight matrices of the two mixing MLPs, and i and j index the columns and rows of the patch image respectively.
Further, the input to the spatial multilayer perceptron is image patches that have undergone dimension reduction and spatial feature extraction using invariant attribute profiles.
Further, the invariant attribute profile extracts spatial features of the hyperspectral image in both the spatial domain and the frequency domain. Specifically, in the spatial domain, an isotropic filter or convolution kernel is used to extract robust convolutional features of the hyperspectral image, after which a spatial clustering method extracts the spatially invariant features; in the frequency domain, translation- and rotation-invariant features are modeled with continuous gradient histograms in Fourier polar coordinates; finally, the two are combined into the spatial-frequency invariant features of the hyperspectral image.
Further, the spatial multilayer perceptron extracts local spatial features from the hyperspectral image through the following steps:
performing dimension reduction and spatial feature extraction on the hyperspectral image, then dividing the processed image into several image patches;
taking S non-overlapping image patches, each of dimension C, as input to generate a two-dimensional input X ∈ R^(S×C); if the resolution of the original hyperspectral image is (H, W) and the resolution of each patch is (P, P), the number of patches is S = HW/P², and all patches are linearly projected with the same projection matrix;
extracting local spatial features from each image patch with a spatial-mixing MLP block and a channel-mixing MLP block, where the spatial-mixing MLP acts on each row of X and the channel-mixing MLP acts on each column of X, the two blocks taking the form of equation (1).
The invention also provides a hyperspectral image classification device using a joint spectral-spatial multilayer perceptron, comprising:
a global spectral feature extraction module for extracting global spectral features from the hyperspectral image with a spectral multilayer perceptron;
a local spatial feature extraction module for extracting local spatial features from the hyperspectral image with a spatial multilayer perceptron;
and a classification module for fusing the global spectral features and local spatial features with a multilayer perceptron to jointly classify the hyperspectral image.
Compared with the prior art, the invention has the following advantages:
the invention relates to a hyperspectral image classification method of a combined spectrum space multilayer sensor, which mainly comprises the following two aspects: the method has the advantages that firstly, the multi-layer perceptron is utilized to extract the spectral characteristics and the spatial characteristics of the hyperspectral image, particularly, the multi-layer perceptron is utilized to extract the global spectral characteristics, the multi-layer perceptron is utilized to extract the local spatial characteristics, the global spectral information and the local spatial information of the hyperspectral image can be effectively represented, secondly, the method can effectively fuse various spectra and spatial characteristics and carry out combined classification, and classification performance is further improved; therefore, the spectrum space multilayer sensor network can extract more diagnostic features, and can effectively fuse various features to perform land coverage classification so as to obtain higher classification accuracy.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of a hyperspectral image classification method using a joint spectral-spatial multilayer perceptron according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the structure of the spectral multilayer perceptron according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the spatial multilayer perceptron according to an embodiment of the present invention;
FIG. 4 shows classification result maps of different classification methods on the WHU-Hi-LongKou dataset according to an embodiment of the present invention;
FIG. 5 shows classification result maps of different classification methods on the Houston2013 dataset according to an embodiment of the present invention, where (a) false-color image; (b) ground reference data; (c) SVM; (d) CDCNN; (e) SSRN; (f) DBDA; (g) Spectral MLP; (h) Spatial MLP; (i) SSMLP.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In view of the limitations of deep learning classification methods for hyperspectral images in representing spectral and spatial features, the present embodiment provides a hyperspectral image classification method using a joint spectral-spatial multilayer perceptron (SSMLP). As shown in FIG. 1, the method adopts a dual-branch network combining a spectral multilayer perceptron and a spatial multilayer perceptron to extract the rich spectral feature information and spatial feature information in a hyperspectral image for land cover classification, and specifically includes the following steps:
in step S11, the diagnostic spectral features in the hyperspectral image are helpful for improving the classification performance of the land cover categories, so the spectral multilayer perceptron is used to extract global spectral features from the hyperspectral image, as shown in fig. 2.
Specifically, because the spectral bands of a hyperspectral image have a natural sequence structure, the input hyperspectral image patch ∈ R^(H×W×C) is expanded along the spatial dimension to obtain a two-dimensional input X ∈ R^(S×C), where S = HW, H and W denote the image height and width respectively, and C is the spectral dimension.

The spectral multilayer perceptron contains two MLP blocks: a spatial-mixing MLP block (MLP1) and a channel-mixing MLP block (MLP2). In the spatial-mixing MLP block, the input X is first transposed; the spatial-mixing MLP is then applied to the columns of X, sharing parameters across all columns, to realize the mapping R^S → R^S. After the spatial-mixing MLP, the channel-mixing MLP acts on the rows of X, sharing parameters across all rows, to realize the mapping R^C → R^C.
Each multilayer perceptron block contains two fully-connected layers and one nonlinear activation function GELU (applied elementwise to the input), so the structure of the two MLP blocks is expressed as:

U_{*,i} = X_{*,i} + W_2 σ(W_1 LayerNorm(X)_{*,i}),  i = 1, …, C
Y_{j,*} = U_{j,*} + W_4 σ(W_3 LayerNorm(U)_{j,*}),  j = 1, …, S      (1)

where σ is the nonlinear activation function GELU, expressed as GELU(x) = xΦ(x), in which Φ(x) is the cumulative distribution function of the standard normal distribution; it can be approximated as

GELU(x) ≈ 0.5x(1 + tanh(√(2/π)(x + 0.044715x³))).

D_S and D_C are the adjustable hidden widths of the spatial-mixing and channel-mixing MLPs respectively; since the input patch is linearly flattened into a two-dimensional input, D_S is the total number of pixels in the patch image and D_C is the spectral dimension. U_{*,i} is the output of the spatial-mixing MLP, Y_{j,*} is the output of the channel-mixing MLP, X_{*,i} is the i-th column vector of the patch image, U_{j,*} is the j-th row vector of the patch image, W_1, W_2, W_3 and W_4 are the weight matrices of the two mixing MLPs, and i and j index the columns and rows of the patch image respectively. The spectral multilayer perceptron has a receptive field spanning the entire spectral dimension, so it can extract diagnostic spectral features over the whole waveband.
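For concreteness, the following is a minimal forward-pass sketch of one such block in plain NumPy, following equation (1). The sizes and weight initialization are illustrative assumptions, and LayerNorm is taken over the row (spectral) axis; the patent specifies the block structure, not these details.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU(x) = x * Phi(x), as given above
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def layer_norm(x, eps=1e-6):
    # normalize each row to zero mean and unit variance
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

def mixer_block(X, W1, W2, W3, W4):
    """One block per equation (1): spatial mixing (MLP1) then channel mixing
    (MLP2), each with a skip connection and layer normalization."""
    U = X + W2 @ gelu(W1 @ layer_norm(X))      # MLP1: mixes along the S axis
    Y = U + gelu(layer_norm(U) @ W3.T) @ W4.T  # MLP2: mixes along the C axis
    return Y

# Illustrative sizes; here D_S = S and D_C = C, matching the flattened patch
S, C, D_S, D_C = 81, 270, 81, 270
rng = np.random.default_rng(0)
X = rng.standard_normal((S, C))
W1 = 0.02 * rng.standard_normal((D_S, S))   # spatial-mixing weights
W2 = 0.02 * rng.standard_normal((S, D_S))
W3 = 0.02 * rng.standard_normal((D_C, C))   # channel-mixing weights
W4 = 0.02 * rng.standard_normal((C, D_C))
print(mixer_block(X, W1, W2, W3, W4).shape)  # (81, 270)
```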
In step S12, since local spatial features strongly influence land cover classification performance in the spatial dimension, local spatial features are extracted from the hyperspectral image with a spatial multilayer perceptron, as shown in FIG. 3.
Specifically, spatial feature extraction and dimension reduction are first performed on the hyperspectral image using an invariant attribute profile, and the processed image is then divided into a number of image patches.

Then, S non-overlapping image patches, each of dimension C, are taken as input to generate a two-dimensional input X ∈ R^(S×C). If the resolution of the original hyperspectral image is (H, W) and the resolution of each patch is (P, P), the number of patches is S = HW/P²; all patches are linearly projected with the same projection matrix.

Finally, local spatial features are extracted from each image patch with a spatial-mixing MLP block and a channel-mixing MLP block, where the spatial-mixing MLP acts on each row of X and the channel-mixing MLP acts on each column of X; the channel-mixing MLP provides position invariance, which is very important for extracting local spatial features. The expressions of the two blocks are as in equation (1). To train the network model better, the spatial multilayer perceptron also adopts skip connections and layer normalization.
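A hedged sketch of this tokenization step, assuming the first three IAP components as the input feature map; the image size, patch size and projection width below are illustrative, not specified verbatim by the text:

```python
import numpy as np

def patchify(img, P):
    """Split an H x W x B feature map into non-overlapping P x P patches,
    each flattened into a row vector of length P*P*B."""
    H, W, B = img.shape
    assert H % P == 0 and W % P == 0, "patch size must divide the image"
    patches = img.reshape(H // P, P, W // P, P, B).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, P * P * B)  # (S, P*P*B), S = HW / P^2

H, W, B, P, C = 144, 144, 3, 16, 256        # illustrative sizes
rng = np.random.default_rng(0)
iap = rng.standard_normal((H, W, B))        # stand-in for the first three IAPs
tokens = patchify(iap, P)                   # S = 144*144/16^2 = 81 patches
W_proj = 0.02 * rng.standard_normal((P * P * B, C))
X = tokens @ W_proj                         # one shared linear projection -> (S, C)
print(X.shape)                              # (81, 256)
```

The resulting X ∈ R^(S×C) can then be passed through mixing blocks of the same form as the mixer_block sketch above.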
The invariant attribute profile extracts spatial features of the hyperspectral image in both the spatial and frequency domains. Specifically, in the spatial domain, an isotropic filter or convolution kernel is used to extract robust convolutional features of the hyperspectral image, after which a spatial clustering method extracts the spatially invariant features; in the frequency domain, translation- and rotation-invariant features are modeled with continuous gradient histograms in Fourier polar coordinates; finally, the two are combined into the spatial-frequency invariant features of the hyperspectral image. The specific steps of invariant attribute profile feature extraction are as follows:

First, group the hyperspectral image with a clustering algorithm such as k-means and compute its horizontal and vertical gradients; the number of groups can be determined by cross-validation on the labeled sample set.

Second, extract polar Fourier features from the gradients.

Third, construct region descriptors in the spatial and frequency domains, i.e. build region-based representation models with an isotropic spatial filter and with Fourier convolution kernels respectively.

Fourth, generate the invariant attribute profile features F_IAPs = [F_SIFs, F_FIFs].

Here F_SIFs = [F_SIF^1, …, F_SIF^N] denotes the spatially invariant features, where N is the number of pixels; each pixel's feature is the average of the robust convolutional features over its superpixel,

F_SIF^i = (1/N_q) Σ_{j ∈ Φ_{i,q}} F_RCF^j,  with  F_RCF = [F_1, … F_k, …, F_D]  and  F_k = I_k * K_conv,

where I_k denotes the k-th band, D the number of bands, K_conv the convolution kernel, N_q the number of pixels in the q-th superpixel, and Φ_{i,q} the set of pixels (the q-th superpixel) containing the i-th target pixel.

F_FIFs denotes the frequency-domain invariant features, assembled from the region descriptors of the convolution kernels. Applying the polar Fourier transform to each pixel of the input image yields m amplitude features corresponding to m different Fourier orders; |F_m(x, y)| is the absolute rotation-invariant feature, where F_m(x, y) denotes the m-th order Fourier transform and m = m'; F_m^{r_1}(x, y) (F_{m'}^{r_2}(x, y))* is the relative rotation-invariant feature, with m ≠ m', where r_1 and r_2 denote the radii of two different convolution kernels and the superscript * denotes the complex conjugate.
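As a loose sketch of the spatial-domain half of this pipeline only (the frequency-domain polar Fourier features are omitted), assuming a Gaussian filter as the isotropic kernel and k-means clusters standing in for superpixels; none of these concrete choices are dictated by the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def spatial_invariant_features(cube, k=8, sigma=2.0, seed=0):
    """Isotropic filtering per band, then per-cluster averaging: each pixel
    takes the mean filtered feature of its cluster (spatially invariant)."""
    H, W, D = cube.shape
    # robust convolutional features: isotropic (Gaussian) filter, band by band
    f_rcf = np.stack([gaussian_filter(cube[..., d], sigma) for d in range(D)], axis=-1)
    # group pixels by spectral similarity (k-means stands in for superpixels)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(
        cube.reshape(-1, D))
    flat = f_rcf.reshape(-1, D)
    sifs = np.empty_like(flat)
    for q in range(k):
        mask = labels == q
        if mask.any():
            sifs[mask] = flat[mask].mean(axis=0)  # cluster-mean per pixel
    return sifs.reshape(H, W, D)

cube = np.random.rand(64, 64, 10)              # tiny synthetic cube
print(spatial_invariant_features(cube).shape)  # (64, 64, 10)
```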
In step S13, after the spectral and spatial features have been extracted, a global average pooling operation is applied and a multilayer perceptron fuses the global spectral features and local spatial features to jointly classify the hyperspectral image.
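A minimal sketch of such a fusion head, under the assumptions that pooling is taken over each branch's token axis, that the pooled vectors are concatenated, and that a small ReLU MLP with softmax produces the class probabilities (the layer sizes and concatenation-based fusion are illustrative, not fixed by the text):

```python
import numpy as np

def fuse_and_classify(spec_tokens, spat_tokens, W_h, W_o):
    z = np.concatenate([spec_tokens.mean(axis=0),   # global average pooling over
                        spat_tokens.mean(axis=0)])  # each branch's tokens
    h = np.maximum(W_h @ z, 0.0)                    # hidden fusion layer (ReLU assumed)
    logits = W_o @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                              # softmax class probabilities

rng = np.random.default_rng(0)
spec = rng.standard_normal((81, 270))               # spectral-branch tokens (S x C)
spat = rng.standard_normal((81, 256))               # spatial-branch tokens
W_h = 0.02 * rng.standard_normal((128, 270 + 256))
W_o = 0.02 * rng.standard_normal((9, 128))          # e.g. 9 LongKou land cover classes
probs = fuse_and_classify(spec, spat, W_h, W_o)
print(probs.shape, float(probs.sum()))              # (9,) 1.0
```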
In the following, classification experimental studies are performed using two sets of reference hyperspectral image datasets.
1. Description of data
(1) WHU-Hi-LongKou hyperspectral dataset
The WHU-Hi-LongKou hyperspectral dataset is a hyperspectral image of Longkou town, Hubei province, acquired by a Headwall Nano-Hyperspec imaging spectrometer. The wavelength range is 400-1000 nm over 270 bands, the image size is 550 × 400 pixels, and the spatial resolution is about 0.463 m. The dataset depicts a typical agricultural scene containing 9 land cover categories; the ground truth categories and corresponding sample numbers are shown in Table 1.
TABLE 1 Land cover types and samples of the WHU-Hi-LongKou dataset
(2) Houston2013 data set
The Houston2013 dataset is a hyperspectral image of the University of Houston campus and surrounding areas acquired by the National Center for Airborne Laser Mapping (NCALM). The image has 144 bands over the wavelength range 380-1050 nm, a size of 349 × 1905 pixels, and a spatial resolution of 2.5 m. It covers a typical urban scene with 15 separable land cover classes; detailed sample information is shown in Table 2.
TABLE 2 Land cover types and samples of the Houston2013 dataset
2. Parameter setting
To evaluate the hyperspectral image classification performance of the proposed method, we performed comparative experiments with the widely used SVM (Melgani and Bruzzone 2004), CDCNN (Lee and Kwon 2017), SSRN (Zhong et al. 2017) and DBDA (Li et al. 2020). To keep the spectral and spatial features as balanced as possible, we fix the inputs of the spectral and spatial branches: in SSMLP, the first three IAPs are selected as the input of the spatial branch, and the spatial dimensions in both branches are set to 256. All classification experiments are repeated 10 times.
3. Results of the experiment
The means and standard deviations of the overall accuracy (OA), average accuracy (AA), kappa coefficient (κ) and per-class accuracies of the different classification methods on the WHU-Hi-LongKou and Houston2013 datasets are shown in Tables 3 and 4 respectively. From the tables: (1) deeper models achieve higher classification accuracy than models with fewer layers, because deeper networks can extract more discriminative features for classification; (2) spectral-spatial classification methods (SSRN, DBDA and SSMLP) outperform spectral-only or spatial-only methods (SVM, CDCNN, Spectral MLP and Spatial MLP), which confirms that the proposed method is an effective spectral-spatial classification model; (3) among all methods, SSMLP attains the highest classification accuracy, indicating that the proposed method has strong feature expression and classification capability on hyperspectral images.
TABLE 3 OA, AA, kappa coefficients and class accuracies (%) of different methods on the WHU-Hi-LongKou dataset
TABLE 4 OA, AA, kappa coefficients and class accuracies (%) of different methods on the Houston2013 dataset
To evaluate the classification performance visually, the classification result maps obtained by the different methods on the two datasets are shown in FIGS. 4 and 5 respectively. The ground truth sample distribution map is displayed alongside the classification maps to ease visual comparison, with each color corresponding to a particular land cover category. From these maps, the classification map generated by SSMLP shows the least class noise and the best classification effect, indicating that the SSMLP network can extract highly discriminative spectral and spatial features from a hyperspectral image and fuse them effectively to obtain better classification results. The experiments show that the overall accuracy of SSMLP reaches 99.12% and 99.49% on the two datasets respectively, verifying the effectiveness and superiority of the method in collaborative classification.
Corresponding to the above hyperspectral image classification method, this embodiment also provides a hyperspectral image classification device using a joint spectral-spatial multilayer perceptron, comprising:
a global spectral feature extraction module for extracting global spectral features from the hyperspectral image with a spectral multilayer perceptron;
a local spatial feature extraction module for extracting local spatial features from the hyperspectral image with a spatial multilayer perceptron;
and a classification module for fusing the global spectral features and local spatial features with a multilayer perceptron to jointly classify the hyperspectral image.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A hyperspectral image classification method using a joint spectral-spatial multilayer perceptron, characterized by comprising the following steps:
extracting global spectral features from the hyperspectral image with a spectral multilayer perceptron;
extracting local spatial features from the hyperspectral image with a spatial multilayer perceptron;
fusing the global spectral features and local spatial features with a multilayer perceptron to jointly classify the hyperspectral image.
2. The hyperspectral image classification method according to claim 1, characterized in that the hyperspectral image patch ∈ R^(H×W×C) input to the spectral multilayer perceptron is expanded along the spatial dimension to obtain a two-dimensional input X ∈ R^(S×C), where S = HW, H and W denote the image height and width respectively, and C is the spectral dimension.
3. The hyperspectral image classification method according to claim 2, characterized in that the spectral multilayer perceptron comprises two MLP blocks, namely a spatial-mixing MLP block and a channel-mixing MLP block;
in the spatial-mixing MLP block, the input X is first transposed, and the spatial-mixing MLP is then applied to the columns of X, sharing parameters across all columns, to realize the mapping R^S → R^S;
after the spatial-mixing MLP, the channel-mixing MLP acts on the rows of X, sharing parameters across all rows, to realize the mapping R^C → R^C.
4. The hyperspectral image classification method according to claim 3, characterized in that each MLP block comprises two fully-connected layers and a nonlinear activation function GELU, so that the structure of the two MLP blocks is expressed as:

U_{*,i} = X_{*,i} + W_2 σ(W_1 LayerNorm(X)_{*,i}),  i = 1, …, C
Y_{j,*} = U_{j,*} + W_4 σ(W_3 LayerNorm(U)_{j,*}),  j = 1, …, S      (1)

where σ is the nonlinear activation function, D_S is the total number of pixels in the patch image, D_C is the spectral dimension, U_{*,i} is the output of the spatial-mixing MLP, Y_{j,*} is the output of the channel-mixing MLP, X_{*,i} is the i-th column vector of the patch image, U_{j,*} is the j-th row vector of the patch image, W_1, W_2, W_3 and W_4 are the weight matrices of the two mixing MLPs, and i and j index the columns and rows of the patch image respectively.
5. The hyperspectral image classification method according to claim 4, characterized in that the input to the spatial multilayer perceptron is image patches that have undergone dimension reduction and spatial feature extraction using invariant attribute profiles.
6. The hyperspectral image classification method according to claim 5, characterized in that the invariant attribute profile extracts spatial features of the hyperspectral image in both the spatial and frequency domains; specifically, in the spatial domain, an isotropic filter or convolution kernel is used to extract robust convolutional features of the hyperspectral image, after which a spatial clustering method extracts the spatially invariant features; in the frequency domain, translation- and rotation-invariant features are modeled with continuous gradient histograms in Fourier polar coordinates; finally, the two are combined into the spatial-frequency invariant features of the hyperspectral image.
7. The hyperspectral image classification method according to claim 6, characterized in that the spatial multilayer perceptron extracts local spatial features from the hyperspectral image through the following steps:
performing dimension reduction and spatial feature extraction on the hyperspectral image with an invariant attribute profile, then dividing the processed image into several image patches;
taking S non-overlapping image patches, each of dimension C, as input to generate a two-dimensional input X ∈ R^(S×C); if the resolution of the original hyperspectral image is (H, W) and the resolution of each patch is (P, P), the number of patches is S = HW/P², and all patches are linearly projected with the same projection matrix;
extracting local spatial features from each image patch with a spatial-mixing MLP block and a channel-mixing MLP block, where the spatial-mixing MLP acts on each row of X and the channel-mixing MLP acts on each column of X, the two blocks taking the form of equation (1).
8. A hyperspectral image classification device using a joint spectral-spatial multilayer perceptron, characterized by comprising:
a global spectral feature extraction module for extracting global spectral features from the hyperspectral image with a spectral multilayer perceptron;
a local spatial feature extraction module for extracting local spatial features from the hyperspectral image with a spatial multilayer perceptron;
and a classification module for fusing the global spectral features and local spatial features with a multilayer perceptron to jointly classify the hyperspectral image.
CN202111109093.8A 2021-09-22 2021-09-22 Hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron Pending CN113850316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111109093.8A CN113850316A (en) 2021-09-22 2021-09-22 Hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron

Publications (1)

Publication Number Publication Date
CN113850316A (en) 2021-12-28

Family

ID=78974961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111109093.8A Pending CN113850316A (en) Hyperspectral image classification method and device using a joint spectral-spatial multilayer perceptron

Country Status (1)

Country Link
CN (1) CN113850316A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758203A (en) * 2022-03-31 2022-07-15 长江三峡技术经济发展有限公司 Residual dense visual transformation method and system for hyperspectral image classification
CN114758203B (en) * 2022-03-31 2023-01-10 长江三峡技术经济发展有限公司 Residual intensive visual transformation method and system for hyperspectral image classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination