CN112232137A - Hyperspectral image processing method and device - Google Patents

Hyperspectral image processing method and device

Info

Publication number
CN112232137A
CN112232137A (application number CN202011018797.XA)
Authority
CN
China
Prior art keywords
image
features
spatial
spectral
hyperspectral image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011018797.XA
Other languages
Chinese (zh)
Inventor
吴发国
张筱
牛子佳
姚望
郑志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202011018797.XA priority Critical patent/CN112232137A/en
Publication of CN112232137A publication Critical patent/CN112232137A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a hyperspectral image processing method, a hyperspectral image processing device, an electronic device, and a computer-readable medium. The processing method comprises the following steps: performing dimensionality reduction on the original hyperspectral image to obtain image data and an image label; extracting spatial features from the image data and the image label using a DeepLab network; processing the original hyperspectral image with a stacked self-encoder to extract spectral features; and performing weighted fusion on the spatial features and the spectral features to obtain spatial-spectral fusion features. The method is unsupervised: the spatial features of the hyperspectral image are extracted with a DeepLab network structure and the spectral features with a stacked self-encoder, so no labeled samples of the hyperspectral image are required.

Description

Hyperspectral image processing method and device
Technical Field
The application relates to the technical field of hyperspectral remote sensing, and in particular to a hyperspectral image processing method and device, an electronic device, and a computer-readable medium.
Background
In the hyperspectral images obtained by hyperspectral remote sensing technology, each pixel contains abundant spectral information, and the spectral characteristics of a pixel can be used to distinguish different ground objects in a target area, so that ground objects that cannot be identified in traditional multispectral images can be identified and classified through hyperspectral images. Hyperspectral remote sensing technology is therefore widely applied in fields such as agriculture, forestry, and minerals. Hyperspectral image classification is an important technology in this respect: through a series of processing steps, valuable information can be acquired from the hyperspectral image.
Traditional hyperspectral image classification methods mainly classify images using extracted spectral-dimension features, relying on the abundant spectral information. However, such classification methods ignore the dependency relationships among the pixels of a hyperspectral image; in general, the categories of adjacent pixels in an image are correlated to some degree. Methods combining spectral and spatial features have therefore been applied to the field of hyperspectral image classification. In addition, some deep learning algorithms and network structures have been applied to this field and have achieved superior results.
However, most deep learning network structures contain a large number of parameters, and training these parameters requires a large number of labeled samples. Manually labeling samples in hyperspectral images is difficult and requires substantial manpower and time. The resulting shortage of labeled hyperspectral samples severely limits the application of deep learning algorithms in the field of hyperspectral image classification.
Disclosure of Invention
Based on the above, the present application provides an unsupervised hyperspectral image processing method: the spatial features of the hyperspectral image are extracted using a DeepLab network structure, the spectral features are extracted using a stacked self-encoder, and the spatial-spectral fusion features are finally classified using k-means clustering, so that no labeled samples of the hyperspectral image are needed.
According to a first aspect of the present application, there is provided a method comprising:
performing dimensionality reduction on an original hyperspectral image to obtain image data and an image label;
extracting spatial features from the image data and the image label using a DeepLab network;
processing the original hyperspectral image with a stacked self-encoder to extract spectral features;
and performing weighted fusion on the spatial features and the spectral features to obtain spatial-spectral fusion features.
According to some embodiments of the present application, performing dimensionality reduction on the original hyperspectral image to obtain the image data and the image label includes:
performing dimensionality reduction on the original hyperspectral image and retaining the first three principal components as the image data;
and performing dimensionality reduction on the original hyperspectral image again, retaining the first principal component as the image label.
According to some embodiments of the present application, extracting spatial features from the image data and the image label using a DeepLab network includes:
cutting the image data and the image label into small blocks and inputting the blocks into a DeepLab network for training;
and inputting the image data into the trained DeepLab network to extract the spatial features.
According to some embodiments of the present application, extracting spatial features from the image data and the image label using a DeepLab network further includes:
before training the DeepLab network, converting each pixel value in the image label from a floating-point number to an integer using the following formula:
(Formula shown as image BDA0002699995190000021 in the original filing.)
according to some embodiments of the application, the weighted fusion of the spatial feature and the spectral feature obtains a spatial-spectral fusion feature, including:
performing one-dimensional reconstruction on the spatial features;
normalizing the spectral features and the reconstructed spatial features;
and performing weighted fusion on the normalized spatial features and spectral features.
According to some embodiments of the application, the normalizing comprises:
the z-score is used for normalization.
According to some embodiments of the application, the weighted fusion of the normalized spatial and spectral features comprises:
the weighted fusion is performed according to the following formula:

S = [(1 − λ)f(S1), λf(S2)]

where f(S1) is the normalized spectral feature, f(S2) is the normalized spatial feature, λ is a weight coefficient, and S is the spatial-spectral fusion feature.
According to some embodiments of the application, the processing method further comprises: classifying the spatial-spectral fusion features.
According to some embodiments of the application, the classification process comprises: performing the classification using a k-means clustering algorithm.
According to some embodiments of the application, the dimension reduction process comprises: PCA dimensionality reduction processing.
According to a second aspect of the present application, there is provided a hyperspectral image processing apparatus comprising:
the preprocessing module is used for performing dimensionality reduction on the original hyperspectral image to obtain image data and an image label;
the first feature extraction module is used for extracting spatial features from the image data and the image label using a DeepLab network;
the second feature extraction module is used for processing the original hyperspectral image with a stacked self-encoder to extract spectral features;
and the feature fusion module is used for performing weighted fusion on the spatial features and the spectral features to obtain spatial-spectral fusion features.
According to some embodiments of the application, the first feature extraction module comprises:
the training module is used for cutting the image data and the image label into small blocks and inputting them into a DeepLab network for training;
and the extraction module is used for inputting the image data into the trained DeepLab network to extract the spatial features.
According to some embodiments of the application, the feature fusion module comprises:
a normalization module for normalizing the spatial features and the spectral features;
and the fusion module is used for performing weighted fusion on the normalized spatial features and the normalized spectral features.
According to some embodiments of the application, the processing device further comprises:
a classification processing module for classifying the spatial-spectral fusion features using a k-means clustering algorithm.
According to a third aspect of the present application, there is provided a hyperspectral image processing electronic device comprising:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the processing method described above.
According to a fourth aspect of the present application, there is provided a computer-readable medium, on which a computer program is stored, which program, when executed by a processor, implements the processing method described above.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort and without departing from the protection scope of the present application.
Fig. 1 shows a flow chart of a hyperspectral image processing method according to an example embodiment of the application.
Fig. 2 illustrates a DeepLab network architecture diagram according to an exemplary embodiment of the present application.
Fig. 3 shows a schematic diagram of a stacked self-encoder according to an example embodiment of the present application.
FIG. 4 shows a flow chart of a method for processing a hyperspectral image according to another example embodiment of the application.
FIG. 5 is a data processing process diagram of a hyperspectral image processing method according to an exemplary embodiment of the application.
FIG. 6 shows a block diagram of a hyperspectral image processing apparatus according to an example embodiment of the application.
FIG. 7 shows a block diagram of a hyperspectral image processing apparatus according to another example embodiment of the application.
FIG. 8 shows a block diagram of hyperspectral image processing electronics according to an example embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
It will be understood that, although the terms first, second, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the present concepts. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Those skilled in the art will appreciate that the drawings are merely schematic representations of exemplary embodiments, which may not be to scale. The blocks or flows in the drawings are not necessarily required to practice the present application and therefore should not be used to limit the scope of the present application.
To address the problem that existing deep-learning-based hyperspectral image classification methods combining spectral and spatial features require a large number of labeled samples, the present application provides an unsupervised deep-learning-based hyperspectral image classification method: spatial and spectral features are extracted and cluster analysis is performed on the spatial-spectral fusion features, so the classification result of the hyperspectral image can be obtained without a large number of labeled samples.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flow chart of a hyperspectral image processing method according to an example embodiment of the application.
As shown in fig. 1, the present application provides a method for processing a hyperspectral image, including:
in step S110, a dimensionality reduction process is performed on the original hyperspectral image to obtain image data and an image tag.
According to the hyperspectral image processing method of the present application, data preprocessing is performed on the original hyperspectral image: the spectral information of the original hyperspectral image is reduced in dimensionality and the principal components are extracted. Spatial feature extraction then uses the extracted principal components as the image data and the image label. According to some embodiments of the present application, performing dimensionality reduction on the original hyperspectral image to obtain the image data and the image label includes: performing dimensionality reduction on the original hyperspectral image and retaining the first three principal components as the image data; and performing dimensionality reduction on the original hyperspectral image again, retaining the first principal component as the image label. According to some embodiments of the present application, the dimensionality reduction may employ Principal Component Analysis (PCA).
Taking a hyperspectral image of size X × Y × Z as an example, PCA is first used to perform the first dimensionality reduction on the original hyperspectral image, after which the first three principal components are retained, i.e., the size of the original hyperspectral image is reduced to X × Y × 3. The original hyperspectral image is then reduced again, retaining the first principal component, i.e., the size of the original image is reduced to X × Y. For example, an original hyperspectral image of size 610 × 340 × 103 yields an image of size 610 × 340 × 3 (the first three principal components) after the first PCA, and an image of size 610 × 340 (the first principal component) after the second PCA.
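For illustration, the two PCA passes can be sketched in a few lines of Python. The following is a minimal, non-authoritative example using scikit-learn; the function name pca_preprocess and the random stand-in cube are illustrative and not part of the original disclosure.

```python
# Minimal sketch of the two PCA passes described above (scikit-learn).
# Shapes follow the 610 x 340 x 103 example; names are illustrative.
import numpy as np
from sklearn.decomposition import PCA

def pca_preprocess(cube):
    """Reduce a hyperspectral cube (X, Y, Z) to image data (X, Y, 3)
    and an image label (X, Y) via two PCA passes."""
    X, Y, Z = cube.shape
    flat = cube.reshape(-1, Z)                  # pixels as rows: (X*Y, Z)
    # First pass: keep the first three principal components as image data.
    data = PCA(n_components=3).fit_transform(flat).reshape(X, Y, 3)
    # Second pass: keep only the first principal component as the image label.
    label = PCA(n_components=1).fit_transform(flat).reshape(X, Y)
    return data, label

cube = np.random.rand(610, 340, 103)            # stand-in for a real scene
data, label = pca_preprocess(cube)
print(data.shape, label.shape)                  # (610, 340, 3) (610, 340)
```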
In step S120, a DeepLab network is used to extract spatial features from the image data and the image label.
DeepLab is an excellent method in the field of semantic segmentation, and Fig. 2 shows the structure of a DeepLab network according to an exemplary embodiment of the present application. As shown in Fig. 2, the first four convolutional layers (convolutional layers 1 to 4) and the first atrous convolutional layer (atrous convolutional layer 5) in the DeepLab network structure each contain several sub-layers, such as several convolutional layers and one pooling layer. In the embodiment of the present application, the convolution kernels in these five layers are all of size 3 × 3. In convolutional layer 4 and atrous convolutional layer 5, the stride of the pooling layer is 1. The kernel dilation rate r of atrous convolutional layer 5 is 2, which is equivalent to inserting one hole (i.e., a zero value) between adjacent values of its convolution kernels.
The parallel structure connected after atrous convolutional layer 5 is an Atrous Spatial Pyramid Pooling (ASPP) module. Each of its branches uses the same kernel size but a different dilation rate. For example, the kernel size of the four parallel atrous convolutional layers 6 is 3 × 3, while the parallel convolutional layers 7 and 8 have a kernel size of 1 × 1. The dilation rates r of the four parallel atrous convolutional layers 6 are 6, 12, 18, and 24, respectively. Multi-scale feature extraction is performed on the input image according to the kernel sizes and dilation rates, and the features finally extracted by each branch are fused and fed into a final softmax layer to obtain the spatial features of the image.
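As a hedged sketch of the ASPP head just described, the following PyTorch module builds four parallel 3 × 3 atrous convolutions with dilation rates 6, 12, 18, and 24, fuses their outputs by summation, and applies a softmax. The input channel count (512) and the number of classes (9) are illustrative assumptions, not values taken from the patent.

```python
# Hedged PyTorch sketch of the ASPP structure: four parallel 3x3 atrous
# convolutions (dilation 6, 12, 18, 24) fused before a final softmax.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, num_classes):
        super().__init__()
        # padding = dilation keeps the spatial size unchanged for a 3x3 kernel.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, num_classes, kernel_size=3,
                      padding=rate, dilation=rate)
            for rate in (6, 12, 18, 24)
        ])

    def forward(self, x):
        out = sum(branch(x) for branch in self.branches)  # fuse the branches
        return torch.softmax(out, dim=1)                  # per-pixel class scores

aspp = ASPP(in_ch=512, num_classes=9)    # channel/class counts are placeholders
scores = aspp(torch.randn(1, 512, 45, 45))
print(scores.shape)                      # torch.Size([1, 9, 45, 45])
```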
In the spatial feature extraction stage, the first three principal components obtained in the data preprocessing stage are used as the image data and the first principal component as the image label, which are input to a DeepLab network for training; the trained DeepLab network then processes the image data to extract the spatial features. In the embodiment of the present application, the size of the image data is 610 × 340 × 3 and the size of the image label is 610 × 340.
In the process of extracting spatial features from the image data and the image label using the DeepLab network, the image data and the image label are first cut into small blocks, which are then input into the DeepLab network for training.
After PCA dimensionality reduction, the value of each pixel in the first principal component used as the image label is a floating-point number, so each pixel value needs to be converted into an integer by the following formula:
(Formula shown as image BDA0002699995190000071 in the original filing.)
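Since the conversion formula itself is not recoverable from the text, the following is only a hypothetical realization: one common way to turn a floating-point principal component into C integer labels is min-max scaling followed by rounding. This is an assumption for illustration, not the formula from the original filing.

```python
# Hypothetical float-to-integer label conversion (the patented formula is
# shown only as an image): min-max scale to [0, C-1] and round.
import numpy as np

def to_integer_labels(pc1, C=10):
    scaled = (pc1 - pc1.min()) / (pc1.max() - pc1.min())   # into [0, 1]
    return np.round(scaled * (C - 1)).astype(np.int64)     # integers 0..C-1
```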
according to some embodiments of the present application, before training the deep lab network, image data of size 610 × 340 × 3 is sliced into data blocks of size 45 × 45 × 3, image tags of size 610 × 340 are sliced into data blocks of size 45 × 45, and the step size of the slicing is 5 (pel). And forming a training set by the cut training image data block and the cut image label data block as input of the deep Lab network.
After the DeepLab network is trained, the image data (of size 610 × 340 × 3, the first three principal components) is input into the DeepLab network to extract the spatial features. Assuming the pixel values of the first principal component of the dimension-reduced image are converted into integers taking C distinct values, the image finally output by the DeepLab network has size X × Y × C. Thus, when the image data of size 610 × 340 × 3 in the present application is input, the DeepLab network outputs an image of size 610 × 340 × C. This output can be used as the spatial features of the hyperspectral image.
According to the processing method of the present application, the first three principal components of the dimension-reduced original hyperspectral image are used as data and the first principal component as the label, which are input into the DeepLab network structure to extract spatial features, thereby effectively avoiding the use of a large number of labeled samples.
In step S130, the raw hyperspectral image is processed by a stacked self-encoder to extract spectral features.
The stacked self-encoder is a deep learning model commonly used in the field of deep learning and is formed by stacking a plurality of self-encoders in series. The purpose of stacking the multilayer self-encoder is to extract high-order features of input data layer by layer, in the process, the dimensionality of the input data is reduced layer by layer, complex input data is converted into a series of simple high-order features, and then the high-order features are input into a classifier or a clustering device to be classified or clustered. Stacked auto-encoders are also an unsupervised feature extraction method.
Fig. 3 shows a schematic diagram of a stacked self-encoder according to an example embodiment of the present application. The stacked self-encoder as used herein includes two self-encoders, namely one input layer, two hidden layers and one output layer. In the exemplary embodiment of the present application, the feature extraction is performed using a stacked self-encoder, and the feature extracted from the hidden layer of the second self-encoder is directly used as the spectral feature without using a decoder. Accordingly, as shown in fig. 3, the stacked self-encoder in the present exemplary embodiment includes only an input layer and an implicit layer.
In the spectral dimension, each pixel of a hyperspectral image can be viewed as a column vector. In an embodiment of the present application, a hyperspectral image of size 610 × 340 × 103 can be reconstructed into a 207400 × 103 data block, where each pixel can be regarded as a 1 × 103 vector and the whole data block contains 207400 such vectors. These vectors are taken as the input of the stacked self-encoder, which performs feature extraction. The output is 207400 vectors of size 1 × H, where H is the extracted spectral feature dimension, a hyperparameter. Finally, all outputs are assembled into a 207400 × H data block following the reconstruction order of the original hyperspectral image, and this data block is taken as the spectral features of the original hyperspectral image.
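A hedged PyTorch sketch of the encoder side of such a stacked self-encoder is shown below: two encoding layers map each 1 × 103 spectral vector to an H-dimensional feature. The layer widths (60 and H = 30), the sigmoid activations, and the omission of the layer-wise pretraining loop are illustrative assumptions.

```python
# Hedged sketch of the stacked self-encoder's encoder path; the decoders
# used during pretraining are discarded at feature-extraction time.
import torch
import torch.nn as nn

class SpectralEncoder(nn.Module):
    def __init__(self, in_dim=103, mid_dim=60, out_dim=30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, mid_dim), nn.Sigmoid(),   # hidden layer 1
            nn.Linear(mid_dim, out_dim), nn.Sigmoid(),  # hidden layer 2
        )

    def forward(self, x):
        return self.encoder(x)          # (n, H) spectral features

pixels = torch.randn(207400, 103)       # each pixel as a 1 x 103 vector
features = SpectralEncoder()(pixels)
print(features.shape)                   # torch.Size([207400, 30]), H = 30
```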
In step S140, the spatial features and the spectral features are weighted and fused to obtain spatial-spectral fusion features.
According to an example embodiment of the application, the spectral feature extracted by the stacked self-encoder has size 207400 × H and is denoted S1.
the spatial feature size of the hyperspectral image obtained by the deep lab network is 610 × 340 × C. Firstly, one-dimensional reconstruction is required to be carried out on the spatial features, and the spatial features are spread into a one-dimensional form, namely, for the spatial features of three-dimensional data, the first two dimensions are spread into a one-dimensional form. For example, for a spatial feature with a size of X Y Z, the data size obtained by one-dimensional reconstruction of the spatial size is n X Z, where n X Y. For the exemplary embodiment of the present application, the process of reconstructing the spatial feature in one dimension is to reconstruct the spatial feature with the size of 610 × 340 × C as 207400 × C, and note that the spatial feature is
Figure BDA0002699995190000091
S1 and S2 can then be fused.
Before feature fusion, S1 and S2 need to be normalized. According to some embodiments of the present application, the z-score may be used to normalize S1 and S2. The formula is as follows:

f(x) = (x − μ) / σ

where μ denotes the overall mean, x − μ the mean deviation, and σ the overall standard deviation.
Using the z-score, S1 and S2 are standardized column by column, with the specific formula:

f(x_ij) = (x_ij − μ_j) / σ_j

For either S1 or S2, each column contains the same number of elements, n (in the example embodiment of the present application, n = 207400). x_ij denotes any element of the j-th column, and μ_j and σ_j are the mean and standard deviation of the j-th column, respectively; f(·) denotes the z-score. Using z-scores, the spectral and spatial features are normalized to the same scale, facilitating the subsequent classification.
Let the normalized spectral feature be f(S1) and the normalized spatial feature be f(S2). The two are weighted and fused using the following formula:

S = [(1 − λ)f(S1), λf(S2)]

where λ is a weight coefficient and a hyperparameter, and S is the spatial-spectral fusion feature. The weight coefficient λ may be determined experimentally starting from a chosen initial value. In an exemplary embodiment of the present application, λ may be chosen as 0.6.
FIG. 4 shows a flow chart of a method for processing a hyperspectral image according to another example embodiment of the application.
According to another example embodiment of the present application, the method for processing the hyperspectral image may further include:
in step S150, the spatial spectrum fusion features are classified. According to some embodiments of the present application, the obtained null-spectrum fusion features are subjected to dimensionality reduction again before the classification process, and 90% of the information is retained. And (5) taking the empty spectrum fusion characteristics after dimensionality reduction as the input of a clustering algorithm to obtain the classification result of the hyperspectral image. In an exemplary embodiment of the present application, the clustering algorithm employed is a k-means clustering algorithm.
The k-means clustering algorithm is itself unsupervised. The whole hyperspectral image processing flow is therefore unsupervised and requires no labeled samples.
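The final classification stage can be sketched with scikit-learn: PCA with n_components=0.9 keeps 90% of the variance, and k-means assigns each pixel to a cluster. The number of clusters k = 9 is a placeholder hyperparameter, not a value from the patent.

```python
# Minimal sketch of the final stage: PCA keeping 90% of the variance,
# followed by k-means clustering of the fused features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify(S, k=9):
    reduced = PCA(n_components=0.9).fit_transform(S)   # retain 90% of variance
    return KMeans(n_clusters=k, n_init=10).fit_predict(reduced)

labels = classify(np.random.rand(2074, 39))   # small stand-in for speed
print(labels.shape)                           # one cluster label per pixel
```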
FIG. 5 is a data processing process diagram of a hyperspectral image processing method according to an exemplary embodiment of the application.
In the hyperspectral image processing method provided by the application, the data processing process is as follows:
on one hand, the original hyperspectral image is taken as input data, and data preprocessing is firstly carried out: and (3) performing dimensionality reduction on the original hyperspectral image by using PCA, obtaining the first three principal components as image data, and obtaining the first principal component as an image label.
Next, the image data and the image label are used as input, and spatial feature extraction is performed by using a deep lab network to obtain spatial features.
On the other hand, the original hyperspectral image is used as input data, and spectral features are extracted by using a stacked self-encoder to obtain the spectral features.
And (4) taking the obtained spatial features and spectral features as input data, standardizing by using Z-fraction, and then performing weighted fusion to obtain spatial-spectral fusion features.
And (4) taking the space spectrum fusion characteristic as input data, and further performing dimensionality reduction by using PCA.
And finally, classifying the space spectrum fusion characteristics after dimensionality reduction by using a K-means clustering algorithm to obtain a classification result of the hyperspectral image.
FIG. 6 shows a block diagram of a hyperspectral image processing apparatus according to an example embodiment of the application.
The present application further provides a hyperspectral image processing apparatus 100, as shown in fig. 6, including a preprocessing module 110, a first feature extraction module 120, a second feature extraction module 130, and a feature fusion module 140.
The preprocessing module 110 is configured to perform dimensionality reduction on the original hyperspectral image to obtain image data and an image label. According to some embodiments of the present application, this includes: performing dimensionality reduction on the original hyperspectral image and retaining the first three principal components as the image data; and performing dimensionality reduction on the original hyperspectral image again, retaining the first principal component as the image label. According to some embodiments of the present application, the dimensionality reduction may employ Principal Component Analysis (PCA).
The first feature extraction module 120 is configured to extract spatial features from the image data and the image label using a DeepLab network. According to some embodiments of the present application, the first feature extraction module 120 includes a training module and an extraction module. The training module is used for cutting the image data and the image label into small blocks and inputting them into a DeepLab network for training. The extraction module is used for inputting the image data into the trained DeepLab network and extracting the spatial features.
The second feature extraction module 130 is configured to process the original hyperspectral image with a stacked self-encoder to extract spectral features. In the spectral dimension, each pixel of the hyperspectral image can be viewed as a column vector; these vectors are used as the input of the stacked self-encoder, which performs the feature extraction.
And the feature fusion module 140 is configured to perform weighted fusion on the spatial features and the spectral features to obtain spatial-spectral fusion features. According to some embodiments of the present application, the feature fusion module 140 includes a normalization module and a fusion module. A normalization module for normalizing the spatial features and the spectral features. According to some embodiments of the present application, the spatial and spectral features may be normalized using z-scores. And the fusion module is used for performing weighted fusion on the normalized spatial features and the normalized spectral features.
FIG. 7 shows a block diagram of a hyperspectral image processing apparatus according to another example embodiment of the application.
According to another embodiment of the present application, there is also provided a hyperspectral image processing apparatus 200, which further includes a classification processing module 150 in addition to the modules in fig. 6.
The classification processing module 150 is configured to classify the spatial-spectral fusion features. Before classification, the obtained spatial-spectral fusion features are reduced in dimensionality again, retaining 90% of the information. The dimension-reduced spatial-spectral fusion features are taken as the input of a k-means clustering algorithm to obtain the classification result of the hyperspectral image.
FIG. 8 shows a block diagram of hyperspectral image processing electronics according to an example embodiment of the application.
The present application further provides a hyperspectral image processing electronic device 700. The electronic device 700 shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 8, electronic device 700 is embodied in the form of a general purpose computing device. The components of the electronic device 700 may include, but are not limited to: at least one processing unit 710, at least one memory unit 720, a bus 730 that couples various system components including the memory unit 720 and the processing unit 710, and the like.
The storage unit 720 stores program codes, which can be executed by the processing unit 710, so that the processing unit 710 executes the processing methods according to the above-mentioned embodiments of the present application described in the present specification.
The storage unit 720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)7201 and/or a cache memory unit 7202, and may further include a read only memory unit (ROM) 7203.
The storage unit 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 730 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 7001 (e.g., touch screen, keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 700 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 750. Also, the electronic device 700 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 760. The network adapter 760 may communicate with other modules of the electronic device 700 via the bus 730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The present application also provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the above-described processing method.
According to the hyperspectral image processing method of the present application, extracting the spatial features of the hyperspectral image with the DeepLab network structure avoids the use of a large number of labeled samples, avoids loss of spatial resolution, and extracts image features at multiple scales. In addition, extracting the spectral features of the hyperspectral image with the stacked self-encoder and finally classifying the spatial-spectral fusion features with k-means clustering require no labeled samples of the hyperspectral image; the method is therefore a completely unsupervised classification processing method and effectively overcomes the shortage of labeled samples for hyperspectral images.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the description of the embodiments is only intended to facilitate the understanding of the methods and their core concepts of the present application. Meanwhile, a person skilled in the art should, according to the idea of the present application, change or modify the embodiments and applications of the present application based on the scope of the present application. In view of the above, the description should not be taken as limiting the application.

Claims (10)

1. A method for processing a hyperspectral image, comprising:
performing dimensionality reduction on an original hyperspectral image to obtain image data and an image label;
extracting spatial features from the image data and the image label using a DeepLab network;
processing the original hyperspectral image by adopting a stacked self-encoder to extract spectral features;
and performing weighted fusion on the spatial features and the spectral features to obtain spatial-spectral fusion features.
2. The processing method according to claim 1, wherein performing dimensionality reduction on the original hyperspectral image to obtain the image data and the image label comprises:
performing dimensionality reduction on the original hyperspectral image and retaining the first three principal components as the image data;
and performing dimensionality reduction on the original hyperspectral image again, retaining the first principal component as the image label.
3. The processing method according to claim 1, wherein extracting spatial features from the image data and the image label using a DeepLab network comprises:
cutting the image data and the image label into small blocks and inputting the blocks into a DeepLab network for training;
and inputting the image data into a trained DeepLab network, and extracting the spatial features.
4. The processing method according to claim 3, wherein extracting spatial features from the image data and the image label using a DeepLab network further comprises:
before training the DeepLab network, converting each pixel value in the image label from a floating-point number to an integer using the following formula:
(Formula shown as image FDA0002699995180000011 in the original filing.)
5. the processing method according to claim 1, wherein the weighted fusion of the spatial features and the spectral features to obtain spatial-spectral fusion features comprises:
performing one-dimensional reconstruction on the spatial features;
normalizing the spectral features and the reconstructed spatial features;
and performing weighted fusion on the normalized spatial features and spectral features.
6. The processing method according to claim 5, characterized in that said normalization comprises:
the z-score is used for normalization.
7. The processing method according to claim 5, wherein the weighted fusion of the normalized spatial and spectral features comprises:
the weighted fusion is performed according to the following formula:

S = [(1 − λ)f(S1), λf(S2)]

where f(S1) is the normalized spectral feature, f(S2) is the normalized spatial feature, λ is a weight coefficient, and S is the spatial-spectral fusion feature.
8. The processing method of claim 1, further comprising:
and classifying the spatial spectrum fusion characteristics.
9. The processing method according to claim 8, characterized in that said classification process comprises:
and performing classification processing by using a k-means clustering algorithm.
10. The process of claim 1, wherein the dimension reduction process comprises:
and (5) PCA dimension reduction processing.
CN202011018797.XA 2020-09-24 2020-09-24 Hyperspectral image processing method and device Pending CN112232137A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011018797.XA CN112232137A (en) 2020-09-24 2020-09-24 Hyperspectral image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011018797.XA CN112232137A (en) 2020-09-24 2020-09-24 Hyperspectral image processing method and device

Publications (1)

Publication Number Publication Date
CN112232137A (en) 2021-01-15

Family

ID=74108126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011018797.XA Pending CN112232137A (en) 2020-09-24 2020-09-24 Hyperspectral image processing method and device

Country Status (1)

Country Link
CN (1) CN112232137A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822209A (en) * 2021-09-27 2021-12-21 海南长光卫星信息技术有限公司 Hyperspectral image recognition method and device, electronic equipment and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654117A (en) * 2015-12-25 2016-06-08 西北工业大学 Hyperspectral image spectral-spatial cooperative classification method based on SAE depth network
CN109598306A (en) * 2018-12-06 2019-04-09 西安电子科技大学 Hyperspectral image classification method based on SRCM and convolutional neural networks
CN110298414A (en) * 2019-07-09 2019-10-01 西安电子科技大学 Hyperspectral image classification method based on denoising combination dimensionality reduction and guiding filtering
CN111695467A (en) * 2020-06-01 2020-09-22 西安电子科技大学 Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654117A (en) * 2015-12-25 2016-06-08 西北工业大学 Hyperspectral image spectral-spatial cooperative classification method based on SAE depth network
CN109598306A (en) * 2018-12-06 2019-04-09 西安电子科技大学 Hyperspectral image classification method based on SRCM and convolutional neural networks
CN110298414A (en) * 2019-07-09 2019-10-01 西安电子科技大学 Hyperspectral image classification method based on denoising combination dimensionality reduction and guiding filtering
CN111695467A (en) * 2020-06-01 2020-09-22 西安电子科技大学 Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUSHI CHEN et al.: "Deep Learning-Based Classification of Hyperspectral Data", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing *
ZIJIA NIU et al.: "DeepLab-Based Spatial Feature Extraction for Hyperspectral Image Classification", IEEE Geoscience and Remote Sensing Letters *
ZHANG Guodong et al.: "Research on Hyperspectral Remote Sensing Image Classification Based on Stacked Autoencoder Neural Network", Infrared Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822209A (en) * 2021-09-27 2021-12-21 海南长光卫星信息技术有限公司 Hyperspectral image recognition method and device, electronic equipment and readable storage medium
CN113822209B (en) * 2021-09-27 2023-11-14 海南长光卫星信息技术有限公司 Hyperspectral image recognition method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
Lu et al. Remote sensing scene classification by unsupervised representation learning
Qu et al. uDAS: An untied denoising autoencoder with sparsity for spectral unmixing
Wu et al. A scene change detection framework for multi-temporal very high resolution remote sensing images
Negrel et al. Evaluation of second-order visual features for land-use classification
Kekre et al. Improved texture feature based image retrieval using Kekre’s fast codebook generation algorithm
Qayyum et al. Scene classification for aerial images based on CNN using sparse coding technique
CN104317902B (en) Image search method based on local holding iterative quantization Hash
CN109766858A (en) Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering
CN113657425B (en) Multi-label image classification method based on multi-scale and cross-modal attention mechanism
Fan et al. Superpixel guided deep-sparse-representation learning for hyperspectral image classification
CN107145836B (en) Hyperspectral image classification method based on stacked boundary identification self-encoder
CN102750385B (en) Correlation-quality sequencing image retrieval method based on tag retrieval
Chen et al. A survey of deep nonnegative matrix factorization
CN108460400B (en) Hyperspectral image classification method combining various characteristic information
Gao et al. Small sample classification of hyperspectral image using model-agnostic meta-learning algorithm and convolutional neural network
Champ et al. A comparative study of fine-grained classification methods in the context of the LifeCLEF plant identification challenge 2015
CN111160273A (en) Hyperspectral image space spectrum combined classification method and device
Duarte-Carvajalino et al. Multiscale representation and segmentation of hyperspectral imagery using geometric partial differential equations and algebraic multigrid methods
Abe et al. Experimental comparison of support vector machines with random forests for hyperspectral image land cover classification
CN112163114B (en) Image retrieval method based on feature fusion
Naveena et al. Image retrieval using combination of color, texture and shape descriptor
Byeon et al. Scene analysis by mid-level attribute learning using 2D LSTM networks and an application to web-image tagging
Alshehri A content-based image retrieval method using neural network-based prediction technique
CN111680579A (en) Remote sensing image classification method for adaptive weight multi-view metric learning
CN107273919A (en) A kind of EO-1 hyperion unsupervised segmentation method that generic dictionary is constructed based on confidence level

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210115