CN113239938B - Hyperspectral classification method and hyperspectral classification system based on graph structure

Hyperspectral classification method and hyperspectral classification system based on graph structure

Info

Publication number
CN113239938B
Authority
CN
China
Prior art keywords
hyperspectral
super
image
graph
pixel
Prior art date
Legal status
Active
Application number
CN202110510184.6A
Other languages
Chinese (zh)
Other versions
CN113239938A (en)
Inventor
赵晓枫
丁遥
牛家辉
张志利
蔡伟
仲启媛
Current Assignee
Rocket Force University of Engineering of PLA
Original Assignee
Rocket Force University of Engineering of PLA
Priority date
Filing date
Publication date
Application filed by Rocket Force University of Engineering of PLA
Priority to CN202110510184.6A
Publication of CN113239938A
Application granted
Publication of CN113239938B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 18/2135: Feature extraction by transforming the feature space, based on approximation criteria, e.g. principal component analysis
    • G06F 18/23: Clustering techniques
    • G06F 18/24: Classification techniques
    • G06V 10/40: Extraction of image or video features
    • G06V 10/58: Extraction of image or video features relating to hyperspectral data
    • Y02A 40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a hyperspectral classification method and a hyperspectral classification system based on a graph structure, wherein the method comprises the following steps: dividing the hyperspectral image into N super pixels; each super pixel comprises a plurality of pixels; constructing an adjacency matrix of the graph according to the N super pixels; each element in the adjacency matrix represents a relationship between features of each of the superpixels; according to the adjacency matrix, carrying out feature extraction on the hyperspectral image by using a double-layer graph convolution algorithm to obtain first features of each super pixel; learning the first characteristic of each super pixel by using a self-attention mechanism to obtain the second characteristic of each super pixel; and classifying each super pixel in the hyperspectral image according to each second characteristic. The method improves the accuracy of hyperspectral classification.

Description

Hyperspectral classification method and hyperspectral classification system based on graph structure
Technical Field
The invention relates to the field of spectrum classification, in particular to a hyperspectral classification method and a hyperspectral classification system based on a graph structure.
Background
The hyperspectral image has a large number of spectral bands and contains abundant spatial and spectral information, so objects made of different materials can be identified accurately. Hyperspectral image analysis can identify object features more effectively than multispectral or RGB (red, green, and blue) analysis. Accordingly, hyperspectral image classification, which assigns a specific label to each image pixel, is of great interest in many areas, such as agricultural monitoring, military reconnaissance, and disaster prevention and control. However, the large number of bands in hyperspectral images, the spatial variability of spectral features, and the difficulty of acquiring labels bring great difficulty to hyperspectral classification.
Over the past few decades, traditional machine learning algorithms such as support vector machines, random forests, and K-nearest neighbors have enjoyed considerable success in hyperspectral classification. However, traditional machine learning algorithms depend heavily on expert knowledge and suffer from insufficient feature extraction and poor classification performance.
Inspired by the successful application of deep convolutional neural networks in image processing, convolutional neural networks have also been applied to hyperspectral classification. Their main advantage is the ability to automatically learn effective feature representations of the problem domain, avoiding complex manual feature engineering. However, convolutional neural networks require a large number of training labels, while labeled hyperspectral data are scarce, so it is difficult to provide enough training samples. Furthermore, convolution kernels are designed primarily for regular pattern recognition, so they cannot adaptively capture the irregular geometric variations of different object regions in hyperspectral images. Finally, the weights of a convolution kernel are fixed, which causes edge loss during feature extraction and may lead to classification errors.
Therefore, both traditional machine learning methods and deep-learning convolutional neural networks face certain limitations in hyperspectral classification.
Disclosure of Invention
The invention aims to provide a hyperspectral classification method and a hyperspectral classification system based on a graph structure, which improve classification accuracy.
In order to achieve the above object, the present invention provides the following solutions:
a hyperspectral classification method based on graph structure, comprising:
dividing the hyperspectral image into N super pixels; each super pixel comprises a plurality of pixels;
constructing an adjacency matrix of the graph according to the N super pixels; each element in the adjacency matrix represents a relationship between features of each of the superpixels;
according to the adjacency matrix, carrying out feature extraction on the hyperspectral image by using a double-layer graph convolution algorithm to obtain first features of each super pixel;
learning the first characteristic of each super pixel by using a self-attention mechanism to obtain the second characteristic of each super pixel;
and classifying each super pixel in the hyperspectral image according to each second characteristic.
Optionally, the splitting the hyperspectral image into N superpixels specifically includes:
performing dimension reduction on the hyperspectral image by adopting an unsupervised principal component analysis algorithm;
generating a basic image by using the first principal component obtained by dimension reduction;
and dividing the basic image by adopting a linear iterative clustering algorithm to obtain N super pixels.
Optionally, the relationship between the superpixel i and the superpixel j in the adjacency matrix is expressed as:
where x_i denotes the feature of superpixel i, x_j the feature of superpixel j, N(x_j) the set of neighbor nodes of superpixel j, and γ an empirical coefficient.
Optionally, the classifying each superpixel in the hyperspectral image according to each second feature specifically includes:
and classifying each super pixel in the hyperspectral image by adopting a cross entropy function according to each second characteristic.
The invention also discloses a hyperspectral classification system based on the graph structure, which comprises:
the hyperspectral image segmentation module is used for segmenting the hyperspectral image into N super pixels; each super pixel comprises a plurality of pixels;
the adjacent matrix construction module of the graph is used for constructing an adjacent matrix of the graph according to the N super pixels; each element in the adjacency matrix represents a relationship between features of each of the superpixels;
the first feature extraction module is used for carrying out feature extraction on the hyperspectral image by utilizing a double-layer graph convolution algorithm according to the adjacency matrix to obtain first features of each super pixel;
the second feature extraction module is used for learning the first features of the super pixels by using a self-attention mechanism and obtaining the second features of the super pixels;
and the super-pixel classification module is used for classifying each super-pixel in the hyperspectral image according to each second characteristic.
Optionally, the hyperspectral image segmentation module specifically includes:
the image dimension reduction unit is used for reducing dimension of the hyperspectral image by adopting an unsupervised principal component analysis algorithm;
a basic image obtaining unit for generating a basic image from the first principal component obtained by the dimension reduction;
and the image segmentation unit is used for segmenting the basic image by adopting a linear iterative clustering algorithm to obtain N super pixels.
Optionally, the relationship between the superpixel i and the superpixel j in the adjacency matrix is expressed as:
where x_i denotes the feature of superpixel i, x_j the feature of superpixel j, N(x_j) the set of neighbor nodes of superpixel j, and γ an empirical coefficient.
Optionally, the super-pixel classification module specifically includes:
and the super-pixel classification unit is used for classifying each super-pixel in the hyperspectral image by adopting a cross entropy function according to each second characteristic.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention uses the adjacency matrix of the graph to represent the relation between the characteristics of each super pixel in the hyperspectral image, uses the interrelation between the super pixels, adopts the self-attention mechanism to learn and extract the effective characteristics, simplifies the calculated amount and improves the classification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a hyperspectral classification method based on a graph structure;
FIG. 2 is a schematic diagram of the hyperspectral image preprocessing flow of the present invention;
FIG. 3 is a schematic diagram of a hyperspectral classification method based on a graph structure according to the present invention;
fig. 4 is a schematic structural diagram of a hyperspectral classification system based on a graph structure.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a hyperspectral classification method and a hyperspectral classification system based on a graph structure, which improve classification accuracy.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Fig. 1 is a schematic flow chart of the hyperspectral classification method based on a graph structure according to the present invention. As shown in Fig. 1, the hyperspectral classification method based on a graph structure comprises:
step 101: dividing the hyperspectral image into N super pixels; each of the super pixels includes a plurality of pixels.
The step 101 specifically includes:
performing dimension reduction on the hyperspectral image by adopting an unsupervised principal component analysis algorithm;
generating a basic image by using the first principal component obtained by dimension reduction;
and dividing the basic image by adopting a linear iterative clustering algorithm to obtain N super pixels.
Step 102: constructing an adjacency matrix of the graph according to the N super pixels; each element in the adjacency matrix represents a relationship between features of each of the superpixels.
The relationship between superpixel i and superpixel j in the adjacency matrix is expressed as:
where x_i denotes the feature of superpixel i, x_j the feature of superpixel j, N(x_j) the set of neighbor nodes of superpixel j, and γ an empirical coefficient.
Step 103: and according to the adjacency matrix, carrying out feature extraction on the hyperspectral image by using a double-layer graph convolution algorithm to obtain first features of each super pixel.
Step 104: and learning the first characteristic of each super pixel by using a self-attention mechanism, and obtaining the second characteristic of each super pixel.
During self-attention learning, the feature of the i-th node (superpixel) output by the l-th network layer is expressed as follows:
x_i^(l+1) = σ( Σ_{j∈N_i} a_ij · W^T x_j^(l) ),
where σ represents the activation function, l represents the network layer, a_ij represents the weight of the edge between superpixel i and superpixel j, N_i represents the set of neighbor nodes of superpixel i, W^T represents the transpose of W, and x_i^(l) represents the feature of the i-th node at layer l.
Step 105: and classifying each super pixel in the hyperspectral image according to each second characteristic.
And classifying each super pixel in the hyperspectral image by adopting a cross entropy function according to each second characteristic.
The cross entropy function calculates the probability that each super pixel belongs to each category, and the category with the highest probability is taken as the category of the super pixel.
The following describes a hyperspectral classification method based on a graph structure according to a specific embodiment, which specifically comprises the following steps:
step1: hyperspectral image segmentation. The hyperspectral image is divided into a small number of super pixels by adopting a division algorithm, and the pixels in the super pixels have strong spectrum-space similarity.
Hyperspectral images contain a large number of pixels in the spatial dimension. If each pixel is used as a graph node for the subsequent convolution and classification, the computational cost becomes very large, which poses a great challenge to the practicality of the algorithm and makes dimension reduction of the spatial nodes necessary. In practice, adjacent pixels are more likely to belong to the same ground-object type. Thus, assume a hyperspectral image dataset containing B bands and m pixels; the superpixel segmentation is represented as X = S_1 ∪ S_2 ∪ … ∪ S_N with S_i ∩ S_j = ∅ for i ≠ j,
where N represents the number of superpixels contained in the hyperspectral image, S_i represents superpixel i, S_j represents superpixel j, superpixel i contains n_i pixels, and R^B denotes the B-band spectral space of the hyperspectral image.
Since classical segmentation methods were originally designed for RGB images, it is not straightforward to segment hyperspectral images into superpixels with them directly. In order to divide a hyperspectral image into superpixels, its spectral dimension must be reduced in advance. In this embodiment, an unsupervised principal component analysis (PCA) algorithm is used to reduce the dimension of the hyperspectral image, and the first principal component is used to generate a base image that retains rich information of the original hyperspectral image. The base image is then segmented using the simple linear iterative clustering algorithm (SLIC). Finally, the average spectral value of the pixels contained in each superpixel is used as the feature vector of the corresponding graph node and fed into the subsequent network, which suppresses noise while preserving the spectral characteristics of the superpixel.
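As an illustration of this preprocessing step, the following Python sketch performs the PCA reduction, SLIC segmentation and mean-spectrum feature extraction described above. It assumes the hyperspectral cube is a NumPy array of shape (H, W, B) and relies on scikit-learn and scikit-image; the function name segment_superpixels and the parameter values (n_superpixels, compactness) are illustrative choices, not values given in the patent.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.segmentation import slic

def segment_superpixels(hsi, n_superpixels=200, compactness=0.1):
    """Split a hyperspectral cube (H, W, B) into superpixels and return
    the superpixel label map plus per-superpixel mean spectra (node features)."""
    H, W, B = hsi.shape
    flat = hsi.reshape(-1, B).astype(np.float64)

    # Unsupervised PCA; the first principal component forms the base image.
    pc1 = PCA(n_components=1).fit_transform(flat).reshape(H, W)
    pc1 = (pc1 - pc1.min()) / (pc1.max() - pc1.min() + 1e-12)

    # Simple linear iterative clustering (SLIC) on the single-band base image.
    # channel_axis=None marks the image as grayscale (scikit-image >= 0.19).
    labels = slic(pc1, n_segments=n_superpixels, compactness=compactness,
                  start_label=0, channel_axis=None)

    # Node feature = average spectrum of the pixels inside each superpixel.
    n_nodes = int(labels.max()) + 1
    feats = np.zeros((n_nodes, B))
    flat_labels = labels.reshape(-1)
    for k in range(n_nodes):
        feats[k] = flat[flat_labels == k].mean(axis=0)
    return labels, feats
```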
Step2: the construction of the graph is essentially to construct the adjacency matrix A of the graph, the relationship A between vertex i (superpixel i) and vertex j (superpixel j) ij Expressed as:
in which x is i And x j Features representing two graph nodes i, j, N (x j ) Is the set of neighbor nodes for j, and γ is an empirical coefficient, set to 0.2 in the algorithm.
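The exact expression for A_ij appears only as an image in the source text, so the sketch below should be read as an assumption rather than the patent's own formula: it uses a Gaussian kernel exp(-γ·||x_i - x_j||^2) between superpixels whose regions touch (4-connectivity), with γ = 0.2 as stated above; build_adjacency and the neighbourhood rule are illustrative.

```python
import numpy as np

def build_adjacency(labels, feats, gamma=0.2):
    """Adjacency matrix over superpixels.
    Assumed form: A_ij = exp(-gamma * ||x_i - x_j||^2) when superpixels i and j
    are spatially adjacent, 0 otherwise (the patent shows the formula only as
    an image, so this kernel is an illustrative choice)."""
    n = feats.shape[0]
    A = np.zeros((n, n))

    # Superpixels are neighbours if their regions touch (4-connectivity).
    right = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    down = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    pairs = np.concatenate([right, down], axis=0)
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]

    for i, j in np.unique(pairs, axis=0):
        w = np.exp(-gamma * np.sum((feats[i] - feats[j]) ** 2))
        A[i, j] = A[j, i] = w
    return A
```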
Step3: and extracting features of the established hyperspectral image by using a double-layer image convolution algorithm, wherein the relationship between the classified points (super pixel points) and surrounding nodes is obtained by using the double-layer image convolution algorithm, and the relationship between the classified points and the nodes of the more distant images can be obtained by using a second-layer classification image convolution algorithm. The graph volume integration algorithm is as follows:
assume a graphRepresenting the set of vertices +.>Epsilon represents the edge set. />Is the adjacency matrix of the graph, if there is an edge between vertex i and vertex j, then use a ij Representing the weight of the edge between vertex i and vertex j. Given A, a graph Laplace matrix L corresponding to A is created as follows:
L=D-A, (1)
wherein: d is the degree matrix of the graph. Symmetrical normalized Laplace matrix L corresponding to (1) sym The following are listed below
Wherein: i N Is an identity matrix.
Given two functions f and g, their convolution can be expressed as:
(f * g)(t) = ∫ f(τ) g(t - τ) dτ, (3)
where τ is the shift and * denotes the convolution operation.
Formula (3) can be converted into:
f * g = F^(-1)( F(f) ⊙ F(g) ), (4)
where F denotes the Fourier transform and ⊙ the element-wise product. Convolving on the graph can therefore be converted into a Fourier transform, i.e. into finding a set of basis functions.
L_sym can be eigendecomposed as:
L_sym = U Λ U^T, (5)
where U is the eigenvector matrix of L_sym, i.e. U U^T = E, which serves as the basis of the graph Fourier transform; λ_n represents an eigenvalue, u_n represents the corresponding eigenvector, and E represents the identity matrix.
According to equation (5), the graph Fourier transform of a function f can be expressed as U^T f, and the inverse transform as U(U^T f). Then, following equation (4), the graph convolution of the functions f and g can be expressed as
f * g = U( (U^T g) ⊙ (U^T f) ). (6)
Letting U^T g = g_θ, formula (6) can be converted into
f * g = U g_θ(Λ) U^T f, (7)
where g_θ(Λ) is a function of the eigenvalues Λ of L_sym.
However, (7) is computationally expensive. To simplify the calculation, the convolution kernel is approximated, namely:
g_θ(Λ) ≈ Σ_{k=0}^{K} θ_k T_k(Λ'), (8)
where T_k denotes the Chebyshev polynomials, Λ' = 2Λ/λ_max - I_N is the rescaled eigenvalue matrix, and K = 1 gives a first-order Chebyshev approximation. Formula (8) expresses g_θ(Λ) as a polynomial in Λ.
Using equation (8), (7) can be converted into:
f * g ≈ Σ_{k=0}^{K} θ_k T_k(L') f, (9)
where L' = 2 L_sym/λ_max - I_N. When K = 1, with λ_max ≈ 2 and a single parameter θ = θ_0 = -θ_1, the convolution layer can be reduced to:
f * g ≈ θ ( I_N + D^(-1/2) A D^(-1/2) ) f. (10)
with the addition of an activation layer, the graph roll transfer function can be expressed as follows:
wherein H is (l+1) 、H (l) The values of the l+1 and l layers are respectively represented, and W is a weight matrix.
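A NumPy sketch of the double-layer graph convolution of formula (11) follows. It uses the common renormalisation trick (adding self-loops before symmetric normalisation) and a ReLU between the two layers; these choices, the function names and the weight shapes are implementation assumptions, and W1, W2 would be learned during training.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalised adjacency with self-loops:
    A_hat = D~^(-1/2) (A + I) D~^(-1/2)."""
    A_loop = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_loop.sum(axis=1))
    return A_loop * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def relu(x):
    return np.maximum(x, 0.0)

def two_layer_gcn(A, X, W1, W2):
    """First features of each superpixel (Step3): the first layer aggregates
    immediate neighbours, the second layer reaches two-hop (more distant) nodes."""
    A_norm = normalize_adjacency(A)
    H1 = relu(A_norm @ X @ W1)   # layer 1: H(1) = sigma(A_hat X W1)
    H2 = A_norm @ H1 @ W2        # layer 2: H(2) = A_hat H(1) W2
    return H2
```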
Step4: the graph is self-attention mechanism. To obtain global context features in the graph, a graph attention mechanism is added to the network to extract different degrees of association between different nodes (superpixels), the relationship between any two nodes in the graph being calculated by the graph attention mechanism. To obtain the corresponding transformation between input and output, a weight matrix is trained for all nodes: w epsilon R F′×F The relationship between the input feature F and the output feature F' is represented. Node-to-node correlation may be learned through the network layer.
e ij =(LeakyReLU(a T [Wx i ||Wx j ]))
In e ij Node x is shown j To node x i Importance of a), a T ∈R 2F Is a parameter vector of the network, |represents a concatenation operation, and LeakyReLU (·) is a nonlinear layer.
The e_ij are then normalized by a softmax function and converted into probability outputs a_ij:
a_ij = softmax_j(e_ij) = exp(e_ij) / Σ_{k∈N_i} exp(e_ik).
The graph convolution output of each node can then be expressed as follows:
x_i^(l+1) = σ( Σ_{j∈N_i} a_ij · W^T x_j^(l) ),
where σ is the activation function, l denotes the network layer, a_ij is the learned attention weight (the weight of the edge between superpixel i and superpixel j), N_i is the set of neighbor nodes of superpixel i, and x_i^(l) denotes the feature of the i-th node at layer l.
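The sketch below implements one self-attention head over the superpixel graph, following the e_ij, softmax and aggregation formulas above. The LeakyReLU slope of 0.2, the inclusion of each node in its own neighbourhood, and the ELU output activation are illustrative assumptions taken from common graph-attention practice, not from the patent text.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def graph_attention(A, H, W, a):
    """Single graph self-attention head.
    H: (N, F) node features, W: (F, F') projection, a: (2F',) attention vector."""
    HW = H @ W                                    # projected features W x_i
    n = HW.shape[0]
    e = np.full((n, n), -np.inf)
    for i in range(n):
        nbrs = np.append(np.nonzero(A[i])[0], i)  # N_i plus the node itself
        for j in nbrs:
            # e_ij = LeakyReLU(a^T [W x_i || W x_j])
            e[i, j] = leaky_relu(a @ np.concatenate([HW[i], HW[j]]))
    # a_ij = softmax over each node's neighbourhood (exp(-inf) -> 0 elsewhere)
    e = e - e.max(axis=1, keepdims=True)
    alpha = np.exp(e)
    alpha /= alpha.sum(axis=1, keepdims=True)
    out = alpha @ HW                              # sum_j a_ij * W x_j
    return np.where(out > 0.0, out, np.expm1(np.minimum(out, 0.0)))  # ELU
```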
Step5: and outputting a result. In this embodiment, a cross entropy function is employed to penalize the difference between the network output and the original tag label, i.e
Wherein y is G Is a set of tags; c represents the number of classes, Y zf Is a matrix of training tags that are to be used,representing the characteristics of the final network layer output, L representing the difference. The embodiment can perform end-to-end training and update the network parameters of the embodiment by adopting Adam.
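A sketch of the training loss and the final class assignment is given below, assuming Z is the (N, C) output of the last network layer, Y a one-hot label matrix and labelled_idx the index set y_G of labelled superpixels; the softmax applied before the logarithm and the symbol names are assumptions, and the Adam parameter update itself is omitted.

```python
import numpy as np

def masked_cross_entropy(Z, Y, labelled_idx):
    """L = - sum over z in y_G, f = 1..C of Y_zf * ln(softmax(Z)_zf),
    i.e. the cross entropy evaluated only on the labelled superpixels."""
    Z = Z - Z.max(axis=1, keepdims=True)      # numerically stable softmax
    P = np.exp(Z)
    P /= P.sum(axis=1, keepdims=True)
    logp = np.log(P[labelled_idx] + 1e-12)
    return -np.sum(Y[labelled_idx] * logp)

def predict(Z):
    """Each superpixel is assigned the class with the highest probability."""
    return Z.argmax(axis=1)
```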
According to the invention, the original hyperspectral data are first reduced in dimension by principal component analysis (PCA), the hyperspectral image is then segmented into superpixels with the simple linear iterative clustering algorithm (SLIC), and the spectral feature of each superpixel is extracted as the input feature of the subsequent network; a superpixel graph network is then constructed using graph theory; features are then extracted from the constructed graph network with a graph algorithm; finally, an attention mechanism learns the useful features of each node to complete the classification of every node of the hyperspectral image. The biggest difference from the prior art is that the hyperspectral image is classified with a graph-structure algorithm: by exploiting the interrelationship between nodes, the method reaches the classification accuracy that a conventional deep-learning convolutional neural network only achieves with a large volume of labeled training data. Firstly, high-accuracy semi-supervised classification of hyperspectral images can be realized with extremely small samples; secondly, the amount of computation is reduced; thirdly, hyperspectral image features are extracted adaptively. The invention automatically extracts hyperspectral features and completes classification, with a classification accuracy of more than 90%.
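For orientation only, the illustrative helpers sketched in Step1 through Step5 above can be chained into a single untrained forward pass; the cube size, layer widths, class count and random weights below are arbitrary example values, and the result is meaningful only after training.

```python
import numpy as np

rng = np.random.default_rng(0)
hsi = rng.random((145, 145, 200))                          # stand-in hyperspectral cube

labels, X = segment_superpixels(hsi, n_superpixels=200)    # Step1
A = build_adjacency(labels, X, gamma=0.2)                  # Step2

B, hidden, F_out, C = X.shape[1], 128, 64, 16
W1 = rng.normal(scale=0.1, size=(B, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, F_out))
H_first = two_layer_gcn(A, X, W1, W2)                      # Step3: first features

Wa = rng.normal(scale=0.1, size=(F_out, C))
a = rng.normal(scale=0.1, size=(2 * C,))
H_second = graph_attention(A, H_first, Wa, a)              # Step4: second features

pred = predict(H_second)                                   # Step5: class per superpixel
```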
Fig. 4 is a schematic structural diagram of a hyperspectral classification system based on a graph structure according to the present invention. As shown in Fig. 4, the hyperspectral classification system based on a graph structure includes:
a hyperspectral image segmentation module 201, configured to segment a hyperspectral image into N superpixels; each super pixel comprises a plurality of pixels;
an adjacency matrix construction module 202 of the graph, configured to construct an adjacency matrix of the graph according to the N super pixels; each element in the adjacency matrix represents a relationship between features of each of the superpixels;
the first feature extraction module 203 is configured to perform feature extraction on the hyperspectral image by using a double-layer graph convolution algorithm according to the adjacency matrix, so as to obtain first features of each super pixel;
a second feature extraction module 204, configured to learn the first feature of each superpixel by using a self-attention mechanism, and obtain a second feature of each superpixel;
the super-pixel classification module 205 is configured to classify each super-pixel in the hyperspectral image according to each second feature.
The hyperspectral image segmentation module 201 specifically includes:
the image dimension reduction unit is used for reducing dimension of the hyperspectral image by adopting an unsupervised principal component analysis algorithm;
a basic image obtaining unit for generating a basic image using the first principal component obtained by the dimension reduction;
and the image segmentation unit is used for segmenting the basic image by adopting a linear iterative clustering algorithm to obtain N super pixels.
The relationship between superpixel i and superpixel j in the adjacency matrix is expressed as:
where x_i denotes the feature of superpixel i, x_j the feature of superpixel j, N(x_j) the set of neighbor nodes of superpixel j, and γ an empirical coefficient.
The super-pixel classification module 205 specifically includes: and the super-pixel classification unit is used for classifying each super-pixel in the hyperspectral image by adopting a cross entropy function according to each second characteristic.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments can be referred to one another. Since the system disclosed in the embodiment corresponds to the method disclosed in the embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
The principles and embodiments of the present invention have been described herein with reference to specific examples, which are intended only to help understand the method of the present invention and its core idea; meanwhile, a person of ordinary skill in the art may, in light of the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the invention.

Claims (8)

1. A hyperspectral classification method based on a graph structure, comprising:
dividing the hyperspectral image into N super pixels; each super pixel comprises a plurality of pixels;
constructing an adjacency matrix of the graph according to the N super pixels; each element in the adjacency matrix represents a relationship between features of each of the superpixels;
according to the adjacency matrix, carrying out feature extraction on the hyperspectral image by using a double-layer graph convolution algorithm to obtain first features of each super pixel;
learning the first characteristic of each super pixel by using a self-attention mechanism to obtain the second characteristic of each super pixel;
and classifying each super pixel in the hyperspectral image according to each second characteristic.
2. The hyperspectral classification method based on graph structure as claimed in claim 1, wherein the dividing the hyperspectral image into N superpixels specifically includes:
performing dimension reduction on the hyperspectral image by adopting an unsupervised principal component analysis algorithm;
generating a basic image by using the first principal component obtained by dimension reduction;
and dividing the basic image by adopting a linear iterative clustering algorithm to obtain N super pixels.
3. The hyperspectral classification method based on graph structure as claimed in claim 1, wherein the relationship between the superpixel i and the superpixel j in the adjacency matrix is expressed as:
where x_i denotes the feature of superpixel i, x_j the feature of superpixel j, N(x_j) the set of neighbor nodes of superpixel j, and γ an empirical coefficient.
4. The hyperspectral classification method based on graph structure as claimed in claim 1, wherein the classifying each superpixel in the hyperspectral image according to each second feature specifically includes:
and classifying each super pixel in the hyperspectral image by adopting a cross entropy function according to each second characteristic.
5. A hyperspectral classification system based on a graph structure, comprising:
the hyperspectral image segmentation module is used for segmenting the hyperspectral image into N super pixels; each super pixel comprises a plurality of pixels;
the adjacent matrix construction module of the graph is used for constructing an adjacent matrix of the graph according to the N super pixels; each element in the adjacency matrix represents a relationship between features of each of the superpixels;
the first feature extraction module is used for carrying out feature extraction on the hyperspectral image by utilizing a double-layer graph convolution algorithm according to the adjacency matrix to obtain first features of each super pixel;
the second feature extraction module is used for learning the first features of the super pixels by using a self-attention mechanism and obtaining the second features of the super pixels;
and the super-pixel classification module is used for classifying each super-pixel in the hyperspectral image according to each second characteristic.
6. The hyperspectral classification system based on graph structure as claimed in claim 5, wherein the hyperspectral image segmentation module specifically comprises:
the image dimension reduction unit is used for reducing dimension of the hyperspectral image by adopting an unsupervised principal component analysis algorithm;
a basic image obtaining unit for generating a basic image using the first principal component obtained by the dimension reduction;
and the image segmentation unit is used for segmenting the basic image by adopting a linear iterative clustering algorithm to obtain N super pixels.
7. The graph-structure-based hyperspectral classification system of claim 5 wherein the relationship between superpixel i and superpixel j in the adjacency matrix is represented as:
where x_i denotes the feature of superpixel i, x_j the feature of superpixel j, N(x_j) the set of neighbor nodes of superpixel j, and γ an empirical coefficient.
8. The hyperspectral classification system based on graph structure of claim 5, wherein the superpixel classification module specifically comprises:
and the super-pixel classification unit is used for classifying each super-pixel in the hyperspectral image by adopting a cross entropy function according to each second characteristic.
CN202110510184.6A 2021-05-11 2021-05-11 Hyperspectral classification method and hyperspectral classification system based on graph structure Active CN113239938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110510184.6A CN113239938B (en) 2021-05-11 2021-05-11 Hyperspectral classification method and hyperspectral classification system based on graph structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110510184.6A CN113239938B (en) 2021-05-11 2021-05-11 Hyperspectral classification method and hyperspectral classification system based on graph structure

Publications (2)

Publication Number Publication Date
CN113239938A CN113239938A (en) 2021-08-10
CN113239938B true CN113239938B (en) 2024-01-09

Family

ID=77133221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110510184.6A Active CN113239938B (en) 2021-05-11 2021-05-11 Hyperspectral classification method and hyperspectral classification system based on graph structure

Country Status (1)

Country Link
CN (1) CN113239938B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723255B (en) * 2021-08-24 2023-09-01 中国地质大学(武汉) Hyperspectral image classification method and storage medium
CN113920442B (en) * 2021-09-29 2024-06-18 中国人民解放军火箭军工程大学 Hyperspectral classification method combining graph structure and convolutional neural network
CN116883692A (en) * 2023-06-06 2023-10-13 中国地质大学(武汉) Spectrum feature extraction method, device and storage medium of multispectral remote sensing image
CN117671531A (en) * 2023-12-05 2024-03-08 吉林省鑫科测绘有限公司 Unmanned aerial vehicle aerial survey data processing method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695636A (en) * 2020-06-15 2020-09-22 北京师范大学 Hyperspectral image classification method based on graph neural network
CN112381144A (en) * 2020-11-13 2021-02-19 南京理工大学 Heterogeneous deep network method for non-European and European domain space spectrum feature learning
CN112633386A (en) * 2020-12-26 2021-04-09 北京工业大学 SACVAEGAN-based hyperspectral image classification method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9946931B2 (en) * 2015-04-20 2018-04-17 Los Alamos National Security, Llc Change detection and change monitoring of natural and man-made features in multispectral and hyperspectral satellite imagery
US10580170B2 (en) * 2015-10-19 2020-03-03 National Ict Australia Limited Spectral reconstruction

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695636A (en) * 2020-06-15 2020-09-22 北京师范大学 Hyperspectral image classification method based on graph neural network
CN112381144A (en) * 2020-11-13 2021-02-19 南京理工大学 Heterogeneous deep network method for non-European and European domain space spectrum feature learning
CN112633386A (en) * 2020-12-26 2021-04-09 北京工业大学 SACVAEGAN-based hyperspectral image classification method

Also Published As

Publication number Publication date
CN113239938A (en) 2021-08-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant