CN117934975A - Unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution - Google Patents

Unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution

Info

Publication number
CN117934975A
CN117934975A
Authority
CN
China
Prior art keywords
module
convolution
graph
encoder
image
Prior art date
Legal status
Granted
Application number
CN202410328183.3A
Other languages
Chinese (zh)
Other versions
CN117934975B (en)
Inventor
Xu Kai (徐凯)
Zhu Zhou (朱洲)
Wang Anling (汪安铃)
Pan Ruhang (潘如杭)
Current Assignee
Anhui University
Original Assignee
Anhui University
Priority date
Filing date
Publication date
Application filed by Anhui University
Priority to CN202410328183.3A
Publication of CN117934975A
Application granted
Publication of CN117934975B
Legal status: Active


Classifications

    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06N 3/042: Knowledge-based neural networks; logical representations of neural networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/084: Learning methods; backpropagation, e.g. using gradient descent
    • G06N 3/088: Learning methods; non-supervised learning, e.g. competitive learning
    • G06V 10/30: Image preprocessing; noise filtering
    • G06V 10/763: Clustering; non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/194: Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • Y02A 40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Remote Sensing (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution. Compared with the prior art, it addresses the difficulty of accurately classifying hyperspectral images of complex scenes when no labels are available. The method comprises the following steps: denoising preprocessing of the hyperspectral image based on relative total variation; constructing a joint model of a total-variation-regularization-guided graph convolution module and a spatial-spectral autoencoder module; training this joint model; and obtaining the unsupervised classification result of the hyperspectral image. The method targets the lack of sample labels in hyperspectral image classification: a total variation regularization term lets the graph convolutional network preserve spatial smoothness and consistency during learning, and the spatial-spectral autoencoder module extracts local spatial-spectral context, so the model remains robust and generalizes well in complex unsupervised classification tasks.

Description

Unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution
Technical Field
The invention relates to the technical field of hyperspectral remote sensing image processing, and in particular to an unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution.
Background
In recent years, with the rapid development of hyperspectral imaging technology and the continuous expansion of its application fields, hyperspectral images have been widely and deeply applied in agriculture, environmental monitoring, geological exploration, and other fields. Their distinguishing feature is that information from many spectral bands provides extremely rich surface detail and spectral characteristics, bringing unprecedented information depth to remote sensing image analysis. However, while hyperspectral images show great potential in a variety of fields, their processing and analysis still face many challenges, one of the most prominent being the problem of unsupervised classification.
In the field of hyperspectral image classification, traditional methods rely mainly on supervised learning, i.e. a large number of labeled samples are required to train the classifier. However, obtaining such large-scale labeled samples is not only time-consuming and laborious; in some special scenarios it is difficult to obtain sufficient label information at all. To cope with this dilemma, unsupervised learning is becoming an important approach to hyperspectral image classification. Unsupervised methods do not depend on pre-labeled samples but classify images by mining the inherent structure and characteristics of the data, and can thus adapt well to different scenes and application requirements.
However, current unsupervised methods for hyperspectral image classification still have problems to solve. First, since hyperspectral images have high dimensionality and complex spectral features, conventional feature extraction methods struggle to fully mine the latent information of the image. Second, noise, illumination variation, and the similarity between different categories make accurate unsupervised classification difficult. To overcome these problems, a hyperspectral image classification method that is insensitive to noise and adapts to complex scenes is needed, so as to better meet the requirements of practical application scenarios. This involves introducing advanced techniques such as deep learning and feature embedding, and performing more detailed and accurate analysis of large-scale hyperspectral image data.
Disclosure of Invention
The invention aims to overcome the limitations of existing methods, improve the accuracy and robustness of unsupervised hyperspectral image classification, and provide an unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution.
In order to achieve the above object, the technical scheme of the present invention is as follows:
An unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution comprises the following steps:
11) Hyperspectral image denoising preprocessing based on relative total variation: obtain a hyperspectral remote sensing image of the region to be classified, and apply relative total variation denoising to it;
12) Construct a joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module: the unsupervised hyperspectral image classification model comprises three parts: first, a spatial-spectral autoencoder module for local context extraction and feature dimension compression; second, a total-variation-guided graph convolution module for global context extraction and feature fusion; third, a K-means algorithm module used only at test time to obtain the classification result;
13) Train the joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module: the hyperspectral image is segmented into image blocks, which are input into the joint model; spatial and spectral context is extracted first, global context is then modeled by the graph neural network, and model training is completed with the backpropagation algorithm;
14) Obtain the unsupervised classification result of the hyperspectral image: apply the model to the whole hyperspectral image to obtain the unsupervised classification result.
Hyperspectral image acquisition and preprocessing comprise the following steps:
21) Obtain the hyperspectral remote sensing satellite image of the region to be classified;
22) Normalize the pixel values of the hyperspectral remote sensing satellite image;
23) Apply relative total variation denoising to the hyperspectral remote sensing satellite image;
24) Crop a 7×7 image block centered on each pixel of the hyperspectral remote sensing satellite image, zero-padding the boundary regions;
25) Export the processed image in TIFF (.tif) format;
26) Divide all image blocks into a training set and a validation set at a 7:3 ratio.
Constructing the joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module comprises the following steps:
31) Set up an unsupervised hyperspectral image classification model comprising a spatial-spectral autoencoder module for feature dimension compression and a total-variation-guided graph convolution module. The features output by the encoder of the autoencoder serve as the node features from which the graph structure is built for each sample; after graph embedding, the node features are input into the graph convolution module to complete feature extraction; finally, at the test stage, the K-means algorithm is applied to the extracted features of each sample to obtain the final classification result;
32) Set up the spatial-spectral autoencoder module network structure, comprising an encoder and a corresponding decoder;
the network structure of the encoder comprises:
first 2D convolution: kernel size 3×3, stride 2, padding 0;
second 2D convolution: kernel size 3×3, stride 2, padding 0;
third 2D convolution: kernel size 1×1, stride 1, padding 0;
the network structure of the decoder comprises:
first 1D convolution: kernel size 1×3, stride 1, padding 1;
second 1D convolution: kernel size 1×3, stride 1, padding 1;
third 2D convolution: kernel size 1×1, stride 1, padding 0;
33) Set up the total-variation-guided graph convolution network structure; the specific steps are as follows:
331) Set up the graph embedding module structure:
Let $X \in \mathbb{R}^{N \times B}$, where $N$ is the number of graph nodes and $B$ is the dimension of the encoder output channel of the spatial-spectral autoencoder;
the input of the graph embedding module is $X$, and the adjacency matrix $A$ is calculated by the following formula:
$$A_{ij} = \exp\left(-\frac{\lVert X_{i,:} - X_{j,:} \rVert_2^2}{\sigma}\right)$$
where $\sigma$ denotes a smoothing coefficient, set to 0.5, and $\lVert \cdot \rVert_2$ denotes the L2 norm operation on the matrix.
332) Set up the graph convolution module network structure:
The graph convolution module accepts an undirected graph $G = (V, E)$, where $V$ denotes the vertex set and $E$ the edge set. The output of the graph embedding module serves as the input of the first graph convolution unit: the module input is denoted $H^{(0)} = X$ and the adjacency matrix $A$.
The output $H^{(1)}$ of the first graph convolution unit can be expressed as:
$$H^{(1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(0)} W^{(1)} + b^{(1)}\right)$$
where $\sigma(\cdot)$ is the activation function and $b^{(1)}$ is the bias; $\tilde{A} = A + I_N$, with $I_N$ the identity matrix; $W^{(1)}$ is a network-learnable parameter; and $\tilde{D}$ is calculated from the following formula:
$$\tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij}$$
The total-variation-guided graph convolution comprises two graph convolution units, so the output $Z$ of the total-variation-guided graph convolution as a whole can be expressed as:
$$Z = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(1)} W^{(2)} + b^{(2)}\right)$$
where $b^{(2)}$ and $W^{(2)}$ have the same meaning as $b^{(1)}$ and $W^{(1)}$; the meaning of every other symbol is unchanged.
After one forward propagation through the graph convolution module, the graph total variation regularization term loss $L_{TV}$ can be calculated as:
$$L_{TV} = \lambda_{TV} \sum_{i,j} A_{ij}\, \lVert Z_{i,:} - Z_{j,:} \rVert_1$$
where $\lambda_{TV}$ is the weight of the total variation loss, set to 0.01 by default; $Z_{i,:}$ denotes a slicing operation on the tensor (the feature row of node $i$); the meaning of every other symbol is unchanged.
34) Set the K-means algorithm parameters; the specific steps are as follows:
341) Randomly select $K$ cluster centers, denoted $\{\mu_1, \mu_2, \dots, \mu_K\}$; the input of the K-means algorithm is the output $Z$ of the total-variation-guided graph convolution module;
342) Whether classification is finished is judged by whether the loss function converges during iteration; the loss function is given by:
$$J = \sum_{i=1}^{N} \lVert Z_{i,:} - \mu_{c_i} \rVert_2^2$$
where the output $Z$ of the total-variation-guided graph convolution module stays unchanged during iteration, and $c_i$ denotes the cluster to which the $i$-th sample belongs;
343) Let $t$ be the iteration step, with $t \le T$; the maximum iteration number $T$ is set to 1000;
344) For each sample $Z_{i,:}$, assign it during each iteration to the closest cluster center:
$$c_i = \arg\min_{k \in \{1,\dots,K\}} \lVert Z_{i,:} - \mu_k \rVert_2^2$$
345) For each cluster center $\mu_k$, update its value during each iteration from the samples belonging to it:
$$\mu_k = \frac{1}{\lvert C_k \rvert} \sum_{i \in C_k} Z_{i,:}$$
where $C_k$ is the set of samples currently assigned to cluster $k$.
Training the joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module comprises the following steps:
41) Only the spatial-spectral autoencoder module and the total-variation-guided graph convolution module participate in gradient propagation and parameter updating during training; after the network completes one forward propagation, the total loss is calculated;
42) The gradient direction of the parameter update is determined by backpropagating the total loss, and the parameters are updated, completing one training iteration;
43) When the number of training rounds reaches the preset number, network training stops and the model parameters are saved.
Obtaining the unsupervised classification result of the hyperspectral image comprises the following steps:
51) For a processed hyperspectral image to be classified, input all of its image blocks into the model;
52) Define the spatial-spectral autoencoder module and the total-variation-guided graph convolution module, load the trained network parameters, and freeze the parameter updates of the network;
53) The number of nodes in the graph embedding module is fixed and equal to the batch size used for forward propagation during training, so the network also propagates forward with the same batch size at test time;
if the total number of image blocks is not divisible by the batch size, some samples are forward-propagated a second time, and the features extracted in the two passes are averaged;
54) After the features of all samples of the region to be classified are obtained, the sample features are iteratively clustered with K-means;
55) The K-class classification result of all samples of the region to be classified is obtained.
Advantageous effects
The invention is an unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution.
Compared with the prior art, it first introduces a relative total variation denoising operation, which effectively suppresses noise in the image and improves classification accuracy. Second, the hyperspectral image is jointly modeled by a graph convolutional network and a spatial-spectral autoencoder, which better captures the spatial and spectral information of the image and improves feature extraction. Finally, as an unsupervised learning method it avoids the laborious process of collecting large numbers of labeled samples, making it more flexible and suitable for different scenes and datasets. The invention therefore fills a gap among current unsupervised hyperspectral image classification methods and has high practical value and broad application prospects.
Aimed at the hyperspectral image classification problem, the method maintains spatial smoothness and consistency during learning, is insensitive to high noise and high dimensionality, requires no label annotation, and achieves high classification accuracy, robustness, and generality. It is expected to provide a more reliable hyperspectral image classification solution for agriculture, environmental monitoring, geological exploration, and other fields, and to promote the development and application of hyperspectral image processing technology.
Drawings
FIG. 1 is a flow chart of the unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution;
FIG. 2 is an overall block diagram of the method;
FIG. 3 is a frame diagram of the spatial-spectral autoencoder module of the present invention;
FIG. 4 is a block diagram of the total-variation-guided graph convolution module of the present invention.
Detailed Description
For a further understanding and appreciation of the structural features and advantages achieved by the present invention, the following description of presently preferred embodiments is provided in connection with the accompanying drawings:
As shown in FIG. 1, the unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution comprises the following steps:
First step, hyperspectral image denoising preprocessing based on relative total variation:
Obtain the hyperspectral remote sensing image of the region to be classified and normalize it, so that the model converges quickly and stably and classification accuracy improves; apply relative total variation denoising to the hyperspectral remote sensing satellite image; crop an image block of equal size centered on each pixel and divide the blocks into a training set and a validation set, so that during feature extraction each sample carries the context of its local spatial neighborhood. The specific steps are as follows:
(1) Apply relative total variation denoising to the hyperspectral image of the region to be classified; the specific steps are as follows:
(1-1) With pixel $p$ of the image as center, calculate the windowed total variation $D_x(p)$ in the horizontal direction and $D_y(p)$ in the vertical direction:
$$D_x(p) = \sum_{q \in R(p)} g_{p,q}\, \big|(\partial_x S)_q\big|, \qquad D_y(p) = \sum_{q \in R(p)} g_{p,q}\, \big|(\partial_y S)_q\big|$$
where $R(p)$ denotes the set of pixels in the window, whose size is 7×7; $q$ is a pixel within the window; $g_{p,q}$ is a weight coefficient; $S$ is the denoised image; $(\partial_x S)_q$ and $(\partial_y S)_q$ are the gradients of the denoised image at $q$ in the horizontal and vertical directions; and $|\cdot|$ denotes the absolute value.
The horizontal gradient $(\partial_x S)_q$ and the vertical gradient $(\partial_y S)_q$ are calculated respectively as:
$$(\partial_x S)_q = S(x_q + 1, y_q) - S(x_q, y_q), \qquad (\partial_y S)_q = S(x_q, y_q + 1) - S(x_q, y_q)$$
where $S(x, y)$ is the pixel value at the given location of the denoised image, and the unit offsets are taken in the horizontal and vertical directions respectively.
The weight coefficient $g_{p,q}$ is calculated as:
$$g_{p,q} \propto \exp\left(-\frac{(x_p - x_q)^2 + (y_p - y_q)^2}{2\sigma^2}\right)$$
where $\exp$ denotes the exponential operation; $(x_p - x_q)$ and $(y_p - y_q)$ are the offsets of $q$ relative to $p$ in the horizontal and vertical directions; and $\sigma$ is a smoothing coefficient, set to 0.5.
(1-2) With pixel $p$ of the image as center, calculate the windowed inherent variation $L_x(p)$ in the horizontal direction and $L_y(p)$ in the vertical direction:
$$L_x(p) = \Big|\sum_{q \in R(p)} g_{p,q}\,(\partial_x S)_q\Big|, \qquad L_y(p) = \Big|\sum_{q \in R(p)} g_{p,q}\,(\partial_y S)_q\Big|$$
where each symbol has the same meaning as in the windowed total variation.
(1-3) Output the denoised image; this process can be expressed as:
$$\arg\min_{S} \sum_{p} \left( (S_p - I_p)^2 + \lambda \left( \frac{D_x(p)}{L_x(p) + \varepsilon} + \frac{D_y(p)}{L_y(p) + \varepsilon} \right) \right)$$
where $I$ is the original hyperspectral image and $S$ the denoised image; $\lambda$ is a smoothing coefficient, set to 0.01; $\varepsilon$ is a constant preventing division by zero, set to 0.001; the meaning of every other symbol is unchanged.
(2) Normalize the pixel values of the hyperspectral remote sensing satellite image;
(3) Crop a 7×7 image block centered on each pixel of the hyperspectral remote sensing satellite image, zero-padding the boundary regions;
(4) Export the processed image in TIFF (.tif) format;
(5) Divide all image blocks into a training set and a validation set at a 7:3 ratio; a sketch of this patch-extraction pipeline is given below.
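A minimal sketch of steps (2) through (5), assuming the denoised cube is held as a NumPy array of shape (H, W, C); the function name, the min-max normalization form, and the random seed are assumptions, and the TIFF export step is omitted:

```python
import numpy as np

def extract_patches(cube: np.ndarray, size: int = 7, split: float = 0.7, seed: int = 0):
    """Normalize a denoised cube (H, W, C), cut a size-by-size zero-padded block
    around every pixel, and split the blocks 7:3 into training/validation sets."""
    cube = (cube - cube.min()) / (cube.max() - cube.min() + 1e-12)  # pixel normalization
    r = size // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)))                 # zero-fill boundaries
    h, w, _ = cube.shape
    blocks = np.stack([padded[i:i + size, j:j + size]               # one block per pixel
                       for i in range(h) for j in range(w)])
    idx = np.random.default_rng(seed).permutation(len(blocks))
    cut = int(split * len(blocks))
    return blocks[idx[:cut]], blocks[idx[cut:]]                     # train, validation
```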
Second step, construct the joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module:
Set up an unsupervised hyperspectral image classification model comprising a spatial-spectral autoencoder module network structure for feature dimension compression and a total-variation-guided graph convolution module network structure. The features output by the encoder of the autoencoder serve as node features from which the graph structure is built for each sample; after graph embedding, the node features are input into the total-variation-guided graph convolution module to complete feature extraction; finally, at the test stage, the K-means algorithm is applied to the extracted features of each sample to obtain the final classification result. As shown in FIG. 3, the spatial-spectral autoencoder module effectively extracts the spatial local context of a sample and compresses its spectral dimension, which markedly reduces the influence of noise and prevents the drop in classification accuracy caused by same-object-different-spectrum and same-spectrum-different-object confusion under the curse of dimensionality. As shown in FIG. 4, the total-variation-guided graph convolution module builds the global context relations among all samples; the introduction of the total variation regularization term further suppresses noise, making inter-class features sparse and intra-class features compact, so the final features of each sample are extracted effectively.
The method comprises the following specific steps:
(1) Set up the spatial-spectral autoencoder module network structure, comprising an encoder and a corresponding decoder; a PyTorch sketch of this module is given below;
the network structure of the encoder comprises:
first 2D convolution: kernel size 3×3, stride 2, padding 0;
second 2D convolution: kernel size 3×3, stride 2, padding 0;
third 2D convolution: kernel size 1×1, stride 1, padding 0;
the network structure of the decoder comprises:
first 1D convolution: kernel size 1×3, stride 1, padding 1;
second 1D convolution: kernel size 1×3, stride 1, padding 1;
third 2D convolution: kernel size 1×1, stride 1, padding 0;
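The following PyTorch sketch assembles these layers into the spatial-spectral autoencoder; the intermediate channel widths, the latent band count B, and the ReLU activations are assumptions, since the text only fixes kernel sizes, strides, and padding:

```python
import torch
import torch.nn as nn

class SpatialSpectralAE(nn.Module):
    """Sketch of the spatial-spectral autoencoder (channel widths assumed)."""
    def __init__(self, c_in: int, b_latent: int = 32):
        super().__init__()
        # Encoder: two strided 3x3 convs shrink a 7x7 patch to 1x1; a 1x1 conv maps to B channels
        self.encoder = nn.Sequential(
            nn.Conv2d(c_in, 64, kernel_size=3, stride=2, padding=0), nn.ReLU(),  # 7x7 -> 3x3
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=0), nn.ReLU(),    # 3x3 -> 1x1
            nn.Conv2d(64, b_latent, kernel_size=1, stride=1, padding=0),         # 1x1xB latent
        )
        # Decoder: two 1D convs (kernel 3, pad 1) along the spectral axis, then 1x1 conv back to C bands
        self.dec1d = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        )
        self.dec2d = nn.Conv2d(b_latent, c_in, kernel_size=1)   # spectral up-sampling B -> C

    def forward(self, x):                                 # x: (N, C, 7, 7)
        z = self.encoder(x)                               # (N, B, 1, 1)
        h = self.dec1d(z.squeeze(-1).transpose(1, 2))     # (N, 1, B): spectral local context
        h = h.transpose(1, 2).unsqueeze(-1)               # (N, B, 1, 1)
        recon = self.dec2d(h)                             # (N, C, 1, 1): centre-pixel spectrum
        return z.flatten(1), recon.flatten(1)             # latent (N, B), reconstruction (N, C)
```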
(2) Set up the total-variation-guided graph convolution network structure; the specific steps are as follows:
(2-1) Set up the graph embedding module structure:
The graph embedding module receives the features of a batch of samples, computes an adjacency matrix from the pairwise feature similarity within the batch, and completes the graph topology embedding; the feature of each node is the output of the encoder in the spatial-spectral autoencoder module.
Let $X \in \mathbb{R}^{N \times B}$, where $N$ is the number of graph nodes and $B$ is the dimension of the encoder output channel of the spatial-spectral autoencoder;
the input of the graph embedding module is $X$, and the adjacency matrix $A$ is calculated by:
$$A_{ij} = \exp\left(-\frac{\lVert X_{i,:} - X_{j,:} \rVert_2^2}{\sigma}\right)$$
where $\sigma$ denotes a smoothing coefficient, set to 0.5, and $\lVert \cdot \rVert_2$ denotes the L2 norm operation on the matrix.
(2-2) Set up the graph convolution module network structure:
The graph convolution module accepts an undirected graph $G = (V, E)$, where $V$ denotes the vertex set and $E$ the edge set. The output of the graph embedding module serves as the input of the first graph convolution unit: the module input is denoted $H^{(0)} = X$ and the adjacency matrix $A$.
The output $H^{(1)}$ of the first graph convolution unit can be expressed as:
$$H^{(1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(0)} W^{(1)} + b^{(1)}\right)$$
where $\sigma(\cdot)$ is the activation function and $b^{(1)}$ is the bias; $\tilde{A} = A + I_N$, with $I_N$ the identity matrix; $W^{(1)}$ is a network-learnable parameter; and $\tilde{D}$ is calculated from the following formula:
$$\tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij}$$
The total-variation-guided graph convolution comprises two graph convolution units, so the output $Z$ of the total-variation-guided graph convolution as a whole can be expressed as:
$$Z = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(1)} W^{(2)} + b^{(2)}\right)$$
where $b^{(2)}$ and $W^{(2)}$ have the same meaning as $b^{(1)}$ and $W^{(1)}$; the meaning of every other symbol is unchanged.
After one forward propagation through the graph convolution module, the graph total variation regularization term loss $L_{TV}$ can be calculated as:
$$L_{TV} = \lambda_{TV} \sum_{i,j} A_{ij}\, \lVert Z_{i,:} - Z_{j,:} \rVert_1$$
where $\lambda_{TV}$ is the weight of the total variation loss, set to 0.01 by default; $Z_{i,:}$ denotes a slicing operation on the tensor (the feature row of node $i$); the meaning of every other symbol is unchanged.
(3) Set the K-means algorithm parameters:
A given set of sample features is divided into $K$ clusters, and the center point of each cluster is continually updated during the division; the value of $K$ is set to the number of classification categories;
(3-1) Randomly select $K$ cluster centers, denoted $\{\mu_1, \mu_2, \dots, \mu_K\}$; the input of the K-means algorithm is the output $Z$ of the total-variation-guided graph convolution module;
(3-2) Whether classification is finished is judged by whether the loss function converges during iteration; the loss function is given by:
$$J = \sum_{i=1}^{N} \lVert Z_{i,:} - \mu_{c_i} \rVert_2^2$$
where $Z$ stays unchanged during iteration, and $c_i$ denotes the cluster to which the $i$-th sample belongs;
(3-3) Let $t$ be the iteration step, with $t \le T$; the maximum iteration number $T$ is set to 1000;
(3-4) For each sample $Z_{i,:}$, assign it during each iteration to the closest cluster center:
$$c_i = \arg\min_{k \in \{1,\dots,K\}} \lVert Z_{i,:} - \mu_k \rVert_2^2$$
(3-5) For each cluster center $\mu_k$, update its value during each iteration from the samples belonging to it:
$$\mu_k = \frac{1}{\lvert C_k \rvert} \sum_{i \in C_k} Z_{i,:}$$
where $C_k$ is the set of samples currently assigned to cluster $k$.
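A plain K-means sketch over the frozen graph-convolution features, following steps (3-1) through (3-5); the convergence tolerance is an assumption:

```python
import torch

def kmeans(z: torch.Tensor, k: int, max_iter: int = 1000, tol: float = 1e-6):
    """K-means over fixed features z: (N, D); returns assignments and centres."""
    mu = z[torch.randperm(z.size(0))[:k]].clone()   # (3-1) random initial centres
    prev = None
    for _ in range(max_iter):                       # (3-3) at most max_iter steps
        assign = torch.cdist(z, mu).argmin(dim=1)   # (3-4) nearest-centre assignment
        loss = (z - mu[assign]).pow(2).sum()        # (3-2) sum of squared distances
        if prev is not None and (prev - loss).abs() < tol:
            break                                   # loss converged: classification done
        prev = loss
        for j in range(k):                          # (3-5) centre <- mean of its members
            members = z[assign == j]
            if members.numel() > 0:
                mu[j] = members.mean(dim=0)
    return assign, mu
```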
Third step, train the joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module:
Obtain the segmented hyperspectral remote sensing image blocks and input them into the spatial-spectral autoencoder module and the total-variation-guided graph convolution module for model training; a sketch of the training loop is given after this list.
The specific steps are as follows:
(1) Randomly sample points of the processed hyperspectral image, divide it into image blocks of size 7×7×C, and input the training and validation sets into the network, where C is the number of spectral bands of the hyperspectral image;
(2) Input a batch of image blocks into the spatial-spectral autoencoder module; the tensor output by the encoder has dimension 1×1×B and serves as the input both of the decoder of the autoencoder module and of the graph embedding module of the total-variation-guided graph convolution module, where B is the number of bands after compression by the encoder;
(3) The decoder receives the output of the encoder as input, upsamples the tensor of dimension 1×1×B to 1×1×C, and computes the L1 loss against the center sample point of the image block;
(4) The 1×1×B feature vector first passes through two 1D convolutions with stride 1 and kernel size greater than 1, which extract its local context along the spectral dimension; spectral upsampling is then realized by a 2D convolution with kernel size 1×1 and C kernels, completing the spectral reconstruction;
(5) The spatial-spectral autoencoder module parameters are updated by computing the L1 loss between the reconstructed spectrum and the original spectral vector of the patch center point; the spectral reconstruction loss $L_{rec}$ can be expressed as:
$$L_{rec} = \frac{1}{N} \sum_{i=1}^{N} \lVert \hat{s}_i - s_i \rVert_1$$
where $N$ equals the number of graph nodes; $s_i$ is the center-point spectral vector of the $i$-th image block and $\hat{s}_i$ its reconstruction; and $\lVert \cdot \rVert_1$ denotes the L1 norm operation;
(6) The graph embedding module receives the output of the encoder as input; after the compressed features $X$ are used to build the topology graph, the graph is input into the graph convolutional network for feature extraction, and the graph total variation regularization term loss $L_{TV}$ is calculated on the output;
(7) Only the spatial-spectral autoencoder module and the total-variation-guided graph convolution module participate in gradient propagation and parameter updating during training; after the network completes one forward propagation, the total loss $L$ is:
$$L = L_{rec} + L_{TV}$$
Training of the spatial-spectral autoencoder module and the total-variation-guided graph convolution module can proceed simultaneously: the spectral reconstruction loss $L_{rec}$ updates only the autoencoder module parameters, and the graph total variation regularization term loss $L_{TV}$ updates only the graph convolution module parameters, which can be expressed respectively as:
$$\theta_{AE} \leftarrow \theta_{AE} - \eta\, \nabla_{\theta_{AE}} L_{rec}$$
where $\theta_{AE}$ denotes the spatial-spectral autoencoder module parameters;
$$\theta_{GCN} \leftarrow \theta_{GCN} - \eta\, \nabla_{\theta_{GCN}} L_{TV}$$
where $\theta_{GCN}$ denotes the total-variation-guided graph convolution module parameters and $\eta$ the learning rate;
(8) The total loss $L$ determines the gradient direction of the parameter update; updating the parameters completes one training iteration;
(9) When the number of training rounds reaches the preset number, network training stops and the model parameters are saved.
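Putting the pieces together, a joint-training sketch under the loss split above, where $L_{rec}$ updates only the autoencoder and $L_{TV}$ only the graph convolution module; it reuses the hypothetical SpatialSpectralAE, build_adjacency, and graph_tv_loss sketches from earlier, and the Adam optimizer and learning rate are assumptions:

```python
import torch

def train_joint(ae, gcn, loader, epochs: int = 100, lr: float = 1e-3,
                sigma: float = 0.5, tv_weight: float = 0.01):
    """One optimizer per module: L_rec steps only the autoencoder,
    L_TV steps only the graph convolution module."""
    opt_ae = torch.optim.Adam(ae.parameters(), lr=lr)
    opt_gcn = torch.optim.Adam(gcn.parameters(), lr=lr)
    l1 = torch.nn.L1Loss()
    for _ in range(epochs):
        for patches in loader:                        # patches: (N, C, 7, 7)
            z, recon = ae(patches)
            center = patches[:, :, 3, 3]              # centre-pixel spectrum of each block
            loss_rec = l1(recon, center)              # spectral reconstruction loss L_rec
            a = build_adjacency(z.detach(), sigma)    # graph embedding on encoder output
            feats = gcn(z.detach(), a)                # detach: L_TV must not reach the AE
            loss_tv = graph_tv_loss(feats, a, tv_weight)
            opt_ae.zero_grad(); loss_rec.backward(); opt_ae.step()
            opt_gcn.zero_grad(); loss_tv.backward(); opt_gcn.step()
```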
Fourth step, obtain the unsupervised classification result of the hyperspectral image:
Obtain the hyperspectral remote sensing satellite image of the region to be classified, input all of its image blocks into the trained unsupervised classification model of total-variation-regularization-guided graph convolution, and propagate forward to obtain the classification map;
The specific steps are as follows:
(1) For the processed hyperspectral image of the region to be classified, divide all of its sample points into image blocks of size 7×7×C;
(2) Define the spatial-spectral autoencoder module and the total-variation-guided graph convolution module, load the trained network parameters, and freeze the parameter updates of the network;
(3) The number of nodes in the graph embedding module is fixed and equal to the batch size used for forward propagation during training, so the network also propagates forward with the same batch size at test time;
if the total number of image blocks is not divisible by the batch size, some samples are forward-propagated a second time, and the features extracted in the two passes are averaged;
(4) After the features of all samples of the region to be classified are obtained, the sample features are iteratively clustered with K-means;
(5) The K-class classification result of all samples of the region to be classified is obtained.
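One plausible reading of this test-time procedure, including re-propagating a final full window and averaging the doubly extracted features when the block count is not divisible by the batch size; it reuses the earlier hypothetical sketches and assumes at least one full batch of blocks:

```python
import torch

@torch.no_grad()
def classify(ae, gcn, patches: torch.Tensor, k: int, batch: int):
    """Freeze both modules, extract features batch-by-batch with a fixed graph size,
    then cluster all features with K-means (reuses kmeans/build_adjacency sketches)."""
    ae.eval(); gcn.eval()
    n = patches.size(0)
    rem = n % batch
    feats = torch.empty(n, gcn.w2.out_features)
    for start in range(0, n - rem, batch):        # full batches = fixed node count
        z, _ = ae(patches[start:start + batch])
        feats[start:start + batch] = gcn(z, build_adjacency(z))
    if rem:                                       # leftover blocks: re-propagate the
        z, _ = ae(patches[-batch:])               # last full window of `batch` samples
        tail = gcn(z, build_adjacency(z))
        feats[n - rem:] = tail[batch - rem:]      # fresh features for the new samples
        seen = slice(n - batch, n - rem)          # samples propagated twice are averaged
        feats[seen] = 0.5 * (feats[seen] + tail[:batch - rem])
    labels, _ = kmeans(feats, k)
    return labels
```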
The spatial-spectral autoencoder module and the total-variation-guided graph convolution module proposed by the method markedly reduce the influence of noise, prevent the drop in classification accuracy caused by same-object-different-spectrum and same-spectrum-different-object confusion under the curse of dimensionality, and integrate global context information for more effective feature extraction. Compared with existing unsupervised hyperspectral classification techniques, the method resolves the classification difficulties caused by heavy noise, complex scenes, and dimensional redundancy in hyperspectral remote sensing images.
The foregoing has shown and described the basic principles, principal features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the above embodiments and descriptions merely illustrate the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (5)

1. An unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution, characterized by comprising the following steps:
11) Hyperspectral image denoising preprocessing based on relative total variation: obtain a hyperspectral remote sensing image of the region to be classified, and apply relative total variation denoising to it;
12) Construct a joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module: the unsupervised hyperspectral image classification model comprises three parts: first, a spatial-spectral autoencoder module for local context extraction and feature dimension compression; second, a total-variation-guided graph convolution module for global context extraction and feature fusion; third, a K-means algorithm module used only at test time to obtain the classification result;
13) Train the joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module: the hyperspectral image is segmented into image blocks, which are input into the joint model; spatial and spectral context is extracted first, global context is then modeled by the graph neural network, and model training is completed with the backpropagation algorithm;
14) Obtain the unsupervised classification result of the hyperspectral image: apply the model to the whole hyperspectral image to obtain the unsupervised classification result.
2. The unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution according to claim 1, characterized in that the hyperspectral image acquisition and the denoising preprocessing based on relative total variation comprise the following steps:
21) Apply relative total variation denoising to the hyperspectral image of the region to be classified; the specific steps are as follows:
211) With pixel $p$ of the image as center, calculate the windowed total variation $D_x(p)$ in the horizontal direction and $D_y(p)$ in the vertical direction:
$$D_x(p) = \sum_{q \in R(p)} g_{p,q}\, \big|(\partial_x S)_q\big|, \qquad D_y(p) = \sum_{q \in R(p)} g_{p,q}\, \big|(\partial_y S)_q\big|$$
where $R(p)$ denotes the set of pixels in the window, whose size is 7×7; $q$ is a pixel within the window; $g_{p,q}$ is a weight coefficient; $S$ is the denoised image; $(\partial_x S)_q$ and $(\partial_y S)_q$ are the gradients of the denoised image at $q$ in the horizontal and vertical directions; and $|\cdot|$ denotes the absolute value;
the horizontal gradient $(\partial_x S)_q$ and the vertical gradient $(\partial_y S)_q$ are calculated respectively as:
$$(\partial_x S)_q = S(x_q + 1, y_q) - S(x_q, y_q), \qquad (\partial_y S)_q = S(x_q, y_q + 1) - S(x_q, y_q)$$
where $S(x, y)$ is the pixel value at the given location of the denoised image, and the unit offsets are taken in the horizontal and vertical directions respectively;
the weight coefficient $g_{p,q}$ is calculated as:
$$g_{p,q} \propto \exp\left(-\frac{(x_p - x_q)^2 + (y_p - y_q)^2}{2\sigma^2}\right)$$
where $\exp$ denotes the exponential operation; $(x_p - x_q)$ and $(y_p - y_q)$ are the offsets of $q$ relative to $p$ in the horizontal and vertical directions; and $\sigma$ is a smoothing coefficient, set to 0.5;
212) With pixel $p$ of the image as center, calculate the windowed inherent variation $L_x(p)$ in the horizontal direction and $L_y(p)$ in the vertical direction:
$$L_x(p) = \Big|\sum_{q \in R(p)} g_{p,q}\,(\partial_x S)_q\Big|, \qquad L_y(p) = \Big|\sum_{q \in R(p)} g_{p,q}\,(\partial_y S)_q\Big|$$
where each symbol has the same meaning as in the windowed total variation;
213) Output the denoised image; this process can be expressed as:
$$\arg\min_{S} \sum_{p} \left( (S_p - I_p)^2 + \lambda \left( \frac{D_x(p)}{L_x(p) + \varepsilon} + \frac{D_y(p)}{L_y(p) + \varepsilon} \right) \right)$$
where $I$ is the original hyperspectral image and $S$ the denoised image; $\lambda$ is a smoothing coefficient, set to 0.01; $\varepsilon$ is a constant preventing division by zero, set to 0.001; the meaning of every other symbol is unchanged.
3. The unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution according to claim 1, characterized in that constructing the joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module comprises the following steps:
31) Set up an unsupervised hyperspectral image classification model comprising a spatial-spectral autoencoder module and a total-variation-guided graph convolution module, wherein the features output by the encoder of the autoencoder serve as the node features from which the graph structure is built for each sample; after graph embedding, the node features are input into the graph convolution module to complete feature extraction; finally, at the test stage, the K-means algorithm is applied to the extracted features of each sample to obtain the final classification result;
32) Set up the spatial-spectral autoencoder network structure, comprising an encoder and a corresponding decoder;
the network structure of the encoder comprises:
first 2D convolution: kernel size 3×3, stride 2, padding 0;
second 2D convolution: kernel size 3×3, stride 2, padding 0;
third 2D convolution: kernel size 1×1, stride 1, padding 0;
the network structure of the decoder comprises:
first 1D convolution: kernel size 1×3, stride 1, padding 1;
second 1D convolution: kernel size 1×3, stride 1, padding 1;
third 2D convolution: kernel size 1×1, stride 1, padding 0;
33) Set up the total-variation-guided graph convolution network structure; the specific steps are as follows:
331) Set up the graph embedding module structure:
the graph embedding module receives the features of a batch of samples, computes an adjacency matrix from the pairwise feature similarity within the batch, and completes the graph topology embedding; the feature of each node is the output of the encoder in the spatial-spectral autoencoder module;
let $X \in \mathbb{R}^{N \times B}$, where $N$ is the number of graph nodes and $B$ is the dimension of the encoder output channel of the spatial-spectral autoencoder;
the input of the graph embedding module is $X$, and the adjacency matrix $A$ is calculated by:
$$A_{ij} = \exp\left(-\frac{\lVert X_{i,:} - X_{j,:} \rVert_2^2}{\sigma}\right)$$
where $\sigma$ denotes a smoothing coefficient, set to 0.5, and $\lVert \cdot \rVert_2$ denotes the L2 norm operation on the matrix;
332) Set up the graph convolution module network structure:
the graph convolution module accepts an undirected graph $G = (V, E)$, where $V$ denotes the vertex set and $E$ the edge set; the output of the graph embedding module serves as the input of the first graph convolution unit, denoted $H^{(0)} = X$, with adjacency matrix $A$;
the output $H^{(1)}$ of the first graph convolution unit can be expressed as:
$$H^{(1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(0)} W^{(1)} + b^{(1)}\right)$$
where $\sigma(\cdot)$ is the activation function and $b^{(1)}$ is the bias; $\tilde{A} = A + I_N$, with $I_N$ the identity matrix; $W^{(1)}$ is a network-learnable parameter; and $\tilde{D}$ is calculated from
$$\tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij};$$
the total-variation-guided graph convolution comprises two graph convolution units, so the output $Z$ of the total-variation-guided graph convolution module can be expressed as:
$$Z = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(1)} W^{(2)} + b^{(2)}\right)$$
where $b^{(2)}$ is the bias and $W^{(2)}$ a network-learnable parameter; the meaning of every other symbol is unchanged;
after one forward propagation through the graph convolution module, the graph total variation regularization term loss $L_{TV}$ can be calculated as:
$$L_{TV} = \lambda_{TV} \sum_{i,j} A_{ij}\, \lVert Z_{i,:} - Z_{j,:} \rVert_1$$
where $\lambda_{TV}$ is the weight of the total variation loss, set to 0.01; $Z_{i,:}$ denotes a slicing operation on the tensor; the meaning of every other symbol is unchanged;
34) Set the K-means algorithm parameters; the specific steps are as follows:
the core goal of K-means is to divide a given dataset into $K$ clusters while continually updating the center point of each cluster; the value of $K$ is set to the number of classification categories;
341) Randomly select $K$ cluster centers, denoted $\{\mu_1, \mu_2, \dots, \mu_K\}$; the input of the K-means algorithm is the output $Z$ of the total-variation-guided graph convolution module;
342) K-means is an iterative algorithm, and whether classification is finished can be judged by whether the loss function converges during iteration; the loss function is given by:
$$J = \sum_{i=1}^{N} \lVert Z_{i,:} - \mu_{c_i} \rVert_2^2$$
where the output $Z$ of the total-variation-guided graph convolution module stays unchanged during iteration, and $c_i$ denotes the cluster to which the $i$-th sample belongs;
343) Let $t$ be the iteration step, with $t \le T$; the maximum iteration number $T$ is set to 1000;
344) For each sample $Z_{i,:}$, assign it during each iteration to the closest cluster center:
$$c_i = \arg\min_{k \in \{1,\dots,K\}} \lVert Z_{i,:} - \mu_k \rVert_2^2;$$
345) For each cluster center $\mu_k$, update its value during each iteration from the samples belonging to it:
$$\mu_k = \frac{1}{\lvert C_k \rvert} \sum_{i \in C_k} Z_{i,:}$$
where $C_k$ is the set of samples currently assigned to cluster $k$.
4. The unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution according to claim 1, characterized in that training the joint model of the total-variation-regularization-guided graph convolution module and the spatial-spectral autoencoder module comprises the following steps:
41) Randomly sample points of the processed hyperspectral image, divide it into image blocks of size 7×7×C, and input the training and validation sets into the network, where C is the number of spectral bands of the hyperspectral image;
42) Input a batch of image blocks into the spatial-spectral autoencoder module; each image block of size 7×7×C is turned into a 1×1×B feature vector by the encoder. First, two 2D convolutions with stride greater than 1 extract the spatial local context of the center sample and realize the downsampling of the 7×7×C image block; the changes of height and width through each 2D convolution module can be expressed respectively as:
$$H_{out} = \left\lfloor \frac{H_{in} + 2 P_h - K_h}{S_h} \right\rfloor + 1$$
where $H_{in}$ is the height (longitudinal dimension) of the input and $H_{out}$ the height of the output; $K_h$, $P_h$, and $S_h$ denote the longitudinal kernel size, longitudinal padding, and longitudinal stride of the 2D convolution operation;
$$W_{out} = \left\lfloor \frac{W_{in} + 2 P_w - K_w}{S_w} \right\rfloor + 1$$
where $W_{in}$ is the width (transverse dimension) of the input and $W_{out}$ the width of the output; $K_w$, $P_w$, and $S_w$ denote the transverse kernel size, transverse padding, and transverse stride of the 2D convolution operation;
the tensor output by the encoder has dimension 1×1×B and serves as the input of the decoder and of the graph embedding module of the spatial-spectral autoencoder module, where B is the number of bands after compression by the encoder;
421) The decoder receives the output of the encoder as input, upsamples the tensor of dimension 1×1×B to 1×1×C, and computes the L1 loss against the center sample point of the image block;
4211) The feature vector first passes through two 1D convolutions with stride 1 and kernel size greater than 1, extracting its local context along the spectral dimension; spectral upsampling is then realized by a 2D convolution with kernel size 1×1 and C kernels, completing the spectral reconstruction;
4212) The spatial-spectral autoencoder module parameters are updated by computing the L1 loss between the reconstructed spectrum and the original spectral vector of the patch center point; the spectral reconstruction loss $L_{rec}$ can be expressed as:
$$L_{rec} = \frac{1}{N} \sum_{i=1}^{N} \lVert \hat{s}_i - s_i \rVert_1$$
where $N$ equals the number of graph nodes; $s_i$ is the center-point spectral vector of the $i$-th image block and $\hat{s}_i$ its reconstruction; and $\lVert \cdot \rVert_1$ denotes the L1 norm operation;
422) The graph embedding module receives the output of the encoder as input; after the compressed features $X$ are used to build the topology graph, the graph is input into the graph convolutional network for feature extraction, and the graph total variation regularization term loss $L_{TV}$ is calculated on the output;
43) During training, only the spatial-spectral autoencoder module and the total-variation-guided graph convolution module participate in gradient propagation and parameter updating; after the network completes one forward propagation, the total loss $L$ is:
$$L = L_{rec} + L_{TV}$$
Training of the two modules can proceed simultaneously: $L_{rec}$ updates only the autoencoder module parameters and $L_{TV}$ updates only the graph convolution module parameters, which can be expressed respectively as:
$$\theta_{AE} \leftarrow \theta_{AE} - \eta\, \nabla_{\theta_{AE}} L_{rec}, \qquad \theta_{GCN} \leftarrow \theta_{GCN} - \eta\, \nabla_{\theta_{GCN}} L_{TV}$$
where $\theta_{AE}$ denotes the spatial-spectral autoencoder module parameters, $\theta_{GCN}$ the total-variation-guided graph convolution module parameters, and $\eta$ the learning rate;
44) The total loss $L$ determines the gradient direction of the parameter update; updating the parameters completes one training iteration;
45) When the number of training rounds reaches the preset number, network training stops and the model parameters are saved.
5. The unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution according to claim 1, characterized in that obtaining the unsupervised classification result of the hyperspectral image comprises the following steps:
51) For a processed hyperspectral image to be classified, divide all of its sample points into image blocks of size 7×7×C;
52) Define the spatial-spectral autoencoder module and the total-variation-guided graph convolution module, load the trained network parameters, and freeze the parameter updates of the network;
53) The number of nodes in the graph embedding module is fixed and equal to the batch size used for forward propagation during training, so the network also propagates forward with the same batch size at test time;
if the total number of image blocks is not divisible by the batch size, some samples are forward-propagated a second time, and the features extracted in the two passes are averaged;
54) After the features of all samples to be classified are obtained, the sample features are iteratively clustered with K-means;
55) The K-class classification result of all samples to be classified is obtained.
CN202410328183.3A · Priority 2024-03-21 · Filed 2024-03-21 · Unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution · Active · CN117934975B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202410328183.3A | 2024-03-21 | 2024-03-21 | Unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution (granted as CN117934975B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202410328183.3A | 2024-03-21 | 2024-03-21 | Unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution (granted as CN117934975B)

Publications (2)

Publication Number | Publication Date
CN117934975A | 2024-04-26
CN117934975B | 2024-06-07

Family

ID: 90764987

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202410328183.3A | Unsupervised hyperspectral image classification method based on total-variation-regularization-guided graph convolution (Active; granted as CN117934975B) | 2024-03-21 | 2024-03-21

Country Status (1)

Country Link
CN (1) CN117934975B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985357A (en) * 2018-06-29 2018-12-11 湖南理工学院 The hyperspectral image classification method of set empirical mode decomposition based on characteristics of image
CN110084159A (en) * 2019-04-15 2019-08-02 西安电子科技大学 Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint
CN111161199A (en) * 2019-12-13 2020-05-15 中国地质大学(武汉) Spatial-spectral fusion hyperspectral image mixed pixel low-rank sparse decomposition method
CN111368691A (en) * 2020-02-28 2020-07-03 西南电子技术研究所(中国电子科技集团公司第十研究所) Unsupervised hyperspectral remote sensing image space spectrum feature extraction method
CN113743429A (en) * 2020-05-28 2021-12-03 中国人民解放军战略支援部队信息工程大学 Hyperspectral image classification method and device
CN111754593A (en) * 2020-06-28 2020-10-09 西安航空学院 Multi-hypothesis prediction hyperspectral image compressed sensing reconstruction method based on spatial-spectral combination
US20230114877A1 (en) * 2020-06-29 2023-04-13 Southwest Electronics Technology Research Institute ( China Electronics Technology Group Corporation Unsupervised Latent Low-Rank Projection Learning Method for Feature Extraction of Hyperspectral Images
CN113343942A (en) * 2021-07-21 2021-09-03 西安电子科技大学 Remote sensing image defect detection method
WO2023125456A1 (en) * 2021-12-28 2023-07-06 苏州大学 Multi-level variational autoencoder-based hyperspectral image feature extraction method
US20230252644A1 (en) * 2022-02-08 2023-08-10 Ping An Technology (Shenzhen) Co., Ltd. System and method for unsupervised superpixel-driven instance segmentation of remote sensing image
CN115331105A (en) * 2022-08-19 2022-11-11 西安石油大学 Hyperspectral image classification method and system
CN115565071A (en) * 2022-10-26 2023-01-03 深圳大学 Hyperspectral image transform network training and classifying method
CN115731135A (en) * 2022-11-24 2023-03-03 电子科技大学长三角研究院(湖州) Hyperspectral image denoising method and system based on low-rank tensor decomposition and adaptive graph total variation
CN116310459A (en) * 2023-03-28 2023-06-23 中国地质大学(武汉) Hyperspectral image subspace clustering method based on multi-view spatial spectrum combination
CN116403046A (en) * 2023-04-13 2023-07-07 中国人民解放军海军航空大学 Hyperspectral image classification device and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QICHAO LIU et al.: "CNN-Enhanced Graph Convolutional Network With Pixel- and Superpixel-Level Feature Fusion for Hyperspectral Image Classification", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 59, no. 10, 24 November 2020 (2020-11-24), pages 8657, XP011879869, DOI: 10.1109/TGRS.2020.3037361 *
ZHI GONG et al.: "Superpixel Spectral–Spatial Feature Fusion Graph Convolution Network for Hyperspectral Image Classification", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 60, 16 August 2022 (2022-08-16), pages 1-16, XP011919617, DOI: 10.1109/TGRS.2022.3198931 *
WANG Tingting et al.: "Hyperspectral Image Classification Based on Gabor Filtering and Cascaded GCN and CNN" (in Chinese), 应用科技 (Applied Science and Technology), vol. 50, no. 2, 6 June 2023 (2023-06-06), pages 79-85 *

Also Published As

Publication number Publication date
CN117934975B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
Chen et al. Single image shadow detection and removal based on feature fusion and multiple dictionary learning
CN111860612B (en) Unsupervised hyperspectral image hidden low-rank projection learning feature extraction method
CN110287849B (en) Lightweight depth network image target detection method suitable for raspberry pi
CN107657279B (en) Remote sensing target detection method based on small amount of samples
CN114022759B (en) Airspace finite pixel target detection system and method integrating neural network space-time characteristics
CN111340881B (en) Direct method visual positioning method based on semantic segmentation in dynamic scene
CN109871875B (en) Building change detection method based on deep learning
CN111368691B (en) Unsupervised hyperspectral remote sensing image space spectrum feature extraction method
CN111652892A (en) Remote sensing image building vector extraction and optimization method based on deep learning
CN108921120B (en) Cigarette identification method suitable for wide retail scene
CN107808138B (en) Communication signal identification method based on FasterR-CNN
CN105608471A (en) Robust transductive label estimation and data classification method and system
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN107169117B (en) Hand-drawn human motion retrieval method based on automatic encoder and DTW
CN110598613B (en) Expressway agglomerate fog monitoring method
CN116797787B (en) Remote sensing image semantic segmentation method based on cross-modal fusion and graph neural network
CN109635726B (en) Landslide identification method based on combination of symmetric deep network and multi-scale pooling
CN113033432A (en) Remote sensing image residential area extraction method based on progressive supervision
CN113269224A (en) Scene image classification method, system and storage medium
CN113192076A (en) MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction
CN115496950A (en) Neighborhood information embedded semi-supervised discrimination dictionary pair learning image classification method
CN111639697A (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN107292268A (en) The SAR image semantic segmentation method of quick ridge ripple deconvolution Structure learning model
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection
CN116129280B (en) Method for detecting snow in remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant