CN113902013A - Hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation


Info

Publication number
CN113902013A
Authority
CN
China
Prior art keywords
hyperspectral
neural network
convolutional neural
remote sensing
dimensional convolutional
Prior art date
Legal status
Pending
Application number
CN202111179286.0A
Other languages
Chinese (zh)
Inventor
国强
王亚妮
彭龙
戚连刚
Current Assignee
Heilongjiang Yugu Technology Co ltd
Original Assignee
Heilongjiang Yugu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Heilongjiang Yugu Technology Co ltd filed Critical Heilongjiang Yugu Technology Co ltd
Priority to CN202111179286.0A priority Critical patent/CN113902013A/en
Publication of CN113902013A publication Critical patent/CN113902013A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F 18/23: Clustering techniques
    • G06F 18/232: Non-hierarchical techniques
    • G06F 18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213: Non-hierarchical techniques using statistics or function optimisation with fixed number of clusters, e.g. K-means clustering
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/25: Fusion techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods


Abstract

The invention discloses a hyperspectral classification method based on a three-dimensional convolutional neural network and superpixel segmentation, which comprises the following steps: inputting a hyperspectral remote sensing image data set; performing dimensionality reduction on the images in the hyperspectral remote sensing image data set; extracting spatial features from the dimensionality-reduced hyperspectral remote sensing images with a superpixel segmentation method to obtain a spatial region division result graph containing the spatial features; constructing a space-spectrum feature combined data set from the spatial region division result graph and the images in the hyperspectral remote sensing image data set; constructing a test set and a training set from the space-spectrum feature combined data set; building a three-dimensional convolutional neural network architecture and training it to obtain a hyperspectral classifier; and classifying the hyperspectral remote sensing images. The method solves the problem of low classification accuracy caused by insufficient utilization of spatial features in existing hyperspectral classification, and improves the superpixel segmentation algorithm to solve the problem of insufficient division of hyperspectral regions.

Description

Hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a hyperspectral classification method based on a three-dimensional convolutional neural network and superpixel segmentation.
Background
At present, the rapid development of remote sensing technology is pushing it in new directions, and multispectral and hyperspectral remote sensing, as important branches of remote sensing technology, are also developing rapidly. Hyperspectral remote sensing imaging can collect multiband spectral data of different surface-feature spectra in more than 100 wave bands, and subsequent analysis of this data supports applications required by remote sensing technology such as target identification, target detection, mineral resource exploration and forest resource monitoring. The imaging quality of hyperspectral data and the internal detail it contains push remote sensing technology in a new development direction; dimensionality reduction of hyperspectral data to reduce the processing difficulty, spectral identification to determine the ground-object type, and spectral unmixing of mixed pixels are several important directions of hyperspectral processing technology. Researching and using these methods can achieve the complementarity of remote sensing techniques and improve the accuracy of target classification and identification.
Most existing hyperspectral classification methods rely only on hyperspectral spectral information and do not use the spatial distribution information of ground objects contained in the hyperspectral image. As a result, classification accuracy tends to be low in complex scenes where the phenomena of the same object showing different spectra and different objects sharing the same spectrum occur.
Disclosure of Invention
In order to solve the technical problems, the invention provides a hyperspectral classification method based on a three-dimensional convolutional neural network and superpixel segmentation.
The technical scheme for solving the technical problems is as follows:
a hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation is characterized by comprising the following steps:
s1, inputting a hyperspectral remote sensing image data set;
s2, performing dimensionality reduction on the image in the hyperspectral remote sensing image data set to obtain a dimensionality-reduced hyperspectral remote sensing image;
s3, based on a super-pixel segmentation method, extracting spatial features of the dimensionality-reduced hyperspectral remote sensing image to obtain a spatial region division result graph containing the spatial features;
s4, constructing a space-spectrum characteristic combined data set by utilizing the space region division result graph and the corresponding image in the hyperspectral remote sensing image data set;
s5, constructing a test set and a training set by utilizing the space-spectrum feature combined data set;
s6, building a three-dimensional convolutional neural network architecture, and training to obtain a hyperspectral classifier;
and S7, classifying the hyperspectral remote sensing images by using the trained hyperspectral classifier.
Further, the step S2 includes the following steps:
Analyzing the images in the hyperspectral remote sensing image data set along the spectral dimension with a principal component analysis method, extracting the first three principal components, and forming the dimensionality-reduced image.
Further, the step S3 includes the following steps:
s31, performing simple linear iterative clustering superpixel segmentation on the dimensionality-reduced hyperspectral remote sensing image to obtain a plurality of superpixel segmentation areas;
s32, carrying out region merging on the obtained multiple super-pixel segmentation regions by adopting fuzzy C-means clustering according to the following formula:
$$J=\sum_{k=1}^{C}\sum_{l=1}^{N}u_{kl}^{m}\left\lVert S_{l}-c_{k}\right\rVert^{2},\qquad S_{l}=\frac{1}{\left|R_{l}\right|}\sum_{h_{e}\in R_{l}}h_{e}\tag{1}$$
wherein l denotes a superpixel region, S_l represents the mean feature vector of the l-th superpixel region R_l, N represents the number of superpixel regions of the input image after simple linear iterative clustering, C is the initial number of clusters, m is a set weighting coefficient that sets the degree of importance of the cluster membership matrix u_kl, and h_e is a pixel of the image; the membership matrix u_kl and the cluster center c_k are given by formula (2) and formula (3):
$$u_{kl}=\left[\sum_{j=1}^{C}\left(\frac{\left\lVert S_{l}-c_{k}\right\rVert}{\left\lVert S_{l}-c_{j}\right\rVert}\right)^{\frac{2}{m-1}}\right]^{-1}\tag{2}$$
$$c_{k}=\frac{\sum_{l=1}^{N}u_{kl}^{m}S_{l}}{\sum_{l=1}^{N}u_{kl}^{m}}\tag{3}$$
and S33, obtaining a space region division result graph containing space characteristics.
Further, the step S4 includes the following steps:
Carrying out vector linear superposition of the spatial region division result graph and the corresponding image in the hyperspectral remote sensing image data set to obtain feature-combined sample data and form the space-spectrum feature combined data set.
Further, the step S5 includes the following steps:
Randomly selecting 20% of the space-spectrum feature combined data set to form the hyperspectral training set, and composing the remaining data into the hyperspectral test set.
Further, the S6 includes the following steps:
s61, building a three-dimensional convolution neural network;
s62, inputting the training set into a three-dimensional convolutional neural network for training until the three-dimensional convolutional neural network is converged to obtain an initial three-dimensional convolutional neural network model;
and S63, inputting test set data, and calibrating parameters in the initial three-dimensional convolutional neural network model by using the test set data to obtain the hyperspectral classifier.
Compared with the prior art, the invention has the beneficial effects that:
(1) By applying fuzzy C-means clustering to the superpixel segmentation result, the method performs further region division and the merging and fusion of related superpixels, which strengthens the spatial representativeness of the spatial features carried by the subsequent fusion result and overcomes the insufficient division of hyperspectral space by existing superpixel segmentation; at the same time, the further fusion of the superpixel results accelerates the training of the neural network so that it converges faster;
(2) The method builds a three-dimensional convolutional neural network and uses its spatial and spectral feature extraction capability to make full use of the spectral-dimension features and spatial features of the hyperspectral data; the slicing mode makes better use of the spectral features and the spatial feature information of the neighborhood, overcomes the limited classification accuracy caused by the failure of the prior art to use spectral and spatial features jointly, strengthens the spatial feature extraction capability of the network, and improves classification accuracy.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The concepts involved in the embodiments of the present disclosure are introduced below. Principal component analysis (PCA) is an unsupervised linear dimensionality-reduction algorithm used to analyze the principal components of data; it applies a linear transformation to the multi-dimensional variables of high-dimensional feature vectors and selects the few or dozens of feature vectors with the largest variance for identification and classification.
Simple linear iterative clustering (SLIC) expresses picture features by replacing a large number of pixels with a small number of superpixels: irregular pixel blocks with a certain visual significance, formed by adjacent pixels with similar texture, color, brightness and other features, which greatly reduces the complexity of subsequent image processing.
The fuzzy C-means clustering algorithm (FCMA or FCM) optimizes an objective function to obtain the degree of membership of each sample point to every cluster center, thereby determining the class of each sample point and automatically classifying the sample data.
The invention aims to provide a hyperspectral classification method based on a three-dimensional convolutional neural network and superpixel segmentation, which solves the problem of low classification accuracy caused by insufficient utilization of spatial features in existing hyperspectral classification, and improves the superpixel segmentation algorithm to solve the problem of insufficient division of hyperspectral regions.
Fig. 1 is a schematic flow chart of a hyperspectral classification method based on a three-dimensional convolutional neural network and superpixel segmentation in an embodiment of the present disclosure. As shown in FIG. 1, the invention discloses a hyperspectral classification method based on a three-dimensional convolutional neural network and superpixel segmentation, which comprises the following steps: s1, inputting a hyperspectral remote sensing image data set; s2, performing dimensionality reduction on the image in the hyperspectral remote sensing image data set to obtain a dimensionality-reduced hyperspectral remote sensing image; s3, based on a super-pixel segmentation method, extracting spatial features of the dimensionality-reduced hyperspectral remote sensing image to obtain a spatial region division result graph containing the spatial features; s4, constructing a space-spectrum characteristic combined data set by utilizing the space region division result graph and the image in the hyperspectral remote sensing image data set; s5, constructing a test set and a training set by utilizing the space-spectrum feature combined data set; s6, building a three-dimensional convolutional neural network architecture, and training to obtain a hyperspectral classifier; and S7, classifying the hyperspectral remote sensing images by using the trained hyperspectral classifier.
In some embodiments, the step S2 further includes the steps of:
Analyzing the images in the hyperspectral remote sensing image data set along the spectral dimension with a principal component analysis method, extracting the first three principal components, and forming the dimensionality-reduced image.
Here the first three principal components are extracted because they contain most of the spatial information and carry more than 95% of the main information of the hyperspectral image. The first ten components could also be selected as the source of spatial information, but this would introduce information redundancy, so the first three components are chosen as the data source for extracting spatial information and performing superpixel segmentation.
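As a minimal sketch of this dimensionality-reduction step (assuming, for illustration, that the hyperspectral cube is available as a NumPy array of shape height × width × bands and that scikit-learn is installed; neither library is named in the patent), principal component analysis along the spectral dimension could be implemented as follows:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_to_principal_components(cube: np.ndarray, n_components: int = 3) -> np.ndarray:
    """Reduce a hyperspectral cube (H, W, B) to its first n_components along the spectral axis."""
    h, w, bands = cube.shape
    flat = cube.reshape(-1, bands)               # one spectrum per row
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(flat)            # (H*W, n_components)
    print("explained variance ratio:", pca.explained_variance_ratio_.sum())
    return reduced.reshape(h, w, n_components)   # dimensionality-reduced image
```

The printed explained-variance ratio can be used to verify that the retained components carry the bulk of the spectral information, as discussed above.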
In some embodiments, the step S3 further includes the steps of:
s31, performing simple linear iterative clustering superpixel segmentation on the dimensionality-reduced hyperspectral remote sensing image to obtain a plurality of superpixel segmentation areas;
s32, carrying out region merging on the obtained multiple super-pixel segmentation regions by adopting fuzzy C-means clustering according to the following formula (1):
$$J=\sum_{k=1}^{C}\sum_{l=1}^{N}u_{kl}^{m}\left\lVert S_{l}-c_{k}\right\rVert^{2},\qquad S_{l}=\frac{1}{\left|R_{l}\right|}\sum_{h_{e}\in R_{l}}h_{e}\tag{1}$$
wherein l denotes a superpixel region, S_l represents the mean feature vector of the l-th superpixel region R_l, N represents the number of superpixel regions of the input image after simple linear iterative clustering, C is the initial number of clusters, m is a set weighting coefficient that sets the degree of importance of the cluster membership matrix u_kl, and h_e is a pixel of the image; the membership matrix u_kl and the cluster center c_k are given by formula (2) and formula (3):
$$u_{kl}=\left[\sum_{j=1}^{C}\left(\frac{\left\lVert S_{l}-c_{k}\right\rVert}{\left\lVert S_{l}-c_{j}\right\rVert}\right)^{\frac{2}{m-1}}\right]^{-1}\tag{2}$$
$$c_{k}=\frac{\sum_{l=1}^{N}u_{kl}^{m}S_{l}}{\sum_{l=1}^{N}u_{kl}^{m}}\tag{3}$$
and S33, obtaining a space region division result graph containing space characteristics.
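A compact sketch of steps S31 to S33 is given below. It assumes scikit-image's SLIC implementation for the superpixel step and a small hand-written fuzzy C-means loop over the superpixel mean vectors following formulas (1) to (3); the parameter values (number of superpixels, number of clusters, m = 2) are illustrative choices rather than values specified by the patent:

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_region_merging(reduced_img, n_segments=500, n_clusters=16,
                              m=2.0, n_iter=100, seed=0):
    """S31: SLIC superpixel segmentation of the dimensionality-reduced image;
    S32: region merging of the superpixels with fuzzy C-means (formulas (1)-(3));
    S33: return the spatial region division result graph."""
    # S31 -- simple linear iterative clustering on the 3-channel reduced image (values in [0, 1])
    labels = slic(reduced_img, n_segments=n_segments, compactness=10.0, start_label=0)
    n_regions = labels.max() + 1
    # S_l: mean feature vector of each superpixel region R_l
    S = np.stack([reduced_img[labels == l].mean(axis=0) for l in range(n_regions)])

    # S32 -- fuzzy C-means over the superpixel means
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, n_regions))
    u /= u.sum(axis=0)                                            # membership matrix u_kl
    for _ in range(n_iter):
        c = (u**m @ S) / (u**m).sum(axis=1, keepdims=True)        # formula (3): cluster centers
        d = np.linalg.norm(S[None, :, :] - c[:, None, :], axis=2) + 1e-12
        u = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)  # formula (2)

    # S33 -- paint each superpixel's cluster index back onto the pixel grid
    return np.argmax(u, axis=0)[labels]
```

The returned two-dimensional label image plays the role of the spatial region division result graph that is consumed in step S4.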
In some embodiments, the step S4 further includes the steps of:
Carrying out vector linear superposition of the spatial region division result graph and the corresponding image in the hyperspectral remote sensing image data set to obtain feature-combined sample data and form the space-spectrum feature combined data set.
The vector linear superposition refers to superposing the image of the hyperspectral remote sensing image data set with the spatial region division result graph containing the spatial features: for example, if the image dimension of the original hyperspectral remote sensing image data set is 340 × 640 × 201, the image dimension after adding the spatial features of the first three principal components becomes 340 × 640 × 204. By exploiting the abstraction capability of the three-dimensional convolutional neural network, superposing the original hyperspectral remote sensing image with the spatial features extracted from each component enriches the data volume and information content of the original hyperspectral image and improves the training efficiency and classification accuracy of the three-dimensional convolutional neural network.
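For example (a sketch only; the 340 × 640 × 201 shape is the example dimension given above, and the three additional channels are assumed to be the spatial maps derived from the first three principal components), the vector linear superposition amounts to a channel-wise concatenation:

```python
import numpy as np

def build_joint_dataset(hsi_cube: np.ndarray, spatial_maps: np.ndarray) -> np.ndarray:
    """Concatenate the original hyperspectral cube (H, W, B) with the spatial
    feature maps (H, W, K) along the spectral axis, giving (H, W, B + K)."""
    assert hsi_cube.shape[:2] == spatial_maps.shape[:2]
    return np.concatenate([hsi_cube, spatial_maps.astype(hsi_cube.dtype)], axis=-1)

# e.g. a (340, 640, 201) cube plus (340, 640, 3) spatial maps gives (340, 640, 204)
```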
In some embodiments, the step S5 further includes the steps of:
Randomly selecting 20% of the space-spectrum feature combined data set to form the hyperspectral training set, and composing the remaining data into the hyperspectral test set.
Here the hyperspectral training proportion is generally kept around 20%: if a large training proportion is adopted, overfitting easily occurs, that is, the test accuracy is very high but the generalization capability is weak; in addition, a small proportion of training samples helps to further improve the generalization capability of the classifier. In the present invention sliced data is input, so a large amount of data is already fed into training, giving the classifier a strong classification capability without requiring a large amount of training data.
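A sketch of the 20%/80% split described above (assuming the labelled samples have already been assembled into feature and label arrays; the function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def split_train_test(features: np.ndarray, labels: np.ndarray,
                     train_ratio: float = 0.2, seed: int = 0):
    """Randomly place train_ratio of the samples in the training set and
    the remaining samples in the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_train = int(train_ratio * len(labels))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])
```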
In some embodiments, the S6 further comprises the following steps:
s61, building a three-dimensional convolution neural network;
s62, inputting the training set into a three-dimensional convolutional neural network for training until the three-dimensional convolutional neural network is converged to obtain an initial three-dimensional convolutional neural network model; in a specific embodiment, the input sample for training is sliced data, and the slice size is 19 × 19; the training frequency is initially set to be 100epoch, and if the relevant parameters are not converged, the training is continued until the parameters of the three-dimensional convolutional neural network are converged;
in the past, classifier training is generally carried out by inputting one-dimensional hyperspectral images, namely 1 multiplied by 204 data, and the classifier can be trained well only by needing more training data; slicing means slicing the periphery of an original data point, that is, the size of input training data is 19 × 19 × 204, so that spatial features in hyperspectrum can be further extracted and utilized, and the size of the slice is generally adjusted according to actual needs.
And S63, inputting test set data, and calibrating parameters in the initial three-dimensional convolutional neural network model by using the test set data to obtain the hyperspectral classifier.
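A minimal sketch of the slicing used in S62 and of one possible three-dimensional convolutional architecture follows. The patent fixes only the input slice size (19 × 19 over 204 channels); the layer counts, kernel sizes and channel widths below, and the use of PyTorch, are illustrative assumptions rather than the architecture claimed:

```python
import numpy as np
import torch
import torch.nn as nn

def extract_slice(cube: np.ndarray, row: int, col: int, size: int = 19) -> np.ndarray:
    """Cut a size x size x B patch centred on pixel (row, col); borders are zero-padded."""
    pad = size // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    return padded[row:row + size, col:col + size, :]   # e.g. (19, 19, 204)

class Simple3DCNN(nn.Module):
    """Illustrative 3-D CNN: two Conv3d blocks followed by a linear classifier head."""
    def __init__(self, n_classes: int, n_bands: int = 204, patch: int = 19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.MaxPool3d((4, 2, 2)),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
            nn.MaxPool3d((4, 2, 2)),
        )
        with torch.no_grad():                            # infer the flattened feature size
            n_flat = self.features(torch.zeros(1, 1, n_bands, patch, patch)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):                                # x: (batch, 1, bands, 19, 19)
        return self.classifier(self.features(x).flatten(1))
```

Training then proceeds as described in S62: batches of slices are fed with a cross-entropy loss for an initial 100 epochs, continuing until the network parameters converge, after which the test set is used to calibrate the model as in S63.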
The hyperspectral classification method based on a three-dimensional convolutional neural network and superpixel segmentation provided by the embodiments of the present disclosure can achieve the following technical effects. The hyperspectral remote sensing image is reduced in dimensionality to obtain an image containing the principal components, and superpixel segmentation is applied to it; fuzzy C-means clustering then performs further region division and the merging and fusion of related superpixels on the superpixel segmentation result, which strengthens the spatial representativeness of the spatial features carried by the subsequent fusion result, overcomes the insufficient division of hyperspectral space by existing superpixel segmentation, and, by further fusing the superpixel results, accelerates the training of the neural network so that it converges faster. The method also builds a three-dimensional convolutional neural network and uses its spatial and spectral feature extraction capability to make full use of the spectral-dimension features and spatial features of the hyperspectral data; the slicing mode makes better use of the spectral features and the spatial feature information of the neighborhood, overcomes the limited classification accuracy caused by the failure of the prior art to use spectral and spatial features jointly, strengthens the spatial feature extraction capability of the network, and improves classification accuracy.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation is characterized by comprising the following steps:
s1, inputting a hyperspectral remote sensing image data set;
s2, performing dimensionality reduction on the image in the hyperspectral remote sensing image data set to obtain a dimensionality-reduced hyperspectral remote sensing image;
s3, based on a super-pixel segmentation method, extracting spatial features of the dimensionality-reduced hyperspectral remote sensing image to obtain a spatial region division result graph containing the spatial features;
s4, constructing a space-spectrum characteristic combined data set by utilizing the space region division result graph and the corresponding image in the hyperspectral remote sensing image data set;
s5, constructing a test set and a training set by utilizing the space-spectrum feature combined data set;
s6, building a three-dimensional convolutional neural network architecture, and training to obtain a hyperspectral classifier;
and S7, classifying the hyperspectral remote sensing images by using the trained hyperspectral classifier.
2. The hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation according to claim 1, wherein the step S2 comprises the following steps:
Analyzing the images in the hyperspectral remote sensing image data set along the spectral dimension with a principal component analysis method, extracting the first three principal components, and forming the dimensionality-reduced image.
3. The hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation according to claim 1, wherein the step S3 comprises the following steps:
s31, performing simple linear iterative clustering superpixel segmentation on the dimensionality-reduced hyperspectral remote sensing image to obtain a plurality of superpixel segmentation areas;
s32, carrying out region merging on the obtained multiple super-pixel segmentation regions by adopting fuzzy C-means clustering according to the following formula:
$$J=\sum_{k=1}^{C}\sum_{l=1}^{N}u_{kl}^{m}\left\lVert S_{l}-c_{k}\right\rVert^{2},\qquad S_{l}=\frac{1}{\left|R_{l}\right|}\sum_{h_{e}\in R_{l}}h_{e}\tag{1}$$
wherein l denotes a superpixel region, S_l represents the mean feature vector of the l-th superpixel region R_l, N represents the number of superpixel regions of the input image after simple linear iterative clustering, C is the initial number of clusters, m is a set weighting coefficient that sets the degree of importance of the cluster membership matrix u_kl, and h_e is a pixel of the image; the membership matrix u_kl and the cluster center c_k are given by formula (2) and formula (3):
$$u_{kl}=\left[\sum_{j=1}^{C}\left(\frac{\left\lVert S_{l}-c_{k}\right\rVert}{\left\lVert S_{l}-c_{j}\right\rVert}\right)^{\frac{2}{m-1}}\right]^{-1}\tag{2}$$
$$c_{k}=\frac{\sum_{l=1}^{N}u_{kl}^{m}S_{l}}{\sum_{l=1}^{N}u_{kl}^{m}}\tag{3}$$
and S33, obtaining a space region division result graph containing space characteristics.
4. The hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation according to claim 1, wherein the step S4 comprises the following steps:
Carrying out vector linear superposition of the spatial region division result graph and the corresponding image in the hyperspectral remote sensing image data set to obtain feature-combined sample data and form the space-spectrum feature combined data set.
5. The hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation according to claim 1, wherein the step S5 comprises the following steps:
Randomly selecting 20% of the space-spectrum feature combined data set to form the hyperspectral training set, and composing the remaining data into the hyperspectral test set.
6. The hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation according to claim 1, wherein the step S6 comprises the following steps:
s61, building a three-dimensional convolutional neural network architecture;
s62, inputting a training set into a three-dimensional convolutional neural network for training until the three-dimensional convolutional neural network is converged to obtain an initial three-dimensional convolutional neural network model;
and S63, inputting test set data, and calibrating parameters in the initial three-dimensional convolutional neural network model by using the test set data to obtain the hyperspectral classifier.
CN202111179286.0A 2021-10-09 2021-10-09 Hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation Pending CN113902013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111179286.0A CN113902013A (en) 2021-10-09 2021-10-09 Hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111179286.0A CN113902013A (en) 2021-10-09 2021-10-09 Hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation

Publications (1)

Publication Number Publication Date
CN113902013A true CN113902013A (en) 2022-01-07

Family

ID=79190867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111179286.0A Pending CN113902013A (en) 2021-10-09 2021-10-09 Hyperspectral classification method based on three-dimensional convolutional neural network and superpixel segmentation

Country Status (1)

Country Link
CN (1) CN113902013A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117746079A (en) * 2023-11-15 2024-03-22 中国地质大学(武汉) Clustering prediction method, system, storage medium and equipment for hyperspectral image
CN117746079B (en) * 2023-11-15 2024-05-14 中国地质大学(武汉) Clustering prediction method, system, storage medium and equipment for hyperspectral image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination