CN112115795B - Hyperspectral image classification method based on Triple GAN


Info

Publication number
CN112115795B
CN112115795B (application CN202010847535.8A)
Authority
CN
China
Prior art keywords
hyperspectral
sample image
image
classification
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010847535.8A
Other languages
Chinese (zh)
Other versions
CN112115795A (en)
Inventor
薛朝辉 (Xue Zhaohui)
郑晓菡 (Zheng Xiaohan)
Current Assignee
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority claimed from application CN202010847535.8A
Publication of CN112115795A
Application granted
Publication of CN112115795B
Legal status: Active

Classifications

    • G06V20/13 — Scenes; terrestrial scenes; satellite images
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2431 — Classification techniques relating to the number of classes; multiple classes
    • G06N3/045 — Neural network architectures; combinations of networks
    • G06N3/047 — Probabilistic or stochastic networks
    • G06N3/08 — Learning methods
    • G06V10/40 — Extraction of image or video features


Abstract

The invention relates to a hyperspectral image classification method based on Triple GAN. First, a principal component algorithm compresses the spectral features of the hyperspectral image to reduce the feature dimensionality and redundancy. Next, several preset candidate image features are extracted from the hyperspectral sample images. A Triple GAN classification network then classifies the hyperspectral sample images using each candidate feature in turn, and, by comparison with the actual classes of the hyperspectral sample images, the candidate feature yielding the highest classification accuracy is selected as the target image feature of the sample image set. Finally, with the target image features as input and the actual classes of the hyperspectral sample images as output, the Triple GAN classification network is trained to obtain an image classification model. In practical application, a target hyperspectral image can be classified by extracting its target image features and applying the image classification model, realizing efficient hyperspectral image classification and ensuring practical working efficiency.

Description

Hyperspectral image classification method based on Triple GAN
Technical Field
The invention relates to a hyperspectral image classification method based on Triple GAN, and belongs to the technical field of remote sensing image processing.
Background
Remote sensing refers to a mode of information acquisition in which a specific imaging instrument on an aerospace platform acquires electromagnetic spectrum segments of an observation target and images them, obtaining characteristic information about the observed object in many respects. Since modern remote sensing technology was put into use in the 1960s, it has played a great role in many fields and has become an important mark of a country's level of technological development and comprehensive strength. To further enhance humanity's capability to explore and develop earth resources, natural environments and extraterrestrial space, and to expand the means of monitoring abnormal climates on the earth's surface, governments have invested large amounts of resources in developing high-tech detection instruments with high spatial resolution and high spectral resolution, and hyperspectral remote sensing technology has developed accordingly. As a frontier of current remote sensing, hyperspectral remote sensing combines high spectral resolution with integrated imaging and spectroscopy, and represents a great breakthrough in the history of remote sensing technology.
A hyperspectral remote sensing image consists of dozens to hundreds of wave bands, and its spectral resolution is higher than that of multispectral remote sensing, reaching or even exceeding 10 nm. Hyperspectral remote sensing images can therefore provide finer spectral information than traditional remote sensing images, revealing many ground feature characteristics that exist only within a narrow spectral range and improving the ability of remote sensing technology to acquire ground feature information. When a spatial image of ground features is obtained, a continuous spectral curve containing rich spectral characteristics can be extracted at every pixel, and this fine spectral resolution favors accurate identification and classification of ground features. Hyperspectral remote sensing images are now widely applied in geological mapping, environmental monitoring, vegetation investigation, agricultural remote sensing, ocean remote sensing, atmospheric research and other fields, playing an increasingly important role, and research on processing hyperspectral image data has received wide attention from scholars at home and abroad.
In recent years, deep learning methods have developed greatly in supervised classification of hyperspectral remote sensing images. At present, five kinds of deep learning networks have been used for hyperspectral image classification: the Stacked AutoEncoder (SAE), the Deep Belief Network (DBN), the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), and the Generative Adversarial Network (GAN). Regarding GANs, Zhu et al. first applied the AC-GAN network structure to hyperspectral image classification, proposing 1D-GAN and 3D-GAN by changing the convolution structure of the network to classify spectral features and joint spatial-spectral features, respectively. Liu et al. used ACGAN as a spectral feature extractor, combined it with LBP-processed spatial features, and classified with a CNN, obtaining better results. Giebb modified the output structure of the discriminator of the generative adversarial network so that it no longer judges real versus fake but instead completes the image classification task directly, and fully demonstrated its effectiveness through experiments on spectral features and joint spatial-spectral features.
Regarding improved classification strategies, Ma et al. proposed obtaining as much information as possible from unlabeled samples by combining multi-decision labels with deep feature learning to improve the classification effect. The multi-decision labels are derived from local decision labels based on neighborhood-weighted information and global decision labels obtained by predicting similar samples through deep learning; combining the two, unlabeled samples with a high probability of belonging to the same class are screened and added to the training samples, after which deep learning performs spatial-spectral feature extraction and classification. Similarly, Li et al. proposed a pixel-level classification decision method in which the neighboring pixels and the central pixel of a neighborhood patch are put into a deep network for prediction, but the class of the central pixel is finally decided by voting over the predictions of all pixels in the whole patch. Liu et al. used a deep residual 3D-CNN to extract joint spatial-spectral features and mapped them, through a batch-trained network, to a space with stronger separability in which samples of the same class are distributed compactly and samples of different classes are far apart; a nearest-neighbor classifier then directly judges the class of a sample.
Semi-supervised methods can use unlabeled samples to assist classification and are an effective means of addressing the problems caused by limited labeled samples. Wu et al. proposed clustering the training samples with a constrained Dirichlet mixture model algorithm to generate proxy labels, then treating each hyperspectral pixel as a spectral sequence and inputting the sequences together with the proxy labels into a convolutional recurrent neural network for pre-training, and finally fine-tuning with a small number of real sample labels. This method obtains a good classification effect with limited labeled samples, and its performance exceeded the best semi-supervised classification algorithms of the time. Besides research on classification strategies, the development of generative models also provides new ideas for semi-supervised classification. He et al. first extracted the spatial-spectral features of hyperspectral images with a 3D bilateral filter and then trained a semi-supervised GAN on the extracted features; the semi-supervised process was realized by adding samples produced by the GAN generator to the training features and expanding the input dimensionality of the classifier. Zhan et al. proposed another GAN-based semi-supervised classification framework, HSGAN, in which unlabeled samples train a 1D-GAN that generates pseudo samples similar to real samples to extend the training set; it performs well with a small number of training samples.
Besides the above two strategies, transfer learning performs relatively reliably with a small number of labeled samples. Yang et al. modified a CNN into a two-branch network structure to extract spatial and spectral features respectively, and then fed the extracted features into a fully connected layer to generate joint spatial-spectral features for classification. Unlike traditional methods, when labeled samples are limited the bottom and middle layers of the network are pre-trained on other data sources and then transferred, and only a small number of labeled samples are used to train the top layers, effectively reducing the influence of overfitting.
It can be seen that although classification of hyperspectral images with deep learning under limited labeled samples is less studied, semi-supervised methods occupy a relatively large proportion of that work and deserve intensive study. Synthesizing the research progress at home and abroad, applying deep learning methods to the classification of hyperspectral remote sensing images is of great significance and can greatly raise the upper limit of hyperspectral image classification accuracy, and combining deep learning with semi-supervised ideas is expected to solve the problem of low classification accuracy caused by scarce labeled samples in actual production, further promoting the application of hyperspectral images in other fields. Nevertheless, some problems still need to be solved: (1) applying advanced deep learning network structures to the hyperspectral image classification problem, and (2) deep-learning-based semi-supervised classification of hyperspectral images.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a hyperspectral image classification method based on Triple GAN that adopts a brand-new design strategy, efficiently realizes the classification of hyperspectral images, and ensures practical working efficiency.
The invention adopts the following technical scheme to solve the above technical problem: a hyperspectral image classification method based on Triple GAN, which first executes the following steps A to F to obtain an image classification model, and then, for a target hyperspectral image, applies the image classification model and executes the following steps I to III to classify the target hyperspectral image;
step A, collect hyperspectral sample images corresponding to each preset actual class, construct a sample image set, and then enter step B;
step B, perform principal component analysis dimensionality reduction on each hyperspectral sample image in the sample image set, update each hyperspectral sample image in the sample image set, and then enter step C;
step C, for each hyperspectral sample image in the sample image set, extract the preset candidate image features, i.e. obtain the candidate image features corresponding to every hyperspectral sample image in the sample image set, and then enter step D;
step D, for each type of candidate image feature, take the candidate feature of every hyperspectral sample image in the sample image set as input and apply the Triple GAN classification network to classify the images, obtaining the network classification of each hyperspectral sample image based on that candidate feature; combining these with the actual classes of the hyperspectral sample images yields the classification accuracy of all hyperspectral sample images in the set for that candidate feature. After the classification accuracies for all candidate features have been obtained, enter step E;
step E, among the classification accuracies of all hyperspectral sample images in the sample image set based on the various candidate features, select the candidate feature with the highest classification accuracy as the target image feature of the sample image set, and then enter step F;
step F, with the target image features of the hyperspectral sample images in the sample image set as input and the actual classes of the hyperspectral sample images as output, train the Triple GAN classification network; the trained classification network forms the image classification model;
step I, perform principal component analysis dimensionality reduction on the target hyperspectral image, update the target hyperspectral image, and then enter step II;
step II, extract the target image features of the target hyperspectral image, and then enter step III;
and step III, with the target image features of the target hyperspectral image as input, apply the image classification model to classify the target hyperspectral image and obtain its classification result.
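The candidate-feature selection loop of steps D and E can be sketched as follows. This is a minimal illustration only: the `nearest_centroid_accuracy` stand-in and all function names are hypothetical placeholders for training and evaluating the actual Triple GAN classification network, which the patent does not reduce to a few lines.

```python
import numpy as np

def nearest_centroid_accuracy(X, y):
    # Stand-in for "train the classification network and measure accuracy":
    # fit one centroid per class, then score the same samples.
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[np.argmin(d, axis=1)]
    return (pred == y).mean()

def select_target_feature(candidates, y, train_and_eval=nearest_centroid_accuracy):
    # Steps D-E: score every candidate feature set against the actual classes
    # and keep the candidate with the highest classification accuracy.
    accuracies = {name: train_and_eval(X, y) for name, X in candidates.items()}
    best = max(accuracies, key=accuracies.get)
    return best, accuracies
```

In step F the network would then be retrained once more using only the selected feature.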
As a preferred technical scheme of the invention: in step C, the preset candidate image features comprise gray level co-occurrence matrix features, Gabor filtering features, morphological profile features and morphological attribute profile features; that is, the gray level co-occurrence matrix features, Gabor filtering features, morphological profile features and morphological attribute profile features corresponding to each hyperspectral sample image in the sample image set are obtained.
As a preferred technical scheme of the invention: in the step C, obtaining gray level co-occurrence matrix characteristics corresponding to the hyperspectral sample image according to the following steps from alpha 1 to alpha 9;
step alpha 1, performing graying processing on the hyperspectral sample image, updating the hyperspectral sample image, and then entering step alpha 2;
step α2, starting from each pixel with gray value m in the hyperspectral sample image, count the number of pixels at each distance δ having each gray value n, forming a statistical matrix of the distribution of pixel pairs with different gray levels in the hyperspectral sample image, and then enter step α3;
step α3, from the statistical matrix of gray-level pixel-pair distributions in the hyperspectral sample image, according to the following formula:

$$\mathrm{CON}=\sum_{m}\sum_{n}(m-n)^{2}\,p(m,n)$$

obtain the gray-level contrast CON corresponding to the hyperspectral sample image, where p(m, n) denotes the joint probability of a pixel with gray value m and a pixel with gray value n, and then enter step α4;
step α4, from the statistical matrix of gray-level pixel-pair distributions in the hyperspectral sample image, according to the following formula:

$$\mathrm{Dis}=\sum_{m}\sum_{n}\lvert m-n\rvert\,p(m,n)$$

obtain the gray-level dissimilarity Dis corresponding to the hyperspectral sample image, and then enter step α5;
step α5, from the statistical matrix of gray-level pixel-pair distributions in the hyperspectral sample image, according to the following formula:

$$\mathrm{Corre}=\sum_{m}\sum_{n}\frac{(m-\mu)(n-\mu)}{\sigma^{2}}\,p(m,n)$$

obtain the gray-level correlation Corre corresponding to the hyperspectral sample image, where μ denotes the mean and σ the standard deviation of all pixel gray values in the hyperspectral sample image, and then enter step α6;
step α6, from the statistical matrix of gray-level pixel-pair distributions in the hyperspectral sample image, according to the following formula:

$$\mathrm{Homo}=\sum_{m}\sum_{n}\frac{p(m,n)}{1+(m-n)^{2}}$$

obtain the gray-level homogeneity Homo corresponding to the hyperspectral sample image, and then enter step α7;
step α7, from the statistical matrix of gray-level pixel-pair distributions in the hyperspectral sample image, according to the following formula:

$$\mathrm{ASM}=\sum_{m}\sum_{n}p(m,n)^{2}$$

obtain the gray-level angular second moment ASM corresponding to the hyperspectral sample image, and then enter step α8;
step α8, from the gray-level angular second moment ASM corresponding to the hyperspectral sample image, according to the following formula:

$$\mathrm{Energy}=\sqrt{\mathrm{ASM}}$$

obtain the gray-level energy Energy corresponding to the hyperspectral sample image, and then enter step α9;
and step α9, construct the gray level co-occurrence matrix features of the hyperspectral sample image from the gray-level contrast CON, gray-level dissimilarity Dis, gray-level correlation Corre, gray-level homogeneity Homo, gray-level angular second moment ASM and gray-level energy Energy corresponding to the hyperspectral sample image.
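Steps α1 to α9 can be sketched in NumPy. This is an illustrative implementation under assumed conventions: a single pixel offset (Δrow, Δcol) plays the role of the distance δ, and the function name and dictionary keys are hypothetical.

```python
import numpy as np

def glcm_features(img, offset=(0, 1), levels=2):
    # Steps alpha2-alpha9: co-occurrence counts for one pixel offset, then the
    # six texture statistics of the patent (CON, Dis, Corre, Homo, ASM, Energy).
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[img[i, j], img[i + dy, j + dx]] += 1
    p = glcm / glcm.sum()                     # joint probability p(m, n)
    m, n = np.indices(p.shape)
    mu, sigma = img.mean(), img.std()         # mean / std of all pixel gray values
    feats = {
        "CON": ((m - n) ** 2 * p).sum(),          # gray-level contrast
        "Dis": (np.abs(m - n) * p).sum(),         # gray-level dissimilarity
        "Corre": ((m - mu) * (n - mu) * p).sum() / sigma ** 2,
        "Homo": (p / (1 + (m - n) ** 2)).sum(),   # gray-level homogeneity
        "ASM": (p ** 2).sum(),                    # angular second moment
    }
    feats["Energy"] = np.sqrt(feats["ASM"])       # Energy = sqrt(ASM)
    return feats
```

In practice a real extractor would aggregate such features over several offsets and directions; one offset is kept here for brevity.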
As a preferred technical scheme of the invention: in step C, for the pixel at each coordinate in the hyperspectral sample image, the real part of the two-dimensional Gabor filter kernel:

$$g(x,y;\lambda,\theta,\psi,\sigma,\gamma)=\exp\!\left(-\frac{x'^{2}+\gamma^{2}y'^{2}}{2\sigma^{2}}\right)\cos\!\left(2\pi\frac{x'}{\lambda}+\psi\right)$$

is applied to obtain the filter response of the pixel; completing this operation for every pixel in the hyperspectral sample image forms the Gabor filtering features corresponding to the hyperspectral sample image, where x denotes the abscissa and y the ordinate of the pixel's position in the hyperspectral sample image, x′ = x cos θ + y sin θ, y′ = −x sin θ + y cos θ, θ denotes the detection direction of the Gabor filter kernel, λ the wavelength of the Gabor filter kernel, ψ the phase parameter of the cosine factor of the Gabor filter kernel, σ the Gaussian standard deviation of the Gabor filter kernel, and γ the spatial aspect ratio of the Gabor filter kernel.
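The real part of the Gabor kernel above can be evaluated directly; the sketch below samples it on a small grid. Parameter defaults and function names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def gabor_real(x, y, lam, theta, psi, sigma, gamma):
    # Real part of the 2-D Gabor kernel:
    #   exp(-(x'^2 + gamma^2 y'^2) / (2 sigma^2)) * cos(2 pi x'/lam + psi)
    xp = x * np.cos(theta) + y * np.sin(theta)    # x' = x cos(theta) + y sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)   # y' = -x sin(theta) + y cos(theta)
    return np.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xp / lam + psi)

def gabor_kernel(ksize=9, lam=4.0, theta=0.0, psi=0.0, sigma=2.0, gamma=0.5):
    # Sample the kernel on a ksize x ksize grid centred at the origin; the
    # Gabor feature of a pixel is the convolution of the image with this kernel.
    half = ksize // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    return gabor_real(xs, ys, lam, theta, psi, sigma, gamma)
```

A feature bank is typically obtained by varying θ and λ over several orientations and scales.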
As a preferred technical scheme of the invention: in step C, for the pixel X at each coordinate in the hyperspectral sample image, the following formula is applied:

$$\mathrm{MP}(X)=\{\varphi_{d}^{(l)}(X),\ldots,\varphi_{d}^{(1)}(X),X,\gamma_{d}^{(1)}(X),\ldots,\gamma_{d}^{(l)}(X)\}$$

to obtain the morphological profile MP(X) corresponding to pixel X; completing this operation for every pixel in the hyperspectral sample image forms the morphological profile features corresponding to the hyperspectral sample image, where γ_d^(l)(X) denotes the result of applying an opening operation to pixel X l times using a structuring element of size d, and φ_d^(l)(X) denotes the result of applying a closing operation to pixel X l times using a structuring element of size d.
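A minimal NumPy sketch of the profile construction, under assumed simplifications: flat square structuring elements, one opening/closing per scale (increasing structuring-element sizes standing in for the repetition index l), and illustrative function names.

```python
import numpy as np

def _filter(img, d, op):
    # Flat grayscale erosion/dilation: sliding min/max over a d x d window.
    pad = d // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = op(padded[i:i + d, j:j + d])
    return out

def opening(img, d):
    return _filter(_filter(img, d, np.min), d, np.max)   # erosion then dilation

def closing(img, d):
    return _filter(_filter(img, d, np.max), d, np.min)   # dilation then erosion

def morphological_profile(img, sizes=(3, 5, 7)):
    # Stack closings (largest first), the original band, then openings,
    # mirroring MP(X) = {phi^(l), ..., X, ..., gamma^(l)}.
    layers = [closing(img, d) for d in reversed(sizes)] + [img] \
        + [opening(img, d) for d in sizes]
    return np.stack(layers)
```

The profile is computed per principal-component band; openings suppress bright structures smaller than the structuring element, closings suppress dark ones.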
As a preferred technical scheme of the invention: in step C, the morphological attribute profile features corresponding to the hyperspectral sample image I are obtained according to the following steps β1 to β4;
step β1, initialize v′ = 1, then enter step β2;
step β2, based on the preset gray-level thresholds corresponding to the morphological coarsening and refinement operations, for each connected component region in the hyperspectral sample image I, judge whether the gray-level attribute of the region is greater than the threshold of the v′-th operation: if so, retain the connected component region, realizing the morphological coarsening operation; otherwise, merge the connected component region into a surrounding connected component region whose gray-level attribute exceeds the threshold of the v′-th refinement operation, or cover the region with a mask, realizing the morphological refinement operation. After this operation has been completed for every connected component region in the hyperspectral sample image I, the results of the v′-th morphological refinement operation and the v′-th morphological coarsening operation on I are obtained; then enter step β3;
step β3, judge whether v′ equals v: if so, enter step β4; otherwise increment v′ by 1 and return to step β2;
step β4, for the hyperspectral sample image I, the following formula is applied:

$$\mathrm{AP}(I)=\{\varepsilon^{v}(I),\ldots,\varepsilon^{1}(I),I,\delta^{1}(I),\ldots,\delta^{v}(I)\}$$

to obtain the morphological attribute profile AP(I) corresponding to the hyperspectral sample image, where ε^v(I) denotes the result of v morphological refinement operations on the hyperspectral sample image I, and δ^v(I) denotes the result of v morphological coarsening operations on the hyperspectral sample image I.
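One attribute-thresholding pass of step β2 can be sketched as below, using region area as the gray-level attribute, 4-connected flood fill, and masking (rather than merging) for the refinement branch. This is an illustrative simplification with hypothetical names, not the patent's exact thinning/thickening operators.

```python
from collections import deque
import numpy as np

def area_filter(binary, min_area):
    # Keep only 4-connected foreground components whose area >= min_area;
    # smaller components are masked out (the refinement branch of step beta2).
    binary = np.asarray(binary, dtype=bool)
    seen = np.zeros_like(binary)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and not seen[si, sj]:
                comp, queue = [], deque([(si, sj)])
                seen[si, sj] = True
                while queue:                      # flood fill one component
                    i, j = queue.popleft()
                    comp.append((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and binary[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            queue.append((ni, nj))
                if len(comp) >= min_area:         # attribute above threshold: keep
                    for i, j in comp:
                        out[i, j] = True
    return out
```

Applying such a filter at v increasing thresholds, on the image and its complement, yields the refinement and coarsening sequences stacked into AP(I).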
Compared with the prior art, the hyperspectral image classification method based on Triple GAN has the following technical effects:
the hyperspectral image classification method based on Triple GAN is designed, firstly, the spectral features of hyperspectral images are compressed by adopting a principal component algorithm so as to reduce feature dimensions and reduce redundancy; then, acquiring preset various image features to be selected corresponding to the hyperspectral sample image; then, a Triple GAN classification network is applied to complete hyperspectral sample image classification operation based on various image features to be selected, and the image features to be selected corresponding to the highest classification accuracy are selected as target image features corresponding to the sample image set in combination with actual classification of the hyperspectral sample images; finally, the characteristics of the target image are used as input, the actual classification of the hyperspectral sample image is used as output, the training aiming at the Triple GAN classification network is completed, and an image classification model is obtained; in practical application, classification of the target hyperspectral images can be completed by applying the image classification model through target image features, so that efficient classification of the hyperspectral images is realized, and practical working efficiency is guaranteed.
Drawings
FIG. 1 is a schematic flow chart of a hyperspectral image classification method based on Triple GAN designed by the invention.
Detailed Description
The following description will explain embodiments of the present invention in further detail with reference to the accompanying drawings.
The invention designs a hyperspectral image classification method based on Triple GAN. In practical application, as shown in FIG. 1, the following steps A to F are first executed to obtain an image classification model.
Step A: collect hyperspectral sample images corresponding to each preset actual class, construct a sample image set, and then enter step B.
Step B: perform principal component analysis dimensionality reduction on each hyperspectral sample image in the sample image set, update each hyperspectral sample image in the sample image set, and then enter step C. In practical application, Principal Component Analysis (PCA) is specifically adopted for the dimensionality reduction, compressing the spectral features of the hyperspectral image to reduce the feature dimensionality and redundancy.
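The PCA band compression of step B can be sketched with plain NumPy as an eigendecomposition of the band covariance matrix. The (H, W, B) cube layout and the function name are assumptions for illustration.

```python
import numpy as np

def pca_reduce(cube, n_components):
    # cube: hyperspectral image of shape (H, W, B) with B spectral bands.
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                       # centre each band
    cov = X.T @ X / (X.shape[0] - 1)          # band covariance matrix (B x B)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return (X @ eigvecs[:, order]).reshape(h, w, n_components)
```

The retained components carry most of the spectral variance, so the later texture and morphological features are computed on far fewer bands.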
Step C: for each hyperspectral sample image in the sample image set, extract the preset candidate image features, i.e. obtain the candidate image features corresponding to every hyperspectral sample image in the sample image set, and then enter step D.
In practical application, the preset candidate image features comprise gray level co-occurrence matrix features, Gabor filtering features, morphological profile features and morphological attribute profile features; that is, step C obtains the gray level co-occurrence matrix features, Gabor filtering features, morphological profile features and morphological attribute profile features corresponding to each hyperspectral sample image in the sample image set.
In specific implementation, for example, the gray level co-occurrence matrix characteristic corresponding to the hyperspectral sample image is obtained according to the following steps α 1 to α 9.
And step alpha 1, performing graying processing on the hyperspectral sample image, updating the hyperspectral sample image, and then entering step alpha 2.
Step α2: starting from each pixel with gray value m in the hyperspectral sample image, count the number of pixels at each distance δ having each gray value n, forming a statistical matrix that reflects the distribution of pixel pairs with different gray levels in the hyperspectral sample image, and then enter step α3.
Step alpha 3, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, according to the following formula:
$$\mathrm{CON}=\sum_{m}\sum_{n}(m-n)^{2}\,p(m,n)$$
obtaining a gray scale contrast CON corresponding to the hyperspectral sample image, wherein p (m, n) represents the probability of joint distribution between the pixel with the gray scale value m and the pixel with the gray scale value n, and then entering step alpha 4.
Step alpha 4, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, according to the following formula:
$$\mathrm{Dis}=\sum_{m}\sum_{n}\lvert m-n\rvert\,p(m,n)$$
and obtaining the gray difference Dis corresponding to the hyperspectral sample image, and then entering the step alpha 5.
Step alpha 5, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, according to the following formula:
$$\mathrm{Corre}=\sum_{m}\sum_{n}\frac{(m-\mu)(n-\mu)\,p(m,n)}{\sigma^{2}}$$
and obtaining the gray-level correlation Corre corresponding to the hyperspectral sample image, wherein μ represents the mean of all pixel gray values in the hyperspectral sample image and σ represents the standard deviation of all pixel gray values in the hyperspectral sample image, and then entering step α6.
Step alpha 6, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, the following formula is adopted:
$$\mathrm{Homo}=\sum_{m}\sum_{n}\frac{p(m,n)}{1+(m-n)^{2}}$$
and obtaining the gray homogeneity degree Homo corresponding to the hyperspectral sample image, and then entering the step alpha 7.
Step alpha 7, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, according to the following formula:
$$\mathrm{ASM}=\sum_{m}\sum_{n}p(m,n)^{2}$$
and obtaining a gray angle second moment ASM corresponding to the hyperspectral sample image, and then entering a step alpha 8.
Step α8, based on the gray-level angular second moment ASM corresponding to the hyperspectral sample image, according to the following formula:
$$\mathrm{Energy}=\sqrt{\mathrm{ASM}}$$
and acquiring the gray level Energy corresponding to the hyperspectral sample image, and then entering the step alpha 9.
And step alpha 9, constructing and obtaining gray level co-occurrence matrix characteristics corresponding to the hyperspectral sample images according to gray level contrast CON, gray level difference Dis, gray level correlation Corre, gray level homogeneity Homo, gray level angle second moment ASM and gray level Energy corresponding to the hyperspectral sample images.
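As a minimal sketch of steps α2 to α9, the following Python function (hypothetical name `glcm_stats`; the distance δ is expressed as a pixel offset (dy, dx)) builds the normalized co-occurrence matrix p(m, n) and derives the six statistics CON, Dis, Corre, Homo, ASM and Energy. As a simplifying assumption, μ and σ are taken here over the first index of p(m, n) rather than over the whole image.

```python
import numpy as np

def glcm_stats(gray, dx=1, dy=0, levels=8):
    """Build the normalized co-occurrence matrix p(m, n) for the pixel
    offset (dy, dx) and derive the six statistics of steps alpha3-alpha8."""
    P = np.zeros((levels, levels), dtype=np.float64)
    h, w = gray.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[gray[i, j], gray[i + dy, j + dx]] += 1
    P /= P.sum()                                   # joint probability p(m, n)
    m, n = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu = (m * P).sum()                             # mean over the first index (simplification)
    sigma = np.sqrt((((m - mu) ** 2) * P).sum())
    asm = (P ** 2).sum()                           # angular second moment
    return {
        "CON": ((m - n) ** 2 * P).sum(),           # contrast
        "Dis": (np.abs(m - n) * P).sum(),          # dissimilarity
        "Corre": ((m - mu) * (n - mu) * P).sum() / sigma ** 2,
        "Homo": (P / (1.0 + (m - n) ** 2)).sum(),  # homogeneity
        "ASM": asm,
        "Energy": np.sqrt(asm),
    }
```

In the later implementation the statistics are computed per 9 × 9 neighborhood and averaged over four scan directions; the sketch above shows a single offset only.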
For Gabor filtering features: Gabor filters are used in the image processing field as linear filters for texture analysis; with them, content at a particular frequency or in a particular direction within a region of interest of the image can be detected. The two-dimensional Gabor filter is a complex exponential modulated by a Gaussian function, so its expression contains a real part and an imaginary part. In practical application, for the pixel at each coordinate in the hyperspectral sample image, the real part of the two-dimensional Gabor filter is applied according to the following formula:

$$g_{\mathrm{re}}(x,y;\lambda,\theta,\psi,\sigma,\gamma)=\exp\!\left(-\frac{x'^{2}+\gamma^{2}y'^{2}}{2\sigma^{2}}\right)\cos\!\left(2\pi\frac{x'}{\lambda}+\psi\right)$$

obtaining the real-part response corresponding to the pixel. The operation is then completed for every pixel in the hyperspectral sample image, forming the Gabor filtering feature corresponding to the hyperspectral sample image, where x represents the abscissa of the pixel's position in the hyperspectral sample image, y represents the ordinate, x′ = x cos θ + y sin θ, y′ = −x sin θ + y cos θ, θ represents the detection direction of the Gabor filter kernel, λ represents the wavelength of the kernel, ψ represents the phase parameter of the cosine function in the kernel, σ represents the Gaussian standard deviation of the kernel, and γ represents the spatial aspect ratio of the kernel.
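The real part of the two-dimensional Gabor kernel can be generated directly from the formula above. The following NumPy sketch (hypothetical function name `gabor_real`) builds a ksize × ksize kernel from λ, θ, ψ, σ and γ.

```python
import numpy as np

def gabor_real(ksize=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5):
    """Real part of the 2-D Gabor kernel: a Gaussian envelope modulating a cosine."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xp = x * np.cos(theta) + y * np.sin(theta)     # x' = x cos(theta) + y sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)    # y' = -x sin(theta) + y cos(theta)
    return np.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xp / lam + psi)
```

Convolving an image band with such a kernel yields the directional texture response used as the Gabor feature.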
For morphological profile features: morphology is a classic early method in the image processing field, and its basic operations are dilation and erosion, together with the opening and closing operations built on them. Morphological Profiles (MP) are features obtained by repeated opening and closing operations. In practical application, for the pixel X at each coordinate in the hyperspectral sample image, the following formula is applied:

$$\mathrm{MP}(X)=\{\phi_{l}^{d}(X),\;X,\;\gamma_{l}^{d}(X)\}$$

obtaining MP(X) corresponding to the pixel X; the operation is then completed for every pixel in the hyperspectral sample image, forming the morphological profile feature corresponding to the hyperspectral sample image, where γ_l^d(X) represents the result of applying the opening operation to pixel X l times using a structuring element of size d, and φ_l^d(X) represents the result of applying the closing operation to pixel X l times using a structuring element of size d.
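A minimal sketch of the profile construction: the snippet below (hypothetical names) implements opening and closing with flat square min/max filters — a simplification, since the Matlab implementation described later also uses disc and diamond structuring elements — and stacks closings, the original band and openings into MP layers.

```python
import numpy as np

def _filt(img, size, op):
    """Sliding-window min/max filter with edge padding (flat square SE)."""
    r = size // 2
    pad = np.pad(img, r, mode="edge")
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = op(pad[i:i + size, j:j + size])
    return out

def opening(img, size=3):
    return _filt(_filt(img, size, np.min), size, np.max)   # erosion then dilation

def closing(img, size=3):
    return _filt(_filt(img, size, np.max), size, np.min)   # dilation then erosion

def morphological_profile(img, sizes=(3, 5, 7)):
    """Stack closings, the original band, and openings into MP layers."""
    layers = ([closing(img, s) for s in reversed(sizes)]
              + [img.astype(np.float64)]
              + [opening(img, s) for s in sizes])
    return np.stack(layers, axis=0)
```

Openings suppress bright structures smaller than the structuring element, closings suppress dark ones; varying the size yields the multi-scale profile.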
Finally, for morphological attribute profile features: the morphological attribute profile replaces the differently shaped structuring elements of traditional morphological profile extraction with a more general attribute criterion, thereby obtaining a series of filters with different attributes. Processing the image with these attribute filters exposes the attribute structure in the image and thus describes its spatial information comprehensively; the morphological attribute profile algorithm extracts features based on grayscale attribute thinning and thickening operations.
In practical application, the morphological attribute profile features corresponding to the hyperspectral sample image I are obtained according to the following steps β1 to β4.
Step β1, initialize v′ = 1, then proceed to step β2.
Step β2, based on the preset gray-level thresholds respectively corresponding to the morphological thinning and thickening operations, for each connected component region in the hyperspectral sample image I, judge whether the gray-level attribute of the region is larger than the threshold of the v′-th morphological thickening operation; if so, retain the region, realizing the morphological thickening operation; otherwise, merge the region into a surrounding connected component whose gray-level attribute is greater than the threshold of the v′-th morphological thinning operation, or cover the region with a mask, realizing the morphological thinning operation. After completing this operation for every connected component region in the hyperspectral sample image I, the result of the v′-th morphological thinning operation and the result of the v′-th morphological thickening operation on I are obtained; then enter step β3.
Step β3, judge whether v′ is equal to v; if so, enter step β4; otherwise, increment v′ by 1 and return to step β2.
Step β4, for the hyperspectral sample image I, the following formula is applied:

$$\mathrm{AP}(I)=\{\phi^{v}(I),\ldots,\phi^{1}(I),\;I,\;\varepsilon^{1}(I),\ldots,\varepsilon^{v}(I)\}$$

obtaining the morphological attribute profile feature AP(I) corresponding to the hyperspectral sample image, where ε^v(I) represents the result of v morphological thinning operations performed on the hyperspectral sample image I, and φ^v(I) represents the result of v morphological thickening operations performed on the hyperspectral sample image I.
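The attribute criterion of steps β1 to β4 can be illustrated on a binary image. The following sketch (hypothetical; 4-connectivity, area attribute only) keeps connected components whose pixel count exceeds a threshold and removes the rest — the essence of a single attribute-thinning pass.

```python
import numpy as np
from collections import deque

def area_filter(mask, min_area):
    """Binary area-attribute filter: keep 4-connected components whose
    pixel count exceeds min_area (a simplified stand-in for the grayscale
    attribute thinning/thickening of steps beta1-beta4)."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(mask)
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                comp, q = [], deque([(si, sj)])
                seen[si, sj] = True
                while q:                               # flood-fill one component
                    i, j = q.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
                if len(comp) > min_area:               # attribute criterion: area
                    for i, j in comp:
                        out[i, j] = 1
    return out
```

The grayscale case applies such a criterion at every gray level; stacking the results for increasing thresholds yields the attribute profile.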
Step D, for each kind of candidate image feature: taking the candidate image feature corresponding to each hyperspectral sample image in the sample image set as input, applying the Triple GAN classification network to classify each hyperspectral sample image in the sample image set, obtaining the network classification of each hyperspectral sample image based on that candidate feature, and, combined with the actual classification corresponding to each hyperspectral sample image, obtaining the classification accuracy of all hyperspectral sample images in the sample image set based on that candidate feature. The classification accuracies of all hyperspectral sample images in the sample image set based on each kind of candidate image feature are thereby obtained; then enter step E.
And step E, according to the classification accuracies of all hyperspectral sample images in the sample image set based on the various candidate image features, selecting the candidate image feature with the highest classification accuracy as the target image feature corresponding to the sample image set, and then entering step F.
And step F, taking the target image feature corresponding to each hyperspectral sample image in the sample image set as input and the actual classification corresponding to each hyperspectral sample image as output, training the Triple GAN classification network to obtain the trained classification network, i.e. forming the image classification model.
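Steps D and E reduce to a small selection loop. In the sketch below, `evaluate` is a hypothetical callback standing in for one Triple GAN train/classify cycle that returns a classification accuracy; the loop scores every candidate feature set and keeps the best-scoring one.

```python
def select_best_feature(feature_sets, labels, evaluate):
    """Steps D-E in miniature: score every candidate feature set with the
    classifier's accuracy and keep the best one. `evaluate(X, labels)` is a
    hypothetical callback standing in for a Triple GAN train/classify cycle."""
    scores = {name: evaluate(X, labels) for name, X in feature_sets.items()}
    best = max(scores, key=scores.get)     # feature with the highest accuracy
    return best, scores
```

Step F then retrains the network once more on the winning feature set only.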
Based on the obtained image classification model, the model is applied to the target hyperspectral image and the following steps I to III are executed to realize the classification of the target hyperspectral image.
And I, executing a principal component analysis method for the target hyperspectral image to perform dimensionality reduction operation, updating the target hyperspectral image, and then entering the step II.
And II, obtaining target image characteristics corresponding to the target hyperspectral image, and then entering the step III.
And III, classifying the target hyperspectral images by using the target image characteristics corresponding to the target hyperspectral images as input and applying an image classification model to obtain classification results of the target hyperspectral images.
When the hyperspectral image classification method based on Triple GAN is applied in practice, the following steps A to F are first executed to obtain the image classification model.
First, steps A and B are performed in sequence: each hyperspectral sample image in the sample image set is updated through the dimensionality reduction operation of the PCA principal component analysis method; then step C is entered.
In step C, for each hyperspectral sample image in the sample image set, the candidate image features corresponding to the gray-level co-occurrence matrix, Gabor filtering, morphological profile and morphological attribute profile are obtained, i.e. the various candidate image features respectively corresponding to each hyperspectral sample image in the sample image set are obtained.
Extraction of the gray-level co-occurrence matrix statistics: the extraction is completed using the Sklearn library in Python; to reduce the computational cost, the first 5 principal components after PCA dimensionality reduction are used as the extraction objects, the 9 × 9 neighborhood of each pixel is used as the extraction unit, and the average of the values obtained by scanning in the four directions 0°, 45°, 90° and 135° is used as the spatial feature of the central pixel.
Secondly, programming is carried out using existing functions in the Python OpenCV library:
1) the filter kernel of the Gabor filter is defined; in this process the filter window size is specified as 9 × 9, the wavelength λ of the kernel is specified, the aspect ratio γ is specified as 0.5, and 4 filter directions are selected, namely 0, π/4, π/2 and 3π/4;
2) the first 5 principal components after PCA dimensionality reduction are taken as 5 independent gray-level images and filtered one by one;
3) the filtering results obtained for the principal components are stacked together to form a new multiband image.
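Sub-steps 2) and 3) amount to running a filter bank over each principal-component band and stacking the responses. The following NumPy sketch (hypothetical name; circular FFT convolution is used for brevity instead of OpenCV's spatial filtering) shows the shape of the result.

```python
import numpy as np

def filter_bank_stack(bands, kernels):
    """Filter each principal-component band with every kernel in the bank
    (circular FFT convolution for brevity) and stack the responses into
    one multiband feature cube."""
    h, w = bands[0].shape
    out = []
    for band in bands:
        F = np.fft.fft2(band)
        for k in kernels:
            # zero-pad the kernel to the image size, multiply spectra, invert
            out.append(np.real(np.fft.ifft2(F * np.fft.fft2(k, s=(h, w)))))
    return np.stack(out, axis=0)
```

With 5 principal components and 4 directional kernels this yields a 20-band feature cube.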
Thirdly, existing method packages are used in this part, and the morphological profile extraction is programmed directly in Matlab:
1) the original image is reduced in dimension using the PCA algorithm, and the first 5 principal components are selected as the original image for feature extraction;
2) three structuring elements, disc, diamond and square, are selected to extract the morphological profiles;
3) all extracted morphological profiles are combined in matrix form to obtain the extended morphological profiles.
On the basis of existing algorithm packages, Matlab programming is adopted to extract the morphological attribute profiles of the image:
1) the spectral features are compressed with PCA, and the first five principal components are selected as the original image for MAP extraction;
2) two representative attributes, area and moment of inertia, are selected, and the morphological attribute profiles of the image are extracted according to the feature extraction algorithm;
3) all extracted morphological attribute profiles are combined in matrix form to obtain the extended morphological attribute profile EMAP.
Step D is then carried out: according to the designed procedure, the classification accuracies of all hyperspectral sample images in the sample image set based on the various candidate image features are obtained. In step E, according to these classification accuracies, the candidate image feature with the highest classification accuracy is selected as the target image feature corresponding to the sample image set. Finally, by executing step F, the target image feature corresponding to each hyperspectral sample image in the sample image set is taken as input and the actual classification corresponding to each hyperspectral sample image as output, and the Triple GAN classification network is trained to obtain the trained classification network, i.e. the image classification model.
In practical application, aiming at a target hyperspectral image, an image classification model is applied, and the following steps I to III are executed to realize classification of the target hyperspectral image.
According to the hyperspectral image classification method based on Triple GAN, principal component analysis is adopted to compress the spectral features of the hyperspectral images, reducing feature dimensionality and redundancy; the preset candidate image features corresponding to the hyperspectral sample images are then acquired; next, the Triple GAN classification network is applied to complete the hyperspectral sample image classification operation based on each candidate image feature, and, combined with the actual classification of the hyperspectral sample images, the candidate image feature with the highest classification accuracy is selected as the target image feature corresponding to the sample image set; finally, the target image feature is taken as input and the actual classification of the hyperspectral sample images as output to complete the training of the Triple GAN classification network and obtain the image classification model. In practical application, classification of the target hyperspectral image can be completed by applying the image classification model to the target image features, realizing efficient classification of hyperspectral images and ensuring practical working efficiency.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (6)

1. A hyperspectral image classification method based on Triple GAN is characterized by comprising the following steps: firstly, executing the following steps A to F to realize the acquisition of an image classification model; then, aiming at the target hyperspectral image, applying an image classification model, and executing the following steps I to III to realize the classification of the target hyperspectral image;
step A, collecting the hyperspectral sample images respectively corresponding to the preset actual classifications, constructing a sample image set, and then entering step B;
step B, executing the principal component analysis method to perform the dimensionality reduction operation on each hyperspectral sample image in the sample image set, updating each hyperspectral sample image in the sample image set, and then entering step C;
step C, for each hyperspectral sample image in the sample image set, acquiring the preset candidate image features corresponding to that image, i.e. obtaining the candidate image features respectively corresponding to each hyperspectral sample image in the sample image set, and then entering step D;
step D, for each kind of candidate image feature: taking the candidate image feature corresponding to each hyperspectral sample image in the sample image set as input, applying the Triple GAN classification network to classify each hyperspectral sample image in the sample image set, obtaining the network classification of each hyperspectral sample image based on that candidate feature, and, combined with the actual classification corresponding to each hyperspectral sample image, obtaining the classification accuracy of all hyperspectral sample images in the sample image set based on that candidate feature; thereby obtaining the classification accuracies of all hyperspectral sample images in the sample image set based on each kind of candidate image feature, and then entering step E;
step E, according to the classification accuracies of all hyperspectral sample images in the sample image set based on the various candidate image features, selecting the candidate image feature with the highest classification accuracy as the target image feature corresponding to the sample image set, and then entering step F;
step F, taking the target image feature corresponding to each hyperspectral sample image in the sample image set as input and the actual classification corresponding to each hyperspectral sample image as output, training the Triple GAN classification network to obtain the trained classification network, namely forming the image classification model;
step I, executing a principal component analysis method for reducing the dimension aiming at the target hyperspectral image, updating the target hyperspectral image, and then entering step II;
II, obtaining target image characteristics corresponding to the target hyperspectral image, and then entering the step III;
and III, classifying the target hyperspectral images by using the target image characteristics corresponding to the target hyperspectral images as input and applying an image classification model to obtain classification results of the target hyperspectral images.
2. The hyperspectral image classification method based on Triple GAN according to claim 1, wherein: in step C, the preset candidate image features comprise gray-level co-occurrence matrix features, Gabor filtering features, morphological profile features and morphological attribute profile features; that is, the gray-level co-occurrence matrix feature, Gabor filtering feature, morphological profile feature and morphological attribute profile feature respectively corresponding to each hyperspectral sample image in the sample image set are obtained.
3. The hyperspectral image classification method based on Triple GAN as claimed in claim 2, wherein: in the step C, obtaining gray level co-occurrence matrix characteristics corresponding to the hyperspectral sample image according to the following steps from alpha 1 to alpha 9;
step alpha 1, performing graying processing on the hyperspectral sample image, updating the hyperspectral sample image, and then entering step alpha 2;
step α2, starting from any pixel with gray value m in the hyperspectral sample image, counting the number of pixels in the image that lie at each distance δ from that pixel and have each gray value n, so as to form a statistical matrix corresponding to the distribution of gray-level pixel pairs in the hyperspectral sample image, and then entering step α3;
step alpha 3, according to the statistical matrix corresponding to the different gray pixel distribution in the hyperspectral sample image, the following formula is adopted:
$$\mathrm{CON}=\sum_{m}\sum_{n}(m-n)^{2}\,p(m,n)$$
obtaining a gray scale contrast CON corresponding to the hyperspectral sample image, wherein p (m, n) represents the probability of joint distribution between a pixel with a gray scale value m and a pixel with a gray scale value n, and then entering a step alpha 4;
step alpha 4, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, according to the following formula:
$$\mathrm{Dis}=\sum_{m}\sum_{n}\lvert m-n\rvert\,p(m,n)$$
obtaining gray scale difference Dis corresponding to the hyperspectral sample image, and then entering a step alpha 5;
step alpha 5, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, according to the following formula:
$$\mathrm{Corre}=\sum_{m}\sum_{n}\frac{(m-\mu)(n-\mu)\,p(m,n)}{\sigma^{2}}$$
obtaining the gray-level correlation Corre corresponding to the hyperspectral sample image, wherein μ represents the mean of all pixel gray values in the hyperspectral sample image and σ represents the standard deviation of all pixel gray values in the hyperspectral sample image, and then entering step α6;
step alpha 6, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, the following formula is adopted:
$$\mathrm{Homo}=\sum_{m}\sum_{n}\frac{p(m,n)}{1+(m-n)^{2}}$$
obtaining the gray homogeneity degree Homo corresponding to the hyperspectral sample image, and then entering a step alpha 7;
step alpha 7, according to the statistical matrix corresponding to the different gray level pixel distribution in the hyperspectral sample image, according to the following formula:
$$\mathrm{ASM}=\sum_{m}\sum_{n}p(m,n)^{2}$$
obtaining a gray angle second moment ASM corresponding to the hyperspectral sample image, and then entering a step alpha 8;
step α8, based on the gray-level angular second moment ASM corresponding to the hyperspectral sample image, according to the following formula:
$$\mathrm{Energy}=\sqrt{\mathrm{ASM}}$$
acquiring gray level Energy corresponding to the hyperspectral sample image, and then entering a step alpha 9;
and step alpha 9, constructing and obtaining gray level co-occurrence matrix characteristics corresponding to the hyperspectral sample images according to gray level contrast CON, gray level difference Dis, gray level correlation Corre, gray level homogeneity Homo, gray level angle second moment ASM and gray level Energy corresponding to the hyperspectral sample images.
4. The hyperspectral image classification method based on Triple GAN as claimed in claim 2, wherein: in step C, the real part of the two-dimensional Gabor filter is applied to the pixel at each coordinate in the hyperspectral sample image according to the following formula:

$$g_{\mathrm{re}}(x,y;\lambda,\theta,\psi,\sigma,\gamma)=\exp\!\left(-\frac{x'^{2}+\gamma^{2}y'^{2}}{2\sigma^{2}}\right)\cos\!\left(2\pi\frac{x'}{\lambda}+\psi\right)$$

obtaining the real-part response corresponding to the pixel; the operation is then completed for every pixel in the hyperspectral sample image, forming the Gabor filtering feature corresponding to the hyperspectral sample image, wherein x represents the abscissa of the pixel's position in the hyperspectral sample image, y represents the ordinate, x′ = x cos θ + y sin θ, y′ = −x sin θ + y cos θ, θ represents the detection direction of the Gabor filter kernel, λ represents the wavelength of the kernel, ψ represents the phase parameter of the cosine function in the kernel, σ represents the Gaussian standard deviation of the kernel, and γ represents the spatial aspect ratio of the kernel.
5. The hyperspectral image classification method based on Triple GAN as claimed in claim 2, wherein: in step C, for the pixel X at each coordinate in the hyperspectral sample image, the following formula is applied:

$$\mathrm{MP}(X)=\{\phi_{l}^{d}(X),\;X,\;\gamma_{l}^{d}(X)\}$$

obtaining MP(X) corresponding to the pixel X; the operation is then completed for every pixel in the hyperspectral sample image, forming the morphological profile feature corresponding to the hyperspectral sample image, wherein γ_l^d(X) represents the result of applying the opening operation to pixel X l times using a structuring element of size d, and φ_l^d(X) represents the result of applying the closing operation to pixel X l times using a structuring element of size d.
6. The hyperspectral image classification method based on Triple GAN as claimed in claim 2, wherein: in step C, the morphological attribute profile features corresponding to the hyperspectral sample image I are obtained according to the following steps β1 to β4; step β1, initialize v′ = 1, then proceed to step β2;
step β2, based on the preset gray-level thresholds respectively corresponding to the morphological thinning and thickening operations, for each connected component region in the hyperspectral sample image I, judging whether the gray-level attribute of the region is larger than the threshold of the v′-th morphological thickening operation; if so, retaining the region, realizing the morphological thickening operation; otherwise, merging the region into a surrounding connected component whose gray-level attribute is greater than the threshold of the v′-th morphological thinning operation, or covering the region with a mask, realizing the morphological thinning operation; after completing this operation for every connected component region in the hyperspectral sample image I, the result of the v′-th morphological thinning operation and the result of the v′-th morphological thickening operation on I are obtained, and then entering step β3;
step β3, judging whether v′ is equal to v; if so, entering step β4; otherwise, incrementing v′ by 1 and returning to step β2;
step β4, for the hyperspectral sample image I, the following formula is applied:

$$\mathrm{AP}(I)=\{\phi^{v}(I),\ldots,\phi^{1}(I),\;I,\;\varepsilon^{1}(I),\ldots,\varepsilon^{v}(I)\}$$

obtaining the morphological attribute profile feature AP(I) corresponding to the hyperspectral sample image, wherein ε^v(I) represents the result of v morphological thinning operations performed on the hyperspectral sample image I, and φ^v(I) represents the result of v morphological thickening operations performed on the hyperspectral sample image I.
CN202010847535.8A 2020-08-21 2020-08-21 Hyperspectral image classification method based on Triple GAN Active CN112115795B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010847535.8A CN112115795B (en) 2020-08-21 2020-08-21 Hyperspectral image classification method based on Triple GAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010847535.8A CN112115795B (en) 2020-08-21 2020-08-21 Hyperspectral image classification method based on Triple GAN

Publications (2)

Publication Number Publication Date
CN112115795A CN112115795A (en) 2020-12-22
CN112115795B true CN112115795B (en) 2022-08-05

Family

ID=73805329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010847535.8A Active CN112115795B (en) 2020-08-21 2020-08-21 Hyperspectral image classification method based on Triple GAN

Country Status (1)

Country Link
CN (1) CN112115795B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052216B (en) * 2021-03-15 2022-04-22 中国石油大学(华东) Oil spill hyperspectral image detection method based on two-way graph U-NET convolutional network
CN113792761B (en) * 2021-08-20 2024-04-05 北京航空航天大学 Remote sensing image classification method based on Gabor features and EMAP features
CN115187590B (en) * 2022-09-08 2022-12-20 山东艾克赛尔机械制造有限公司 Automobile part defect detection method based on machine vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222773A (en) * 2019-06-10 2019-09-10 西北工业大学 Based on the asymmetric high spectrum image small sample classification method for decomposing convolutional network
WO2019218313A1 (en) * 2018-05-17 2019-11-21 五邑大学 Progressive dynamic hyperspectral image classification method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019218313A1 (en) * 2018-05-17 2019-11-21 五邑大学 Progressive dynamic hyperspectral image classification method
CN110222773A (en) * 2019-06-10 2019-09-10 西北工业大学 Based on the asymmetric high spectrum image small sample classification method for decomposing convolutional network

Also Published As

Publication number Publication date
CN112115795A (en) 2020-12-22

Similar Documents

Publication Publication Date Title
CN114202696B (en) SAR target detection method and device based on context vision and storage medium
Wang et al. Adaptive dropblock-enhanced generative adversarial networks for hyperspectral image classification
CN112115795B (en) Hyperspectral image classification method based on Triple GAN
CN110363215B (en) Method for converting SAR image into optical image based on generating type countermeasure network
CN110084159A (en) Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint
US20090060340A1 (en) Method And Apparatus For Automatic Image Categorization Using Image Texture
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
WO2018076138A1 (en) Target detection method and apparatus based on large-scale high-resolution hyper-spectral image
CN113705580B (en) Hyperspectral image classification method based on deep migration learning
CN110598564B (en) OpenStreetMap-based high-spatial-resolution remote sensing image transfer learning classification method
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN111046772A (en) Method for extracting island shoreline and development and utilization information from multi-temporal satellite remote sensing imagery
CN111310666A (en) High-resolution image ground feature identification and segmentation method based on texture features
Fu et al. A novel spectral-spatial singular spectrum analysis technique for near real-time in situ feature extraction in hyperspectral imaging
CN112200123B (en) Hyperspectral open set classification method combining dense connection network and sample distribution
CN108021890A (en) High-resolution remote sensing image harbor detection method based on PLSA and BoW
CN114841972A (en) Power transmission line defect identification method based on saliency map and semantic embedded feature pyramid
CN112052758B (en) Hyperspectral image classification method based on attention mechanism and recurrent neural network
CN114255403A (en) Optical remote sensing image data processing method and system based on deep learning
CN112733736A (en) Class imbalance hyperspectral image classification method based on enhanced oversampling
CN113887472A (en) Remote sensing image cloud detection method based on cascade color and texture feature attention
CN111639697A (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
Kazimi et al. Semantic segmentation of manmade landscape structures in digital terrain models
CN117115675A (en) Cross-temporal lightweight spatial-spectral feature fusion hyperspectral change detection method, system, device and medium
CN114037922B (en) Aerial image segmentation method based on hierarchical context network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant