CN112464891A - Hyperspectral image classification method - Google Patents

Hyperspectral image classification method

Info

Publication number
CN112464891A
Authority
CN
China
Prior art keywords
hyperspectral image
information
spectral
hyperspectral
network
Prior art date
Legal status
Granted
Application number
CN202011468157.9A
Other languages
Chinese (zh)
Other versions
CN112464891B (en)
Inventor
梁联晖
李军
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN202011468157.9A
Publication of CN112464891A
Application granted
Publication of CN112464891B
Legal status: Active

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/24 Classification techniques
                            • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                                • G06F 18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks
                            • G06N 3/047 Probabilistic or stochastic networks
                        • G06N 3/08 Learning methods
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 20/00 Scenes; Scene-specific elements
                    • G06V 20/10 Terrestrial scenes
                        • G06V 20/13 Satellite images
                        • G06V 20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
                • Y02A 40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
                    • Y02A 40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Abstract

The invention discloses a hyperspectral image classification method that combines the advantages of 3D Octave convolution and a Bi-RNN attention network. The method first uses 3D Octave convolutions to extract spatial features from the hyperspectral image while reducing spatially redundant information, then uses the Bi-RNN spectral attention network to extract the spectral information of the image, fuses the spatial and spectral feature maps through a fully connected layer, and finally outputs the classification result through softmax. The method achieves accurate classification of hyperspectral remote sensing images with few training samples, and its parallel data processing mode accelerates model execution.

Description

Hyperspectral image classification method
Technical Field
The invention belongs to the field of hyperspectral image processing within remote sensing, and particularly relates to a hyperspectral image classification method.
Background
Hyperspectral remote sensing is an interdisciplinary technology spanning computer science, geography, and other subjects. A hyperspectral imager acquires images over narrow spectral intervals across different electromagnetic wavelength ranges, yielding spectral curves from which the spectral characteristics of ground objects can be retrieved. Data from hundreds of spectral bands are recorded at the same spatial resolution, forming a three-dimensional hyperspectral image carrying a large amount of spatial and spectral information. A hyperspectral image expresses the reflectance of surface objects in a single band through two-dimensional spatial imaging, and stacking the reflectance of many bands in order forms an approximately continuous spectral dimension. Each hyperspectral pixel is thus characterized by such a spectral vector: every pixel carries a continuous spectral curve that records the observed ground-object information in detail. Because hyperspectral images describe both the spectral and the spatial information of ground objects in such detail, hyperspectral image classification has, with the development of classification technology, been widely applied in environmental monitoring, urban and rural planning, mineral exploitation, national defense construction, precision agriculture, and other fields.
Hyperspectral image classification methods can be roughly divided into three categories: methods based on spectral information alone, methods based on joint spatial-spectral features, and deep learning methods. The first category uses only the spectral-dimension information of the hyperspectral image and ignores the spatial correlation among pixels. The second category improves classification performance to some extent, but depends largely on handcrafted features; the classification effect is determined mainly by low-level features, which cannot represent the complex content of a hyperspectral image, so performance is limited. Compared with these two traditional shallow approaches, the third category has stronger characterization and generalization ability, can extract deeper image features, and obtains more discriminative features and thus better classification results. However, although these methods classify well, models based on convolutional neural networks are accompanied by a large amount of redundant spatial-dimension information, which degrades model performance to some extent. Meanwhile, manual labeling of hyperspectral remote sensing images consumes a great deal of manpower and material resources, so ready-made labeled samples are few. Therefore, learning the spatial and spectral features of hyperspectral remote sensing images while reducing spatial information redundancy and using few training samples is of great significance for improving classification accuracy.
Among existing methods that use Octave convolution for hyperspectral image classification, the Octave convolution only addresses the reduction of redundant spatial feature information. For extracting the spectral information of the hyperspectral image, these methods rely either on the Octave convolution itself or on a convolutional-neural-network-based approach. Both treat the spectral information of the hyperspectral data as an unordered high-dimensional vector, which does not match the nature of spectral data and destroys the correlation between spectral bands, so the extraction of spectral information is impaired and spectral feature information cannot be extracted accurately.
Conversely, existing methods that use a Bi-RNN (bidirectional recurrent neural network) for hyperspectral classification cannot avoid a large amount of redundant spatial feature information, so the spatial-dimension information of the image cannot be extracted accurately and classification accuracy suffers. Furthermore, when existing Octave convolution methods are used for hyperspectral image classification, the data stream is serial and cannot be processed in parallel.
For example, patent application CN202010066659.2 discloses a hyperspectral remote sensing image classification method based on mixed 3-dimensional and 2-dimensional convolution, which includes: acquiring the hyperspectral remote sensing image to be classified; performing spectral dimensionality reduction with principal component analysis; arranging the spectral bands of the reduced image from the middle of the channel outward in descending order of spectral information content; weighting each band according to the spectral information it contains; taking a cube of fixed spatial size around each pixel, extracting spectral-spatial features from the cube with 3-dimensional convolution and fusing spectral information with 2-dimensional convolution to obtain the final feature map; extracting second-order information from the feature map with covariance pooling and outputting a feature vector; and feeding the feature vector into a three-layer fully connected network to obtain the predicted classification. That method combines the advantages of 3-dimensional and 2-dimensional convolution and classifies hyperspectral remote sensing images accurately with few training samples. However, step S2 of that invention requires dimension-reduction preprocessing of the spectrum, which makes the method and model relatively complicated; moreover, the method cannot avoid the problem of spatial information redundancy, and its ability to extract spectral information is insufficient.
Therefore, there is a need in the art for a new hyperspectral image classification method to solve the above problems.
Disclosure of Invention
The invention provides a hyperspectral image classification method based on a 3D Octave convolution and a Bi-RNN attention network, which combines the advantages of both and achieves accurate classification of hyperspectral images with few training samples.
Therefore, the invention provides a hyperspectral image classification method, the hyperspectral image being a remote sensing image acquired by an aerial camera. The method is based on a 3D Octave convolution and a Bi-RNN attention network, where the Bi-RNN is a bidirectional recurrent neural network, and comprises the following steps:
s1, acquiring a hyperspectral remote sensing image to be classified;
step S2, obtaining spatial feature information Z_O for the hyperspectral image by using 4 or more consecutive 3D Octave convolutions; the number of 3D Octave convolutions is preferably 4;
step S3, regarding the hyperspectral data output after step S1 as an ordered spectral vector; in parallel with step S2, inputting the spectral sequence element by element into the bidirectional hidden layers, and connecting the output state of the forward hidden layer and the output state of the reverse hidden layer through a series function to obtain a vector g_n;
step S4, taking the connected output vector g_n of the bidirectional hidden layer as the input of the attention module; multiplying the probability weight W_i, derived by random initialization of the attention mechanism, by the vector g_n, adding an offset parameter b_i, and, after the tanh activation function, computing the attention weight parameter β through a softmax function;
step S5, multiplying the attention weight parameter β by the corresponding values of the vector g_n obtained in step S3, and then summing the products to obtain a new spectral information vector label y;
step S6, extracting the spatial feature information Z_O of the last fully connected layer of the 3D Octave convolution network in step S2, and combining it with the new spectral information vector label y obtained from the last fully connected layer of the Bi-RNN attention network in step S5, to form a new fully connected layer and output a feature vector;
and S7, inputting the feature vectors into more than two layers of full-connection layer networks, preferably 2-5 layers, more preferably 3 layers, and predicting classification results through a softmax layer.
In a specific embodiment, step S2 includes:
let the size of the image used for the hyperspectral image classification be W × H × L;
reshaping the hyperspectral image classification data into X with the size of L multiplied by N, wherein N is W multiplied by H;
the hyperspectral data X is used as the input of a 3D Octave convolutional network, and the input data and the output data of the Octave convolutional network are assumed to be X ═ X respectivelyH,XL},Z={ZH,ZLH and L are respectively expressed as high frequency information and low frequency information; that is, the input hyperspectral data X and the data Z output after the data processing of the 3D Octave convolution network can be respectively represented as the sum of corresponding high-frequency information and low-frequency information;
the Octave convolution model is built as follows:
Z^H = Z^{H→H} + Z^{L→H} and Z^L = Z^{L→L} + Z^{H→L}
where Z^{H→H} and Z^{L→L} represent the intra-frequency updating of the hyperspectral image data information at high and low frequency respectively, and Z^{L→H} and Z^{H→L} represent the conversion of the information from low frequency to high frequency and from high frequency to low frequency respectively;
high frequency characteristic information and low frequency for completing hyperspectral imageUpdating and converting the characteristic information, and assuming that the weight parameter corresponding to the Octave convolution model is W ═ WH,WL](ii) a Likewise, the weight parameter WHAnd WLAre respectively defined as WL=[WL→L,WH→L],WH=[WH→H,WL→H]Wherein W isH→H,WL→LIndicating the information update weight, W, within the corresponding frequencyH→L,WL→HRepresenting information conversion weights between corresponding frequencies;
from the above, Z^H and Z^L are obtained respectively as:

Z^H = Σ(W^{H→H})^T X^H + up(Σ(W^{L→H})^T X^L)  (1)

Z^L = Σ(W^{L→L})^T X^L + Σ(W^{H→L})^T pool(X^H)  (2)

where T in formulas (1) and (2) denotes matrix transposition, up denotes the up-sampling operation, and pool denotes the average pooling operation;
calculating the Octave convolution network output Z, wherein the expression of Z is as follows:
Z=[ZL,ZH]
=[(ZL→L+ZH→L),(ZH→H+ZL→H)]
=[∑(WL)TX,∑(WH)TX]
=[∑(WL→L)TXL+∑(WH→L)Tpool(XH),∑(WH→H)TXH+up(∑(WL→H)TXL)]。
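The following toy NumPy sketch illustrates the structure of formulas (1) and (2) only: each convolution is reduced to a plain matrix product so that the high/low-frequency update-and-exchange pattern of the Octave convolution is visible. The shapes, the factor-2 pooling, and the random weights are assumptions for the example, not the patented operator.

```python
# Toy illustration of Z^H and Z^L from equations (1)-(2): convolutions are
# reduced to matrix products; 'pool' is average pooling, 'up' is upsampling.
import numpy as np

def pool(x):                 # average pooling over the spatial axis (factor 2)
    return x.reshape(x.shape[0] // 2, 2, -1).mean(axis=1)

def up(x):                   # nearest-neighbour upsampling (factor 2)
    return np.repeat(x, 2, axis=0)

def octave_step(XH, XL, W_HH, W_LH, W_LL, W_HL):
    """Z^H = W_HH X^H + up(W_LH X^L);  Z^L = W_LL X^L + W_HL pool(X^H)."""
    ZH = XH @ W_HH + up(XL @ W_LH)       # intra-frequency update + low-to-high
    ZL = XL @ W_LL + pool(XH) @ W_HL     # intra-frequency update + high-to-low
    return ZH, ZL

rng = np.random.default_rng(0)
n, c = 8, 16                             # n spatial positions, c channels
XH = rng.normal(size=(n, c))             # high-frequency map, full resolution
XL = rng.normal(size=(n // 2, c))        # low-frequency map, half resolution
W = [rng.normal(size=(c, c)) * 0.1 for _ in range(4)]
ZH, ZL = octave_step(XH, XL, *W)         # ZH: (8, 16), ZL: (4, 16)
```

The key point the sketch preserves is that the low-frequency branch lives at half the spatial resolution, which is where the reduction of spatially redundant information comes from.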
in a specific embodiment, the step S3 includes:
let the hyperspectral input data X be an ordered spectral vector, X ═ X1,X2,X3,...,Xn) Calculating the bidirectional hidden layer output h of the Bi-RNN networknAs follows:
Figure BDA0002835265310000043
Figure BDA0002835265310000051
In the formulae (3) and (4), n represents the range of the spectral band from 1 to m, and the coefficient matrix
Figure BDA0002835265310000052
And
Figure BDA0002835265310000053
the input from the current hidden layer is,
Figure BDA0002835265310000054
indicating the last hidden state hn-1
Figure BDA0002835265310000055
From h in subsequent hidden statesn+1Initially, f is the nonlinear activation function of the hidden layer, and the output of the encoder is taken as vector gnIs input, calculate gnThe following were used:
Figure BDA0002835265310000056
where concat () is a series function between the forward hidden state function and the reverse hidden state function.
In a specific embodiment, the step S4 includes:
acquiring weight values of different spectral information, wherein the weight of the attention layer is calculated as follows:
e_in = tanh(W_i g_n + b_i)  (6)

β_in = softmax(W_i' e_in + b_i')  (7)

in formulas (6) and (7), W_i and W_i' are transformation matrices, b_i and b_i' are bias terms, and softmax() maps the non-normalized output values to a probability distribution, constraining the output values to the (0, 1) interval; formula (6) is a one-layer neural network that rearranges the state vector space of the Bi-RNN, after which tanh activation converts the result to e_in as a new hidden representation of h_n; formula (7) generates the attention weight β through the softmax layer, where β_in is one component of the attention weight parameter β, namely the i-th weight parameter; here the importance of the input is measured by the correlation between e_in and another channel vector, e_in being an intermediate parameter.
In a specific embodiment, the step S5 includes:
calculate the prediction tag y for pixel Xn
yn=U[gn,β] (8)
Where U () is the sum function of all state vectors weighted by the corresponding attention weights; a prediction label y of the pixel XnIs a component of the spectral information vector label y.
In a specific embodiment, step S7 includes: inputting the feature vector into a 3-layer fully connected network comprising three fully connected layers; the first two of the three layers are normalized with Batch_normal, activated with the relu function, and then regularized with the Dropout method, and the last fully connected layer outputs the predicted classification result with Softmax.
In a specific embodiment, the hyperspectral image classification method is implemented with a hyperspectral image classification system comprising a hyperspectral image module (1), a 3D Octave convolution network module (2), a Bi-RNN attention network module (3), a spatio-spectral feature fusion network module (4) and a classification image module (5); step S1 is performed in the hyperspectral image module (1), step S2 in the 3D Octave convolution network module (2), steps S3 to S5 in the Bi-RNN attention network module (3), step S6 in the spatio-spectral feature fusion network module (4), and step S7 in the classification image module (5).
The invention has at least the following beneficial effects: the hyperspectral image classification method based on the 3D Octave convolution and the Bi-RNN attention network uses four 3D Octave convolutions to extract spatial features from the hyperspectral image while reducing spatially redundant information, then uses the Bi-RNN spectral attention network to extract the spectral information of the image, enhancing the importance of the spectral bands that carry more spectral information and improving classification accuracy with few training samples. It makes full use of the advantages of the 3D Octave convolution and the Bi-RNN attention network, significantly improves classification accuracy, and accelerates model execution by adopting a parallel data processing mode.
Drawings
FIG. 1 is a block diagram of a hyperspectral image classification method based on a 3D Octave convolution and a Bi-RNN attention network according to the invention.
Fig. 2 is a flowchart of the 3D Octave convolution of the present application.
FIG. 3 is a flow chart of the Bi-RNN attention network of the present application.
FIG. 4 is a classification map of the different methods on the Pavia University dataset: (a) pseudo-color image, (b) ground-truth map, (c) SVM, (d) 2D-CNN, (e) ARNN, (f) SSAN, (g) 3DOC-SSAN and (h) the method of the present invention.
FIG. 5 is a classification map of the different methods on the Botswana dataset: (a) pseudo-color image, (b) ground-truth map, (c) SVM, (d) 2D-CNN, (e) ARNN, (f) SSAN, (g) 3DOC-SSAN and (h) the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In one embodiment, the hyperspectral image classification method based on the 3D Octave convolution and the Bi-RNN attention network makes full use of the advantages of both, obtaining highly accurate classification results with few training samples.
Specifically, as shown in fig. 1, the method for classifying hyperspectral remote sensing images of a 3D Octave convolution and a Bi-RNN attention network in the embodiment includes the following steps:
and step S1, acquiring the hyperspectral remote sensing image to be classified.
Using Z1、Z2、Z3And Z4These 4D Octave convolutions obtain spatial features for the hyperspectral image while reducing spatial redundancy information, as shown in fig. 2. In this example, it is provided that the spatial feature information is obtained by each 3D Octave convolution, and the specific steps are as follows, see step S2.
In an embodiment, the provided 3D Octave convolution obtains spatial features for hyperspectral images as follows:
low frequency signal X in 1 st 3D Octave convolved input data XLSet to 0;
calculating the 1 st 3D Octave convolution network output Z1,Z1The expression of (a) is as follows:
Z1=[Z1 L,Z1 H]
=[(0+Z1 H→L),(Z1 H→H+0)]
=[∑(W1 H→L)Tpool(XH),∑(W1 H→H)TXH]
input of 2 nd 3D Octave convolutionData X2Is Z1Wherein Z is1 H→HDenotes the high frequency part, Z1 H→LRepresenting the low frequency part.
The output Z_2 of the 2nd 3D Octave convolutional network is calculated, with the expression:

Z_2 = [Z_2^L, Z_2^H]
    = [(Z_2^{L→L} + Z_2^{H→L}), (Z_2^{H→H} + Z_2^{L→H})]
    = [Σ(W_2^L)^T Z_1, Σ(W_2^H)^T Z_1]
    = [Σ(W_2^{L→L})^T Z_1^L + Σ(W_2^{H→L})^T pool(Z_1^H), Σ(W_2^{H→H})^T Z_1^H + up(Σ(W_2^{L→H})^T Z_1^L)]
redundant information of the characteristic diagram of the hyperspectral image is reduced, and important characteristics are reserved.
High frequency signature Z using pooling2 HDown-sampling, and comparing the down-sampling result with the low-frequency characteristic diagram Z2 LMerging into a new profile Zpool
Input data X of 3D Octave convolution3Is ZpoolThe low frequency part is set to 0.
The output Z_3 of the 3rd 3D Octave convolution network is calculated, with the expression:

Z_3 = [Z_3^L, Z_3^H]
    = [(0 + Z_3^{H→L}), (Z_3^{H→H} + 0)]
    = [Σ(W_3^L)^T Z_pool, Σ(W_3^H)^T Z_pool]
    = [Σ(W_3^{H→L})^T pool(Z_pool^H), Σ(W_3^{H→H})^T Z_pool^H]
input data X of 4 th 3D Octave convolution4Is Z3Wherein Z is3 H→HDenotes the high frequency part, Z1 H→LRepresenting the low frequency part.
The output Z_4 of the 4th 3D Octave convolution network is calculated, with the expression:

Z_4 = [Z_4^L, Z_4^H]
    = [(Z_4^{L→L} + Z_4^{H→L}), (Z_4^{H→H} + Z_4^{L→H})]
    = [Σ(W_4^L)^T Z_3, Σ(W_4^H)^T Z_3]
    = [Σ(W_4^{L→L})^T Z_3^L + Σ(W_4^{H→L})^T pool(Z_3^H), Σ(W_4^{H→H})^T Z_3^H + up(Σ(W_4^{L→H})^T Z_3^L)]
ensuring the integrity of the information, and matching the low-frequency characteristic diagram Z4 LFused to Z after upsampling4 HIn (b) to obtain ZO
The 3D Octave convolution structure is set as a 4-layer convolution structure; the convolution kernels of the four layers are all of size 5 × 3, and the numbers of convolution kernels are set to 24, 48, 24 and 1 respectively.
The aim of the 3D Octave convolution is to reduce spatially redundant information while preserving the inherent spectral-dimension information of the hyperspectral image. In essence, 3D Octave convolution is a multi-frequency feature representation method: it stores the high-frequency and low-frequency maps in different groups and stores and processes the low-frequency part of the feature map with low-dimensional vectors. Since the low-frequency component is redundant, the redundancy can be reduced by lowering the resolution of the low-frequency features. The following inference can therefore be drawn: after the 3D Octave convolutions, the spatially redundant information of the hyperspectral image is greatly reduced, which has an important influence on the subsequent classification of the image.
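The four-convolution data flow described above can be summarized in a toy NumPy sketch, reusing the simplified matrix-product stand-in for the Octave operator from the earlier sketch; merging and fusion are reduced to elementwise addition here, which is an assumption of the example.

```python
# Toy data flow of the embodiment: low-frequency input zeroed before convs 1
# and 3, pooled merge into Z_pool after conv 2, upsampled fusion into Z_O
# after conv 4. Weights are random placeholders.
import numpy as np

def pool(x): return x.reshape(x.shape[0] // 2, 2, -1).mean(axis=1)
def up(x):   return np.repeat(x, 2, axis=0)
def octave_step(XH, XL, W_HH, W_LH, W_LL, W_HL):
    return XH @ W_HH + up(XL @ W_LH), XL @ W_LL + pool(XH) @ W_HL

rng = np.random.default_rng(3)
n, c = 8, 16
def new_W(): return [rng.normal(size=(c, c)) * 0.1 for _ in range(4)]

XH, XL = rng.normal(size=(n, c)), np.zeros((n // 2, c))     # conv 1: X^L set to 0
Z1H, Z1L = octave_step(XH, XL, *new_W())
Z2H, Z2L = octave_step(Z1H, Z1L, *new_W())                  # conv 2
Zpool = pool(Z2H) + Z2L                                     # merge into Z_pool
Z3H, Z3L = octave_step(Zpool, np.zeros((Zpool.shape[0] // 2, c)), *new_W())  # conv 3
Z4H, Z4L = octave_step(Z3H, Z3L, *new_W())                  # conv 4
Z_O = Z4H + up(Z4L)                                         # fuse low into high: Z_O
```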
Step S3: the hyperspectral data output after step S1 is regarded as an ordered spectral vector. In parallel with step S2, the spectral sequence is input element by element into the bidirectional hidden layer of the Bi-RNN network, and the state output by the forward hidden layer and the state output by the reverse hidden layer are connected by a series function to obtain the vector g_n.
let the hyperspectral input data X be an ordered spectral vector, X ═ X1, X2, X3.
Figure BDA0002835265310000091
Figure BDA0002835265310000092
Where n represents the range 1-m of the spectral band, the coefficient matrix and the input from the current concealment layer, representing the last concealment state hn-1, starting from hn +1 in the subsequent concealment state, f is the nonlinear activation function of the concealment layer, and the output of the encoder is the input of the vector gn, calculated gn as follows:
Figure BDA0002835265310000093
where concat () is a series function between the forward hidden state function and the reverse hidden state function.
The Bi-RNN comprises a hidden layer of bidirectional GRU layers; the spectral sequence is input element by element, and the two hidden layers running in opposite directions are connected to a single output, so that both the preceding and the following spectral information in the hyperspectral spectral sequence can be processed.
Step S4: the connected output vector g_n of the bidirectional hidden layers serves as the input of the attention module. The probability weight W_i, derived by random initialization of the attention mechanism, is multiplied by the vector g_n and an offset parameter b_i is added; after the tanh activation function, the attention weight parameter β is obtained through a softmax function, as shown in fig. 3.
The weight values of the different spectral information are obtained, and the weights of the spectral attention layer are calculated as:

e_in = tanh(W_i g_n + b_i)

β_in = softmax(W_i' e_in + b_i')

where W_i and W_i' are transformation matrices and b_i and b_i' are bias terms, while softmax() maps the non-normalized output values to a probability distribution, constraining the output values to the (0, 1) interval. The tanh activation converts the result to e_in as a new hidden representation of h_n. The attention weight β is generated by the softmax layer.
Step S5: the attention weight parameter β is multiplied by the corresponding values of the vector g_n obtained in step S3, and the products are summed to obtain a new spectral information vector label y. The steps are as follows:

calculate the prediction label y_n for pixel X_n:

y_n = U[g_n, β]

where U() is the summation function of all state vectors weighted by the corresponding attention weights.
In practice, a spectral curve is not a flat line of constant value but a continuous curve with peaks and valleys. Some important spectral channels should therefore carry greater weight, while the minor spectral segments should be given less. The additional attention weights can enhance the spectral correlation between spectral channels and have a powerful ability to capture context information in the sequence.
To assign an appropriate weight parameter to each spectral channel, highlight and distinguish valid features, obtain more relevant and salient information, and attenuate information that is not conducive to classification, the Bi-RNN attention network is introduced, so that the model can capture the correlation between internal spectral channels, classify better, and train more accurately.
Step S6: the spatial feature information Z_O of the last fully connected layer of the 3D Octave convolution network in step S2 is extracted and combined with the new spectral information vector label y obtained from the last fully connected layer of the Bi-RNN attention network in step S5, forming a new fully connected layer and outputting the feature vector.
Step S7: the feature vector is input into a 3-layer fully connected network comprising three fully connected layers. The first two layers are normalized with Batch_normal and then activated with the relu function; to prevent overfitting, these first two layers use the regularized Dropout method, and the last fully connected layer outputs the predicted classification result with Softmax.
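A minimal tf.keras sketch of this step-S7 head follows; the layer widths and the 0.6 drop rate are assumptions for the example (0.6 being the best drop rate found in the parameter experiments below).

```python
# Step-S7 head: three fully connected layers; the first two use batch
# normalization, relu and Dropout, the last outputs the softmax prediction.
from tensorflow.keras import layers, models

def classification_head(n_classes, feature_dim, width=256, drop=0.6):
    return models.Sequential([
        layers.Dense(width, input_shape=(feature_dim,)),
        layers.BatchNormalization(),
        layers.Activation('relu'),
        layers.Dropout(drop),               # regularization against overfitting
        layers.Dense(width // 2),
        layers.BatchNormalization(),
        layers.Activation('relu'),
        layers.Dropout(drop),
        layers.Dense(n_classes, activation='softmax'),   # predicted class scores
    ])
```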
In this embodiment, four 3D Octave convolutions extract the spatial information of the hyperspectral image, reducing the redundancy of the spatial information, while a Bi-RNN attention network extracts the spectral information of the image; the attention network enhances the importance of the spectral bands with higher information content and improves classification accuracy with few training samples. The advantages of the 3D Octave convolution and the Bi-RNN attention network are fully exploited, a highly accurate classification result is obtained with few training samples, and the parallel data processing mode accelerates model execution.
Example 1
The experimental hardware platform is a high-performance computer configured with an eight-core Intel Core i9-9900K @ 3.60 GHz, 32 GB of memory, and an Nvidia GeForce RTX 2080Ti (11 GB) graphics card. The software platform is Python 3.6.0 and TensorFlow 1.14 under Windows 10.
First, experimental data and sample division
To evaluate the classification effect of the proposed method, the Pavia University dataset and the Botswana dataset were selected to verify the performance of the proposed method.
The Pavia University dataset is remote sensing image data acquired by a reflective optics imaging spectrometer sensor over the University of Pavia in northern Italy. Its pixel size is 610 × 340, with 115 original spectral bands in the 430 to 860 nm range; after 12 noise bands are removed, the remaining 103 spectral bands are used for classification. The Pavia University dataset defines 9 semantic classes; the size of each class sample and the division into training and test samples are shown in Table 1.
The Botswana dataset was acquired by NASA with the Hyperion imaging spectrometer aboard the EO-1 satellite on May 31, 2001. The image covers a 7.7 km long strip in the Okavango Delta; its spatial resolution reaches 30 m and its spectral resolution 10 nm. The image originally comprises 242 bands; after the noise-affected bands are eliminated, the remaining 145 bands are used for hyperspectral image classification. The image size is 1476 × 256, and it contains 14 different classes in total. The size of each class sample and the division into training and test samples are shown in Table 2.
Classification accuracy is measured with three commonly used evaluation indexes: overall accuracy (OA), average accuracy (AA), and the Kappa coefficient.
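For reference, the three indexes can be computed from a confusion matrix as in the following NumPy sketch (the small matrix is invented for illustration):

```python
# OA, AA and Kappa from a confusion matrix C, where C[i, j] counts
# class-i samples predicted as class j.
import numpy as np

def oa_aa_kappa(C):
    C = np.asarray(C, dtype=float)
    total = C.sum()
    oa = np.trace(C) / total                                # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))                # mean per-class accuracy
    pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / total**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

C = np.array([[50,  2,  0],
              [ 3, 45,  2],
              [ 0,  1, 47]])
print(oa_aa_kappa(C))   # about (0.947, 0.947, 0.920)
```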
TABLE 1 training set and test set sample number for the Pavia University dataset
(table available only as an image in the original publication)
TABLE 2 training and test set sample numbers for the Botswana data set
(table available only as an image in the original publication)
Second, parameter setting
In the experiments, three parameters, namely the learning rate, the spatial size and the drop rate, significantly affect the results. Taking the Pavia University dataset as an example, the experimental parameters are evaluated in detail below.
1) Learning rate: the effect of different learning rates was tested. The learning rate determines the learning process and the amount of error assigned each time the model weights are updated. Too large a learning rate causes periodic oscillation during training, while too small a rate prevents the model from converging. Learning rates of [0.01, 0.005, 0.001, 0.0007, 0.0005, 0.0003, 0.0001, 0.00007, 0.00005, 0.00003, 0.00001] were therefore tested, and the results show that classification is best at a learning rate of 0.0001.
2) Spatial size: the extraction of spatial image features depends heavily on the size of the spatial neighborhood. A larger spatial input provides more opportunity to learn spatial features, but a larger spatial region also brings unnecessary information and may over-smooth the image, so selecting an appropriate spatial size is very important for classification performance. With the number of spectral channels fixed, the learning rate at its optimal value of 0.0001, the batch size at 32, and 100 training iterations, the classification accuracies for different spatial sizes are shown in Table 3.
As can be seen from Tables 3 and 4, classification is best when the spatial size of the input data is 15 × 15 and the drop rate is 0.6; to optimize classification performance, the experiments use this best drop rate.
Likewise, on the Botswana dataset, to optimize classification performance, the learning rate was set to 0.0001, the spatial size to 13 × 13, the batch size to 16, and the number of training iterations to 400.
TABLE 3 Classification accuracy under different spatial dimensions
(table available only as an image in the original publication)
TABLE 4 Classification accuracy at different loss rates
(table available only as an image in the original publication)
Third, experimental results
To ensure the reliability of the experimental results, each experiment was repeated 10 times and the average was taken.
To verify the effectiveness and superiority of the method, the invention is compared in experiments with traditional methods and mainstream deep learning methods (SVM, 2D-CNN, ARNN, SSAN, 3DOC-SSAN). The classification performance of the different methods on the Pavia University dataset is shown in Table 5.
As the results in Table 5 show, on the Pavia University dataset the method of the invention clearly outperforms the traditional SVM, and its OA, AA and Kappa values exceed those of the other mainstream deep learning classification methods: the OA value is 9.50% higher than SVM, 0.70% higher than 2D-CNN, 1.74% higher than ARNN, 0.97% higher than SSAN and 0.10% higher than 3DOC-SSAN; the AA value is 6.28% higher than SVM, 0.55% higher than 2D-CNN, 0.79% higher than ARNN, 0.66% higher than SSAN and 0.09% higher than 3DOC-SSAN; and the Kappa value is 11.02% higher than SVM, 1.61% higher than 2D-CNN, 1.47% higher than ARNN, 2.37% higher than SSAN and 0.02% higher than 3DOC-SSAN. All three indexes show that the method outperforms the other methods in classification performance.
TABLE 5 Classification Performance of different methods on the Pavia University dataset
(table available only as an image in the original publication)
The classification maps of the different methods on the Pavia University dataset are shown in fig. 4. As can be seen, the final classification results of SVM, ARNN, SSAN and 2D-CNN all contain many scattered noise spots, and some regions are misclassified. The 3DOC-SSAN method classifies well, but a few spots remain in the lower right and upper left corners. In the classification map of the method of the invention, the ground objects are essentially completely and correctly classified, spots are hardly visible, and the homogeneous areas are relatively smooth.
The results of the classification performance comparison on the Botswana dataset are shown in Table 6, and the corresponding classification maps are shown in fig. 5.
As can be seen from Table 6, the accuracy of the proposed method on the Botswana dataset is higher than that of the other methods on all three indexes. The OA value is 8.88% higher than SVM, 1.65% higher than 2D-CNN, 2.77% higher than ARNN, 1.79% higher than SSAN and 0.36% higher than 3DOC-SSAN; the AA value is 9.67% higher than SVM, 1.95% higher than 2D-CNN, 2.71% higher than ARNN, 1.69% higher than SSAN and 0.34% higher than 3DOC-SSAN; and the Kappa value is 7.57% higher than SVM, 1.81% higher than 2D-CNN, 3.01% higher than ARNN, 1.94% higher than SSAN and 0.38% higher than 3DOC-SSAN. Classification accuracy reaches 100% on 11 classes; apart from floodplain grassland 1 at 97.38%, the remaining two classes also exceed 99.88%.
TABLE 6 Classification Performance of different methods on Botswana datasets
(table available only as an image in the original publication)
Meanwhile, as can be seen from Tables 5 and 6, the 3D Octave convolution classification method 3DOC-SSAN and the method of the invention perform significantly better than the 2D-CNN, ARNN and SSAN classification methods, which shows that 3D Octave convolution has clear advantages in reducing spatially redundant information and improving classification performance. The method of the invention also classifies better than the 3DOC-SSAN method, which lacks the Bi-RNN attention network; this shows that the Bi-RNN attention network has advantages in enhancing the extraction of spectral feature information and benefits classification performance.
In addition, compared with the 3DOC-SSAN model, the model of the present method needs no additional spatial attention network module, so it is relatively simple, and it can be processed in parallel during training; running in parallel makes it faster. The data flow of the 3DOC-SSAN method is serial: the hyperspectral data must first be preprocessed by the Octave convolution model, then fed into the spectral and spatial attention networks to extract the spatial-spectral features separately, after which the feature information is fused by a data fusion module and finally classified. The data flows of the present method, by contrast, are parallel. Moreover, one run of the Bi-RNN attention network is about 3 times faster than one run of the 3D Octave convolution network. With parallel operation, both task-based and data-based parallel processing modes are applicable: once the 3D Octave convolution network has executed, the spatial feature information can be fused directly into the network without spending extra time running spatial and spectral attention networks, which greatly reduces the running time of the model.
Fourth, conclusion
To reduce the redundancy of spatial feature information, enhance the acquisition of spectral information and improve the classification performance of hyperspectral images, the invention proposes a new model based on a 3D Octave convolution and a Bi-RNN attention network. The model has a simple structure, requires no complex pre- or post-processing of the hyperspectral image data, and can be trained end to end. Experiments show that its classification performance is clearly better than that of traditional methods; compared with current mainstream deep learning algorithms, the proposed method extracts spatial and spectral feature information more fully and classifies better.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (7)

1. A hyperspectral image classification method, the hyperspectral image being a remote sensing image captured by an aerial camera, characterized in that the hyperspectral image classification method is based on a 3D Octave convolution and a Bi-RNN attention network, wherein the Bi-RNN is a bidirectional recurrent neural network, the method comprising the following steps:
s1, acquiring a hyperspectral remote sensing image to be classified;
step S2, obtaining spatial feature information Z_O for the hyperspectral image by using 4 or more consecutive 3D Octave convolutions; the number of 3D Octave convolutions is preferably 4;
step S3, regarding the hyperspectral data output after step S1 as an ordered spectral vector; in parallel with step S2, inputting the spectral sequence element by element into the bidirectional hidden layers, and connecting the output state of the forward hidden layer and the output state of the reverse hidden layer through a series function to obtain a vector g_n;
step S4, taking the connected output vector g_n of the bidirectional hidden layer as the input of the attention module; multiplying the probability weight W_i, derived by random initialization of the attention mechanism, by the vector g_n, adding an offset parameter b_i, and, after the tanh activation function, computing the attention weight parameter β through a softmax function;
step S5, multiplying the attention weight parameter β by the corresponding values of the vector g_n obtained in step S3, and then summing the products to obtain a new spectral information vector label y;
step S6, extracting the spatial feature information Z_O of the last fully connected layer of the 3D Octave convolution network in step S2, and combining it with the new spectral information vector label y obtained from the last fully connected layer of the Bi-RNN attention network in step S5, to form a new fully connected layer and output a feature vector;
and S7, inputting the feature vectors into more than two layers of full-connection layer networks, preferably 2-5 layers, more preferably 3 layers, and predicting classification results through a softmax layer.
2. The hyperspectral image classification method according to claim 1, wherein step S2 comprises:
let the size of the image used for the hyperspectral image classification be W × H × L;
reshaping the hyperspectral image classification data into X with the size of L multiplied by N, wherein N is W multiplied by H;
the hyperspectral data X is used as the input of the 3D Octave convolutional network; assume the input data and output data of the Octave convolutional network are X = {X^H, X^L} and Z = {Z^H, Z^L} respectively, where H and L denote high-frequency and low-frequency information; that is, the input hyperspectral data X and the data Z output after processing by the 3D Octave convolution network can each be represented as the sum of corresponding high-frequency and low-frequency parts;
the Octave convolution model is built as follows:
Z^H = Z^{H→H} + Z^{L→H} and Z^L = Z^{L→L} + Z^{H→L}
where Z^{H→H} and Z^{L→L} represent the intra-frequency updating of the hyperspectral image data information at high and low frequency respectively, and Z^{L→H} and Z^{H→L} represent the conversion of the information from low frequency to high frequency and from high frequency to low frequency respectively;
to update and convert the high-frequency and low-frequency feature information of the hyperspectral image, assume the weight parameters corresponding to the Octave convolution model are W = [W^H, W^L]; likewise, the weight parameters W^H and W^L are defined as W^L = [W^{L→L}, W^{H→L}] and W^H = [W^{H→H}, W^{L→H}], where W^{H→H} and W^{L→L} denote the information update weights within the corresponding frequency and W^{H→L} and W^{L→H} denote the information conversion weights between the corresponding frequencies;
from the above, Z^H and Z^L are obtained respectively as:

Z^H = Σ(W^{H→H})^T X^H + up(Σ(W^{L→H})^T X^L)  (1)

Z^L = Σ(W^{L→L})^T X^L + Σ(W^{H→L})^T pool(X^H)  (2)

where T in formulas (1) and (2) denotes matrix transposition, up denotes the up-sampling operation, and pool denotes the average pooling operation;
calculating the Octave convolution network output Z, where the expression of Z is:

Z = [Z^L, Z^H]
  = [(Z^{L→L} + Z^{H→L}), (Z^{H→H} + Z^{L→H})]
  = [Σ(W^L)^T X, Σ(W^H)^T X]
  = [Σ(W^{L→L})^T X^L + Σ(W^{H→L})^T pool(X^H), Σ(W^{H→H})^T X^H + up(Σ(W^{L→H})^T X^L)].
3. the hyperspectral image classification method according to claim 1, wherein the step S3 comprises:
let the hyperspectral input data X be an ordered spectral vector, X = (X_1, X_2, X_3, ..., X_n), and calculate the bidirectional hidden layer outputs h_n of the Bi-RNN network as follows:

h_n^→ = f(W_x^→ X_n + W_h^→ h_{n-1}^→)  (3)

h_n^← = f(W_x^← X_n + W_h^← h_{n+1}^←)  (4)

in formulas (3) and (4), n ranges over the spectral bands 1 to m; the coefficient matrices W_x^→ and W_x^← act on the input of the current hidden layer, the forward recursion carries the previous hidden state h_{n-1}, the backward recursion starts from the subsequent hidden state h_{n+1}, and f is the nonlinear activation function of the hidden layer; taking the output of the encoder as the input of the vector g_n, g_n is calculated as:

g_n = concat(h_n^→, h_n^←)  (5)
where concat () is a series function between the forward hidden state function and the reverse hidden state function.
4. The hyperspectral image classification method according to claim 1, wherein the step S4 comprises:
acquiring the weight values of the different spectral information, the weights of the attention layer being calculated as:

e_in = tanh(W_i g_n + b_i)  (6)

β_in = softmax(W_i' e_in + b_i')  (7)

in formulas (6) and (7), W_i and W_i' are transformation matrices, b_i and b_i' are bias terms, and softmax() maps the non-normalized output values to a probability distribution, constraining the output values to the (0, 1) interval; formula (6) is a one-layer neural network that rearranges the state vector space of the Bi-RNN, after which tanh activation converts the result to e_in as a new hidden representation of h_n; formula (7) generates the attention weight β through the softmax layer, where β_in is one component of the attention weight parameter β, namely the i-th weight parameter; here the importance of the input is measured by the correlation between e_in and another channel vector, e_in being an intermediate parameter.
5. The hyperspectral image classification method according to claim 1, wherein the step S5 comprises:
calculate the prediction tag y for pixel Xn
yn=U[gn,β] (8)
Where U () is the sum function of all state vectors weighted by the corresponding attention weights; a prediction label y of the pixel XnIs a component of the spectral information vector label y.
6. The hyperspectral image classification method according to claim 1, wherein step S7 comprises: inputting the feature vector into a 3-layer fully connected network comprising three fully connected layers; normalizing the first two of the three fully connected layers with Batch_normal, activating them with the relu function, then applying the regularized Dropout method, and outputting the predicted classification result with Softmax in the last fully connected layer.
7. The hyperspectral image classification method according to any of claims 1 to 6, wherein the hyperspectral image classification method is performed using a hyperspectral image classification system comprising a hyperspectral image module (1), a convolutional network module (2), a Bi-RNN attention network module (3), a spatio-spectral feature fusion network module (4) and a classified image module (5), the step S1 is performed in the hyperspectral image module (1), the step S2 is performed in the 3D Octave convolutional network module (2), the steps S3 to S5 are performed in the Bi-RNN attention network module (3), the step S6 is performed in the spatio-spectral feature fusion network module (4), and the step S7 is performed in the classified image module (5).
CN202011468157.9A 2020-12-14 2020-12-14 Hyperspectral image classification method Active CN112464891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011468157.9A CN112464891B (en) 2020-12-14 2020-12-14 Hyperspectral image classification method

Publications (2)

Publication Number Publication Date
CN112464891A (en) 2021-03-09
CN112464891B (en) 2023-06-16

Family

ID=74803979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011468157.9A Active CN112464891B (en) 2020-12-14 2020-12-14 Hyperspectral image classification method

Country Status (1)

Country Link
CN (1) CN112464891B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180268195A1 (en) * 2016-01-27 2018-09-20 Shenzhen University Gabor cube feature selection-based classification method and system for hyperspectral remote sensing images
CN110516596A (en) * 2019-08-27 2019-11-29 西安电子科技大学 Empty spectrum attention hyperspectral image classification method based on Octave convolution
CN111507409A (en) * 2020-04-17 2020-08-07 中国人民解放军战略支援部队信息工程大学 Hyperspectral image classification method and device based on depth multi-view learning
CN111898662A (en) * 2020-07-20 2020-11-06 北京理工大学 Coastal wetland deep learning classification method, device, equipment and storage medium
CN111965116A (en) * 2020-07-21 2020-11-20 天津大学 Hyperspectrum-based airport gas detection system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周宇谷; 王平; 高颖慧: "Remote sensing image classification method based on the bag-of-visual-words model", Journal of Chongqing University of Technology (Natural Science), no. 05 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887328A (en) * 2021-09-10 2022-01-04 天津理工大学 Method for extracting space-time characteristics of photonic crystal space transmission spectrum in parallel by ECA-CNN fusion dual-channel RNN
CN114220002A (en) * 2021-11-26 2022-03-22 通辽市气象台(通辽市气候生态环境监测中心) Method and system for monitoring invasion of foreign plants based on convolutional neural network
CN114220002B (en) * 2021-11-26 2022-11-15 通辽市气象台(通辽市气候生态环境监测中心) Method and system for monitoring invasion of foreign plants based on convolutional neural network
CN115979973A (en) * 2023-03-20 2023-04-18 湖南大学 Hyperspectral traditional Chinese medicinal material identification method based on dual-channel compression attention network

Also Published As

Publication number Publication date
CN112464891B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
CN107316013B (en) Hyperspectral image classification method based on NSCT (non-subsampled Contourlet transform) and DCNN (data-to-neural network)
Fu et al. DSAGAN: A generative adversarial network based on dual-stream attention mechanism for anatomical and functional image fusion
Wang et al. Dual-channel capsule generation adversarial network for hyperspectral image classification
Li et al. Fast infrared and visible image fusion with structural decomposition
CN112464891A (en) Hyperspectral image classification method
CN111738124A (en) Remote sensing image cloud detection method based on Gabor transformation and attention
CN108491849A (en) Hyperspectral image classification method based on three-dimensional dense connection convolutional neural networks
CN109598732B (en) Medical image segmentation method based on three-dimensional space weighting
CN112733589B (en) Infrared image pedestrian detection method based on deep learning
CN110443296B (en) Hyperspectral image classification-oriented data adaptive activation function learning method
CN116482618B (en) Radar active interference identification method based on multi-loss characteristic self-calibration network
CN116434069A (en) Remote sensing image change detection method based on local-global transducer network
CN117058558A (en) Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network
Jiang et al. Hyperspectral image classification with CapsNet and Markov random fields
Wang et al. Spectral-spatial global graph reasoning for hyperspectral image classification
CN116977723A (en) Hyperspectral image classification method based on space-spectrum hybrid self-attention mechanism
Lin et al. A frequency-domain convolutional neural network architecture based on the frequency-domain randomized offset rectified linear unit and frequency-domain chunk max pooling method
Tian et al. Object feedback and feature information retention for small object detection in intelligent transportation scenes
CN114612709A (en) Multi-scale target detection method guided by image pyramid characteristics
CN116486183B (en) SAR image building area classification method based on multiple attention weight fusion characteristics
Li et al. Hyperspectral image fusion algorithm based on improved deep residual network
Fei et al. A GNN Architecture With Local and Global-Attention Feature for Image Classification
Xiao et al. Feature-level image fusion
Kuang et al. A spectral-spatial attention aggregation network for hyperspectral imagery classification
CN116229153A (en) Feature classification method based on spectrum space fusion transducer feature extraction

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant