CN112508066A - Hyperspectral image classification method based on residual error full convolution segmentation network - Google Patents

Hyperspectral image classification method based on residual error full convolution segmentation network

Info

Publication number
CN112508066A
CN112508066A
Authority
CN
China
Prior art keywords
convolution
hyperspectral image
expansion
residual
segmentation network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011337421.5A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202011337421.5A priority Critical patent/CN112508066A/en
Publication of CN112508066A publication Critical patent/CN112508066A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image classification method based on a residual full convolution segmentation network, which overcomes key defects and shortcomings of existing hyperspectral image classification techniques. According to the invention, the network structure and parameters are constrained in accordance with the practical problems faced by hyperspectral image classification, and a residual full convolution segmentation network suited to hyperspectral image classification is constructed. Compared with existing hyperspectral image classification methods, the method maintains high classification accuracy while greatly reducing the computational complexity of hyperspectral image classification, providing stronger technical support for the wide application of hyperspectral remote sensing images.

Description

Hyperspectral image classification method based on residual error full convolution segmentation network
Technical Field
The invention relates to the technical field of hyperspectral remote sensing image classification, in particular to a hyperspectral image classification method based on a residual error full convolution segmentation network.
Background
Hyperspectral image classification assigns a semantic category to each pixel of a hyperspectral image using the spectral or spectral-spatial information of ground objects, yielding a pixel-level dense classification of the image; it is widely applied in agriculture, environmental management and monitoring, geophysics, and other fields. Over the last 30 years, researchers have proposed a variety of hyperspectral image classification methods. These conventional methods basically follow a unified two-stage framework: feature extraction followed by classification. Spectral features are usually extracted with methods such as PCA, ICA and LDA, while spatial features are usually extracted with morphological filtering, wavelet transforms, LBP, Gabor transforms and the like. The classifier is usually an SVM, KNN, neural network or Bayesian classifier.
In recent years, with the development of deep convolutional neural networks, researchers have also proposed various hyperspectral image classification methods based on them. Compared with traditional two-stage methods, deep-convolutional-network-based methods not only unify feature extraction and classification, but also realize nonlinear feature extraction by stacking multiple convolutional and activation layers, so they generally achieve higher classification accuracy and a higher degree of integration.
However, both the conventional two-stage methods and the emerging deep-convolutional-network-based methods process the image pixel by pixel: the input is the 1D spectral or 3D spectral-spatial information of a single pixel, and the output is that pixel's ground object class.
In practical applications, when facing real-scene hyperspectral images, such pixel-by-pixel processing usually incurs a high computational cost, which greatly limits the wide application of hyperspectral image classification technology.
Disclosure of Invention
The invention aims to overcome the major defect that existing hyperspectral image classification technology can only process pixel by pixel, and provides an easy-to-implement hyperspectral image classification method based on a residual full convolution segmentation network.
In order to achieve the above object, the method comprises the steps of:
(1) constructing a residual full convolution segmentation network, which comprises a 3D convolution subnet, a 2D convolution subnet and one 2D pooling layer;
(2) cropping a number of spectral-spatial cube data blocks centered on marked pixel points from the hyperspectral image of the data set, and establishing a training sample set;
(3) feeding the training sample set into the residual full convolution segmentation network, and minimizing the objective function of the network with a stochastic gradient descent algorithm, thereby iteratively optimizing the weights and biases in the network and finally obtaining the trained residual full convolution segmentation network;
(4) preprocessing the hyperspectral image to be processed to obtain its spatial boundary-expanded image;
(5) inputting the boundary-expanded image of the hyperspectral image to be processed into the trained residual full convolution segmentation network to obtain a pixel-level dense classification result of the hyperspectral image to be processed.
Further, the residual full convolution segmentation network constructed in step (1) must satisfy the following constraints: I) some convolution/pooling layers perform no boundary expansion (padding); II) no spatial down-sampling operation is introduced in the 2D convolution subnet; III) a fully convolutional network structure is adopted.
Further, in the step (1), the 3D convolutional subnets, the 2D convolutional subnets and the 2D pooling layer in the residual full convolutional partitioning network are connected in series.
Further, in step (1), the 3D convolution subnet comprises one 3D input convolution layer, a number of hidden 3D residual blocks and one 3D output convolution layer.
Further, in step (1), the 2D convolution subnet comprises one 2D input convolution layer, a number of hidden 2D residual blocks and one 2D output convolution layer.
Further, the 3D residual blocks in the 3D convolution subnet and the 2D residual blocks in the 2D convolution subnet are each chosen as one of a basic residual block, a bottleneck residual block or a pyramid residual block, preferably the basic residual block. The number of residual blocks is chosen in the range of 1-4, preferably 2.
Further, in the 3D convolution subnet, the 3D input convolution layer has no boundary expansion and a spectral-dimension stride of 2; all convolution layers in the hidden 3D residual blocks have boundary expansion and a stride of 1; the boundary expansion is one of zero-padding expansion, mirror expansion and replicate expansion, preferably zero-padding expansion.
Further, in the 3D convolution subnet, the 3D output convolution layer has no boundary expansion and a stride of 1, and its convolution kernel size is adjusted according to the number of spectral bands of the hyperspectral images in the data set and the kernel size of the 3D input convolution layer, as follows: if the number of spectral bands of the hyperspectral image is N and the kernel size of the 3D input convolution layer is 1 x 1 x s, the kernel size of the 3D output convolution layer is 1 x 1 x (N - s + 1)/2.
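As a consistency check on this kernel-size rule, the standard convolution output-length formula reproduces the stated value (a small sketch; the helper name is ours, not from the patent):

```python
def conv_out_len(n, kernel, stride=1, pad=0):
    # standard convolution output-length formula: floor((n + 2*pad - kernel)/stride) + 1
    return (n + 2 * pad - kernel) // stride + 1

N, s = 200, 7                                  # IP dataset bands; 3D input-conv spectral kernel
after_input = conv_out_len(N, s, stride=2)     # no padding, spectral stride 2
assert after_input == (N - s + 1) // 2 == 97   # matches the 1 x 1 x (N - s + 1)/2 rule
# a 1 x 1 x 97 output-conv kernel then collapses the spectral dimension to length 1
assert conv_out_len(after_input, after_input) == 1
```

For the 200-band IP image with s = 7 this gives the 1 x 1 x 97 output kernel used in the embodiment below.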
Further, in the 2D convolution subnet, the 2D input convolution layer has no boundary expansion and a stride of 1; all convolution layers in the hidden 2D residual blocks and the 2D output convolution layer have boundary expansion and a stride of 1; the boundary expansion is one of zero-padding expansion, mirror expansion and replicate expansion, preferably zero-padding expansion.
Further, the 2D pooling layer has no boundary expansion and a stride of 1, and average pooling is selected as the pooling operation.
Furthermore, the size of the pooling kernel in the 2D pooling layer is adjusted according to the spatial size of the training samples: if the spatial size of a training sample is (2n+1) x (2n+1), the pooling kernel size in the 2D pooling layer is (2n-1) x (2n-1), because the unpadded 2D input convolution layer has already reduced the spatial size from (2n+1) x (2n+1) to (2n-1) x (2n-1).
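The size arithmetic behind this rule can be traced explicitly (a sketch; the helper name is ours):

```python
def conv_out(n, kernel, stride=1, pad=0):
    # convolution / pooling output-size formula along one spatial axis
    return (n + 2 * pad - kernel) // stride + 1

n = 3                              # training patches are (2n+1) x (2n+1) = 7 x 7
size = conv_out(2 * n + 1, 3)      # unpadded 3x3 2D input conv: 7 -> 5 = 2n - 1
assert size == 2 * n - 1
# padded residual blocks and the 1x1 output conv preserve this size,
# so a (2n-1) x (2n-1) average pool with stride 1 reduces it to a single pixel
assert conv_out(size, 2 * n - 1) == 1
```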
Further, the number of output channels of the 2D output convolution layer in the 2D convolution sub-network can be adjusted according to the number of ground object types in the data set, and the specific adjustment mode is as follows: if the number of the ground object types in the data set is C, the number of output channels of the 2D output convolution layer is also C.
Further, in step (2), the sample set is established as follows: crop the spectral-spatial cube data block of the (2n+1) x (2n+1) neighborhood around each marked pixel point in the hyperspectral image, put the data block and its true class label into the sample set as one sample, and then randomly select k% of the samples of each class from the sample set to form the training sample set, where n ranges from 1 to 12, preferably 3 to 6, and k ranges from 5 to 50, preferably 15 to 20.
Further, the objective function of the residual full convolution partition network in the step (3) is cross entropy loss between the prediction class and the real class.
Further, in the step (3), the input of the residual full convolution partition network is spectrum-space cube data corresponding to a small neighborhood around the central pixel point, and the output is a prediction category of the central pixel point.
Further, in step (4), the boundary expansion is one of zero-padding expansion, mirror expansion and replicate expansion, preferably mirror expansion. The expansion size on the top, bottom, left and right of the image is adjusted according to the spatial size of the training samples, as follows: if the spatial size of a training sample is (2n+1) x (2n+1), the expansion size on each side is n.
Further, in the step (5), the input of the residual full convolution segmentation network is 3D spatial spectrum data corresponding to the boundary expansion result of the hyperspectral image to be processed, and the output is a pixel-level dense classification result of the hyperspectral image to be processed.
Compared with the prior art, the invention has the following beneficial effects:
The invention overcomes the defects of existing hyperspectral image classification methods by constraining the network structure and parameters to construct a residual full convolution segmentation network suited to hyperspectral image classification. Based on this network, training can be performed with the spectral-spatial information of individual pixel points as input, while dense classification can take the whole hyperspectral image as input. This avoids pixel-by-pixel processing, greatly reduces the computational complexity of hyperspectral image classification, and provides powerful technical support for the wide application of hyperspectral remote sensing images.
Drawings
FIG. 1 is a flow chart of a hyperspectral image classification method based on a residual full convolution segmentation network.
FIG. 2 is a block diagram of an implementation of an embodiment of the invention.
Fig. 3 is a model structure diagram of a residual full convolution partition network according to an embodiment of the present invention.
Detailed Description
The present invention is further illustrated by the following figures and embodiment; the invention includes, but is not limited to, this embodiment.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, B alone, or both A and B. The term "/and" describes another association relationship and indicates that two relationships may exist; for example, "A/and B" may mean: A alone, or both A and B. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Examples
The data set used in this embodiment is the public Indian Pines (IP) data set, which comprises: 1) a hyperspectral image with 200 spectral bands and a spatial size of 145 x 145; 2) ground object label information for each pixel in the image, covering 16 ground object classes; unmarked pixel points are simply marked as background points and do not participate in training.
This embodiment provides a hyperspectral image classification method based on a residual full convolution segmentation network, in which the network structure/hyper-parameters are constrained as follows: 1) some convolution/pooling layers perform no boundary expansion; 2) no spatial down-sampling operation is introduced in the network; 3) a fully convolutional network is adopted. This well overcomes the defect that existing CNN-based hyperspectral image classification methods can only process pixel by pixel, and the method can be widely applied to hyperspectral image classification.
As shown in fig. 1 and fig. 2, the hyperspectral image classification method based on the residual full convolution segmentation network mainly includes 5 steps. The respective steps will be described in detail below.
(1) Construct a residual full convolution segmentation network satisfying the above constraints; this is the key step of the invention. As shown in fig. 3, the residual full convolution segmentation network comprises a 3D convolution subnet, a 2D convolution subnet and a 2D pooling layer connected in series.
The 3D convolution subnet comprises one 3D input convolution layer, 2 hidden 3D basic residual blocks and one 3D output convolution layer.
The 3D input convolution layer has no boundary expansion and a spectral-dimension stride of 2, a kernel size of 1 x 1 x 7 and 24 output channels; the convolution layers in the 3D basic residual blocks have boundary expansion and a stride of 1 in the spectral dimension, a kernel size of 1 x 1 x 7 and 24 output channels, with the boundary expansion filled by zero-padding; the 3D output convolution layer has no boundary expansion and a stride of 1, a kernel size of 1 x 1 x 97 and 128 output channels.
The 2D convolution subnet comprises one 2D input convolution layer, 2 hidden 2D basic residual blocks and one 2D output convolution layer. The 2D input convolution layer has no boundary expansion and a stride of 1, a kernel size of 3 x 3 and 24 output channels; the convolution layers in the 2D basic residual blocks have boundary expansion and a stride of 1, a kernel size of 3 x 3 and 24 output channels, with the boundary expansion filled by zero-padding. The 2D output convolution layer has boundary expansion and a stride of 1, a kernel size of 1 x 1 and 16 output channels, with the boundary expansion filled by zero-padding.
The 2D pooling layer has no boundary expansion and a stride of 1, a pooling kernel size of 5 x 5 and 16 output channels, and average pooling is selected.
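The architecture described above can be sketched in PyTorch as follows. This is a minimal sketch under stated assumptions: ReLU activations, identity-shortcut basic residual blocks and the absence of normalization layers are our choices, since the patent does not specify them; the class and attribute names are ours. The 3D subnet uses the (batch, channel, spectrum, height, width) tensor layout.

```python
import torch
import torch.nn as nn

class BasicResBlock(nn.Module):
    """Basic residual block with identity shortcut; `conv` is nn.Conv2d or nn.Conv3d."""
    def __init__(self, conv, channels, kernel, pad):
        super().__init__()
        self.c1 = conv(channels, channels, kernel, padding=pad)
        self.c2 = conv(channels, channels, kernel, padding=pad)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.c2(self.act(self.c1(x))))

class ResFCN(nn.Module):
    """Residual full convolution segmentation network of the embodiment (sketch)."""
    def __init__(self, bands=200, classes=16, n=3):
        super().__init__()
        d = (bands - 7) // 2 + 1  # spectral length after the 3D input conv (97 for 200 bands)
        self.subnet3d = nn.Sequential(
            nn.Conv3d(1, 24, (7, 1, 1), stride=(2, 1, 1)),  # spectral stride 2, no padding
            BasicResBlock(nn.Conv3d, 24, (7, 1, 1), (3, 0, 0)),
            BasicResBlock(nn.Conv3d, 24, (7, 1, 1), (3, 0, 0)),
            nn.Conv3d(24, 128, (d, 1, 1)),  # 1 x 1 x 97 kernel collapses the spectral axis
        )
        self.subnet2d = nn.Sequential(
            nn.Conv2d(128, 24, 3),  # unpadded 3x3: a (2n+1)x(2n+1) patch shrinks to (2n-1)x(2n-1)
            BasicResBlock(nn.Conv2d, 24, 3, 1),
            BasicResBlock(nn.Conv2d, 24, 3, 1),
            nn.Conv2d(24, classes, 1),
            nn.AvgPool2d(2 * n - 1, stride=1),  # (2n-1)x(2n-1) average pool, stride 1
        )

    def forward(self, x):  # x: (batch, bands, H, W)
        f = self.subnet3d(x.unsqueeze(1)).squeeze(2)  # -> (batch, 128, H, W)
        return self.subnet2d(f)
```

On a 7 x 7 training patch the output is (batch, 16, 1, 1); because every layer is convolutional and nothing downsamples space, feeding a larger mirror-padded image instead yields a dense per-pixel score map.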
As shown in fig. 2, in this embodiment, each input sample in the training stage is a 7 x 7 image block randomly cropped from the original data set, and the output is the ground object classification result of the block's central pixel point; in the prediction stage, a hyperspectral image of arbitrary size is input and its pixel-level dense classification result is output.
(2) Crop a number of spectral-spatial cube data blocks centered on marked pixel points from the hyperspectral image of the data set, and establish the training sample set.
The specific method is as follows: first crop the spectral-spatial cube block of the 7 x 7 neighborhood around each marked pixel point in the hyperspectral image and put it, together with its true class label, into the sample set; then randomly select 20% of the samples of each class, about 2000 samples in total, from the sample set to form the training sample set.
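This cropping-and-sampling step can be sketched with NumPy as follows. The function and variable names are ours, and we assume labels use 0 for unlabeled background (as in the IP ground truth) and that border pixels get full neighborhoods via mirror expansion:

```python
import numpy as np

def build_training_set(image, labels, n=3, k=20.0, seed=0):
    """Crop a (2n+1)x(2n+1) spectral-spatial block around every labeled pixel
    and randomly keep k% of each class (label 0 = unlabeled background)."""
    rng = np.random.default_rng(seed)
    padded = np.pad(image, ((n, n), (n, n), (0, 0)), mode="reflect")  # mirror expansion
    samples, targets = [], []
    for r, c in zip(*np.nonzero(labels > 0)):
        samples.append(padded[r:r + 2 * n + 1, c:c + 2 * n + 1, :])
        targets.append(labels[r, c] - 1)          # classes 1..C -> 0..C-1
    samples, targets = np.stack(samples), np.asarray(targets)
    keep = []
    for cls in np.unique(targets):
        idx = np.nonzero(targets == cls)[0]
        take = max(1, int(round(len(idx) * k / 100.0)))
        keep.extend(rng.choice(idx, size=take, replace=False).tolist())
    keep = np.asarray(keep)
    return samples[keep], targets[keep]
```

With the 145 x 145 x 200 IP image, n = 3 and k = 20, this yields roughly 20% of the labeled pixels, about 2000 samples, as in the embodiment.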
(3) Feed the training sample set into the constructed residual full convolution segmentation network, and minimize the objective function of the network (the cross entropy between the network output and the true label) with a stochastic gradient descent algorithm, thereby iteratively optimizing the weights and biases in the network and finally obtaining the trained residual full convolution segmentation network.
In this embodiment, the network input is the spectral-spatial data P ∈ R^(7×7×200) of the neighborhood around a pixel point, and the network output is the class-score vector x ∈ R^16 of that pixel point. The cross-entropy loss L(x, c) between x and the true label c is:

L(x, c) = -log( exp(x_c) / Σ_{j=1}^{16} exp(x_j) )

To learn the network parameters, a stochastic gradient descent algorithm is used to minimize the cross-entropy loss. The gradient ∂L(x, c)/∂W of the loss function with respect to all network parameters W can be computed exactly by the neural-network back-propagation algorithm. After the gradient is obtained, the network parameters (weights and biases) are adjusted iteratively:

W ← W - ε · ∂L(x, c)/∂W

where ε denotes the learning rate, set to 0.01, and the number of iterations is 200.
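The update rule above can be sketched with PyTorch's SGD optimizer (a minimal sketch: the function name is ours, full-batch updates are shown for simplicity, and the model is any network producing per-sample class scores):

```python
import torch
import torch.nn as nn

def train(model, patches, labels, iterations=200, lr=0.01):
    """Minimize cross-entropy by gradient descent (epsilon = 0.01, 200 iterations).
    `patches`: (M, bands, 2n+1, 2n+1) float tensor; `labels`: (M,) long tensor."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # L(x, c) = -log softmax(x)[c]
    for _ in range(iterations):
        opt.zero_grad()
        logits = model(patches).flatten(1)   # e.g. (M, classes, 1, 1) -> (M, classes)
        loss = loss_fn(logits, labels)
        loss.backward()                      # back-propagation computes dL/dW
        opt.step()                           # W <- W - epsilon * dL/dW
    return model
```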
(4) Perform mirror boundary expansion on the hyperspectral image to be processed, expanding by 3 pixel points on each of the four sides (top, bottom, left and right), to obtain the boundary-expanded image of the hyperspectral image to be processed.
(5) Input the boundary-expanded image of the hyperspectral image to be processed into the trained residual full convolution segmentation network model, and the network outputs the pixel-level dense classification result of the hyperspectral image to be processed.
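Steps (4) and (5) amount to a single padded forward pass; a sketch follows (the function name is ours, and `model` stands for any trained fully convolutional network whose receptive field matches the (2n+1) x (2n+1) training patches):

```python
import torch
import torch.nn.functional as F

def classify_image(model, image, n=3):
    """Dense classification of a whole hyperspectral image, `image`: (bands, H, W).
    Mirror-pads n pixels on each side so one forward pass yields an H x W label map."""
    x = F.pad(image.unsqueeze(0), (n, n, n, n), mode="reflect")  # mirror boundary expansion
    with torch.no_grad():
        scores = model(x)                    # (1, classes, H, W): no layer downsamples space
    return scores.argmax(dim=1).squeeze(0)   # per-pixel predicted class map
```

Because no spatial down-sampling is present, the score map is pixel-aligned with the input image, so no pixel-by-pixel loop is needed.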
The invention is well implemented in accordance with the above-described embodiments. It should be noted that, based on the above design principle, even if some insubstantial modifications or changes are made on the basis of the disclosed structure or method, the adopted technical solution is still the same as the present invention, and therefore, the technical solution is also within the protection scope of the present invention.

Claims (10)

1. A hyperspectral image classification method based on a residual error full convolution segmentation network is characterized by comprising the following steps:
(1) constructing a residual full convolution segmentation network, wherein the residual full convolution segmentation network comprises a 3D convolution subnet, a 2D convolution subnet and one 2D pooling layer;
(2) cropping a number of spectral-spatial cube data blocks centered on marked pixel points from a hyperspectral image of a data set, and establishing a training sample set;
(3) feeding the training sample set into the residual full convolution segmentation network, and minimizing the objective function of the network with a stochastic gradient descent algorithm, thereby iteratively optimizing the weights and biases in the network and finally obtaining the trained residual full convolution segmentation network;
(4) preprocessing a hyperspectral image to be processed to obtain its spatial boundary-expanded image;
(5) inputting the boundary-expanded image of the hyperspectral image to be processed into the trained residual full convolution segmentation network to obtain a pixel-level dense classification result of the hyperspectral image to be processed.
2. The hyperspectral image classification method based on the residual error full convolution segmentation network of claim 1, wherein the 3D convolution sub-network in the step (1) comprises 1 3D input convolution layer, a plurality of 3D residual blocks, and 1 3D output convolution layer; the 2D convolutional subnet comprises 1 2D input convolutional layer, a plurality of 2D residual blocks and 1 2D output convolutional layer; the 3D residual block and the 2D residual block are selected from one of a basic residual block, a bottleneck residual block or a pyramid residual block, preferably the basic residual block, and the number selection range of the 3D residual block and the 2D residual block is 1-4, preferably 2.
3. The hyperspectral image classification method based on the residual error full convolution segmentation network according to claim 2, wherein the 3D input convolution layer has no boundary expansion and a spectral-dimension stride of 2; all convolution layers in the 3D residual blocks have boundary expansion and a stride of 1, the boundary expansion being one of zero-padding expansion, mirror expansion and replicate expansion, preferably zero-padding expansion; the 3D output convolution layer has no boundary expansion and a stride of 1, and its kernel size is adjusted according to the number of spectral bands of the hyperspectral images in the data set and the kernel size of the 3D input convolution layer, as follows: if the number of spectral bands of the hyperspectral image is N and the kernel size of the 3D input convolution layer is 1 x 1 x s, the kernel size of the 3D output convolution layer is 1 x 1 x (N - s + 1)/2.
4. The hyperspectral image classification method based on the residual error full convolution segmentation network according to claim 2, wherein in the 2D convolution subnet, the 2D input convolution layer has no boundary expansion and a stride of 1; all convolution layers in the hidden 2D residual blocks and the 2D output convolution layer have boundary expansion and a stride of 1, the boundary expansion being one of zero-padding expansion, mirror expansion and replicate expansion, preferably zero-padding expansion; the number of output channels of the 2D output convolution layer is adjusted according to the number of ground object classes in the data set, as follows: if the number of ground object classes in the data set is C, the number of output channels of the 2D output convolution layer is also C.
5. The hyperspectral image classification method based on the residual error full convolution segmentation network according to claim 2, wherein the 2D pooling layer has no boundary expansion and a stride of 1, average pooling being selected; the pooling kernel size is adjusted according to the spatial size of the training samples, as follows: if the spatial size of a training sample is (2n+1) x (2n+1), the pooling kernel size in the 2D pooling layer is (2n-1) x (2n-1).
6. The hyperspectral image classification method based on the residual error full convolution segmentation network according to claim 1, wherein the specific method for establishing the training sample set in step (2) is: first crop the spectral-spatial cube data block of the (2n+1) x (2n+1) neighborhood around each marked pixel point in the hyperspectral image and put it, together with its class label, into a sample set; then randomly select k% of the samples of each class from the sample set to form the training sample set, wherein n ranges from 1 to 12, preferably 3 to 6, and k ranges from 5 to 50, preferably 15 to 20.
7. The method for hyperspectral image classification based on the residual fully convolutional segmentation network of claim 1, wherein the objective function of the residual fully convolutional segmentation network in the step (3) is cross entropy loss between predicted classes and real classes.
8. The hyperspectral image classification method based on the residual error full convolution segmentation network of claim 1 is characterized in that in the step (3), the input of the residual error full convolution segmentation network is spectrum-space cube data corresponding to a small neighborhood around a central pixel point, and the output is a ground object category of the central pixel point.
9. The hyperspectral image classification method based on the residual error full convolution segmentation network according to claim 1, wherein in step (4), the boundary expansion is one of zero-padding expansion, mirror expansion and replicate expansion, preferably mirror expansion, and the expansion size on the top, bottom, left and right of the image is adjusted according to the spatial size of the training samples: if the spatial size of a training sample is (2n+1) x (2n+1), the expansion size on each side is n.
10. The hyperspectral image classification method based on the residual error fully-convolutional segmentation network of claim 1, wherein in the step (5), the input of the residual error fully-convolutional segmentation network is 3D spatial spectrum data corresponding to the boundary expansion result of the hyperspectral image to be processed, and the output is a pixel-level dense classification result of the hyperspectral image to be processed.
CN202011337421.5A 2020-11-25 2020-11-25 Hyperspectral image classification method based on residual error full convolution segmentation network Pending CN112508066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011337421.5A CN112508066A (en) 2020-11-25 2020-11-25 Hyperspectral image classification method based on residual error full convolution segmentation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011337421.5A CN112508066A (en) 2020-11-25 2020-11-25 Hyperspectral image classification method based on residual error full convolution segmentation network

Publications (1)

Publication Number Publication Date
CN112508066A true CN112508066A (en) 2021-03-16

Family

ID=74958630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011337421.5A Pending CN112508066A (en) 2020-11-25 2020-11-25 Hyperspectral image classification method based on residual error full convolution segmentation network

Country Status (1)

Country Link
CN (1) CN112508066A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015064218A (en) * 2013-09-24 2015-04-09 住友電気工業株式会社 Optical measuring system, and operating method for the same
CN110689065A (en) * 2019-09-23 2020-01-14 云南电网有限责任公司电力科学研究院 Hyperspectral image classification method based on flat mixed convolution neural network
CN110866552A (en) * 2019-11-06 2020-03-06 西北工业大学 Hyperspectral image classification method based on full convolution space propagation network
CN111353463A (en) * 2020-03-12 2020-06-30 北京工业大学 Hyperspectral image classification method based on random depth residual error network
CN111652039A (en) * 2020-04-13 2020-09-11 上海海洋大学 Hyperspectral remote sensing ground object classification method based on residual error network and feature fusion module

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326847A (en) * 2021-06-04 2021-08-31 天津大学 Remote sensing image semantic segmentation method and device based on full convolution neural network
CN113537228A (en) * 2021-07-07 2021-10-22 中国电子科技集团公司第五十四研究所 Real-time image semantic segmentation method based on depth features
CN113537228B (en) * 2021-07-07 2022-10-21 中国电子科技集团公司第五十四研究所 Real-time image semantic segmentation method based on depth features
CN113807362A (en) * 2021-09-03 2021-12-17 西安电子科技大学 Image classification method based on interlayer semantic information fusion deep convolutional network
CN113807362B (en) * 2021-09-03 2024-02-27 西安电子科技大学 Image classification method based on interlayer semantic information fusion depth convolution network

Similar Documents

Publication Publication Date Title
CN108537192B (en) Remote sensing image earth surface coverage classification method based on full convolution network
CN111368896B (en) Hyperspectral remote sensing image classification method based on dense residual three-dimensional convolutional neural network
CN109919206B (en) Remote sensing image earth surface coverage classification method based on full-cavity convolutional neural network
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
CN112508066A (en) Hyperspectral image classification method based on residual error full convolution segmentation network
CN109376804A (en) Based on attention mechanism and convolutional neural networks Classification of hyperspectral remote sensing image method
CN111310666B (en) High-resolution image ground feature identification and segmentation method based on texture features
CN111695467A (en) Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion
CN107358260B (en) Multispectral image classification method based on surface wave CNN
CN113674334B (en) Texture recognition method based on depth self-attention network and local feature coding
CN110728192A (en) High-resolution remote sensing image classification method based on novel characteristic pyramid depth network
CN110728197B (en) Single-tree-level tree species identification method based on deep learning
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN113705580B (en) Hyperspectral image classification method based on deep migration learning
CN111626267B (en) Hyperspectral remote sensing image classification method using void convolution
CN116797787B (en) Remote sensing image semantic segmentation method based on cross-modal fusion and graph neural network
CN110689065A (en) Hyperspectral image classification method based on flat mixed convolution neural network
CN116843975A (en) Hyperspectral image classification method combined with spatial pyramid attention mechanism
CN115272670A (en) SAR image ship instance segmentation method based on mask attention interaction
Kumar et al. A hybrid cluster technique for improving the efficiency of colour image segmentation
CN114596463A (en) Image-based land parcel type classification method
CN112329818B (en) Hyperspectral image non-supervision classification method based on graph convolution network embedded characterization
CN112766340B (en) Depth capsule network image classification method and system based on self-adaptive spatial mode
Jiang et al. Semantic segmentation network combined with edge detection for building extraction in remote sensing images
CN105023269A (en) Vehicle-mounted infrared image colorization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210316