CN111612127B - Multi-direction information propagation convolution neural network construction method for hyperspectral image classification - Google Patents
- Publication number
- CN111612127B (application CN202010359251.4A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- dimensional
- network
- hidden layer
- slice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
The invention discloses a method for constructing a multi-directional information propagation convolutional neural network for hyperspectral image classification. The input is a local three-dimensional hyperspectral data cube centered on a target pixel. The deep neural network consists of two-dimensional convolutional multilayer perceptrons between hidden-layer units, two-dimensional convolutional perceptrons inside the hidden-layer units, a pooling layer, and a fully connected layer. The two-dimensional convolutional perceptron inside a hidden-layer unit splits the hidden layer's internal feature map into slices along the row or column direction and performs slice-by-slice convolution between the feature slices in the upward, downward, leftward, and rightward directions, thereby propagating the spatial information of pixels along different directions. The output layer is the class probability vector of the input spectral pixel. Unlike a classical convolutional network, this network forms a spatial information propagation mechanism between feature channels inside each hidden layer, so it can learn more discriminative spatial-spectral features; applied to hyperspectral supervised classification, it greatly improves classification performance with a small number of training samples.
Description
Technical Field
The invention relates to deep neural network technology, and in particular to a method for constructing a multi-directional information propagation convolutional neural network for hyperspectral image classification.
Background
In recent years, the convolutional neural network, as a popular deep learning framework, has gradually become a powerful tool in hyperspectral image analysis and has very broad application prospects in hyperspectral classification. Compared with methods based on shallow representation learning, a convolutional neural network built from deep convolutional perceptrons can adaptively learn hierarchical representations from low-level to high-level features and thereby identify the most discriminative features for the supervised classification of hyperspectral images. The hyperspectral classification task faces the following challenges. First, pixels lie on a high-dimensional complex manifold, and the nonlinear correlation between pixels is more complex than in natural images. Second, the spatial variability of spectral signatures increases intra-class variability. Finally, because the distribution of ground-object types is unbalanced, hyperspectral images typically exhibit class imbalance.
To address these issues, convolutional neural network frameworks with various structures have been proposed in succession to obtain more compact and more discriminative spatial-spectral features. Among the convolutional-neural-network-based methods, the convolution takes many forms; typical examples include the two-dimensional convolutional neural network [Makantasis K, Karantzalos K, et al. Deep Supervised Learning for Hyperspectral Data Classification through Convolutional Neural Networks. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2015: 4959-4962], the three-dimensional convolutional neural network [Li Y, Zhang H, Shen Q. Spectral-Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sensing, 2017, 9(1): 67], and the spatial-spectral residual network [Zhong Z, Li J, et al. Spectral-Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(2): 847-858]. The two-dimensional convolutional neural network uses convolutional layers and multilayer perceptrons to build, layer by layer, high-level features containing rich pixel spatial-spectral information. The three-dimensional convolutional neural network takes the original three-dimensional hyperspectral data directly as network input without hand-crafted feature extraction, and can effectively extract the spatial-spectral features of the hyperspectral image. The spatial-spectral residual network uses spectral and spatial residual blocks to learn deep discriminative features. Although these methods effectively improve hyperspectral classification, many deep features of hyperspectral images remain underexploited; the spatial features in particular are far from fully utilized.
Disclosure of Invention
The invention aims to provide a multi-directional information propagation convolutional neural network for hyperspectral image classification. Unlike a classical convolutional network, it forms a spatial information propagation mechanism between feature channels inside each hidden layer, can learn more discriminative spatial-spectral features, and, applied to hyperspectral supervised classification, greatly improves classification performance with a small number of training samples.
The technical solution that achieves the purpose of the invention is as follows. A method for constructing a multi-directional information propagation convolutional neural network for hyperspectral image classification comprises the following steps:
the input layer is three-dimensional spatial-spectral data centered on a target pixel; that is, the input to the network is a three-dimensional neighborhood pixel block with hundreds of spectral bands;
constructing the multi-directional information propagation convolutional neural network;
accelerating network training with batch normalization, the parametric rectified linear unit (PReLU) activation function, and dropout;
the output layer is the class probability vector of the input spectral pixel; that is, the network outputs the class probability vector of the central pixel of the input three-dimensional neighborhood block, which determines the class of that pixel, and the vector length equals the total number of classes.
Compared with existing classification methods, this deep neural network establishes feature-slice propagation inside each hidden layer and a multi-directional information propagation mechanism between hidden layers. Its advantages are: (1) the two-dimensional convolutional perceptron inside a hidden-layer unit effectively exploits the spatial correlation of hyperspectral image pixels and enriches each pixel's spatial information, and, combined with the two-dimensional convolutional multilayer perceptrons between hidden-layer units, extracts rich and discriminative spatial-spectral features; (2) an effective optimization method gives fast network convergence with few parameters; (3) the network performs well and stably with a small number of training samples and achieves excellent results when applied to hyperspectral image classification. The invention can be widely applied to land-cover classification, environmental monitoring, crop classification, and related fields.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
FIG. 1 is a block diagram of a multi-directional information propagation convolutional neural network structure for hyperspectral image classification according to the present invention.
Fig. 2 is a schematic diagram of the slice-by-slice convolution.
Fig. 3 is a network architecture diagram of the present invention.
FIG. 4(a) is a Pavia University ground truth map.
FIG. 4(b) is a graph of the classification effect of the Pavia University 0.5% training set.
FIG. 4(c) is a graph of the classification effect of the Pavia University 1% training set.
FIG. 4(d) is a Pavia University 5% training set classification effect diagram.
FIG. 5(a) is the Indian Pines ground truth map.
FIG. 5(b) is a graph of the classification effect of Indian Pines 1% training set.
FIG. 5(c) is a diagram of the effect of the Indian Pines 5% training set classification.
FIG. 5(d) is a diagram of the classification effect of Indian Pines 10% training set.
Detailed Description
To address the problem that existing convolutional neural network methods cannot fully exploit the spatial features of a hyperspectral image, the invention provides a method for constructing a multi-directional information propagation convolutional neural network for hyperspectral image classification. The input is a local three-dimensional hyperspectral data cube centered on a target pixel. The deep neural network consists of two-dimensional convolutional multilayer perceptrons between hidden-layer units, two-dimensional convolutional perceptrons inside the hidden-layer units, a pooling layer, and a fully connected layer. The multilayer convolutional perceptron between hidden-layer units convolves the spectral dimension with 1 × 1 two-dimensional convolution kernels. The two-dimensional convolutional perceptron inside a hidden-layer unit splits the hidden layer's internal feature map into slices along the row or column direction and performs slice-by-slice convolution between the feature slices in the upward, downward, leftward, and rightward directions, thereby propagating the spatial information of pixels along different directions. The output layer is the class probability vector of the input spectral pixel. Unlike a classical convolutional network, this network forms a spatial information propagation mechanism between feature channels inside each hidden layer, learns more discriminative spatial-spectral features, and, applied to hyperspectral supervised classification, greatly improves classification performance with a small number of training samples.
Through the novel slice-by-slice convolution structure combined with two-dimensional convolutional layers, the network fully exploits the spatial correlation between hyperspectral image pixels, obtains richer and more distinctive spatial-spectral features, and achieves excellent results in supervised hyperspectral image classification.
The following describes the implementation process of the present invention in detail with reference to fig. 1 and fig. 2, and the steps of the present invention are as follows:
In the first step, the input layer is three-dimensional spatial-spectral data centered on a target pixel; that is, the input to the network is a three-dimensional neighborhood pixel block with hundreds of spectral bands, the number of bands lying in the range [100, 600]. Denote by X ∈ R^(l×l×b) the three-dimensional neighborhood pixel block, where l is the height and width of the block and b is its number of spectral channels.
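Constructing this input block can be sketched as follows. The helper name and the reflection padding at image borders are illustrative assumptions, since the patent does not state how border pixels are handled:

```python
import numpy as np

def extract_block(image, row, col, l):
    """Extract the l x l x b neighborhood block centered on a target pixel.

    `image` is the full hyperspectral cube of shape (H, W, b).  The cube is
    padded by reflection so that border pixels also receive a complete
    neighborhood (an assumption -- the patent does not specify this).
    """
    r = l // 2
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="reflect")
    # after padding, the target pixel sits at (row + r, col + r),
    # so the l x l window starting at (row, col) is centered on it
    return padded[row:row + l, col:col + l, :]
```

The block keeps all b spectral channels, matching the patent's choice of feeding raw spatial-spectral data to the network without preprocessing.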
Secondly, the multi-directional information propagation convolutional neural network is constructed. The deep neural network consists of two-dimensional convolutional multilayer perceptrons between hidden-layer units, two-dimensional convolutional perceptrons inside the hidden-layer units, a pooling layer, and a fully connected layer. The two-dimensional convolutional multilayer perceptron between hidden-layer units and the two-dimensional convolutional perceptron inside a hidden-layer unit have the following structure:
1) The input of a hidden-layer unit is a feature map X of size l × l × m and its output is a feature map O of the same size. The two-dimensional convolutional multilayer perceptron between hidden-layer units applies m two-dimensional convolution kernels of size 1 × 1 to transform the spectral dimension, producing a feature map T of size l × l × m whose i-th channel is σ(BN(X ∗ h_i)); m is the number of input and output channels, an integer greater than 1.
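As a minimal sketch of this 1 × 1 spectral convolution (batch normalization and the activation omitted for brevity; the function name is illustrative): m kernels of size 1 × 1 × m_in are equivalent to multiplying every pixel's channel vector by one shared m_in × m matrix.

```python
import numpy as np

def pointwise_conv(X, H):
    """1x1 two-dimensional convolution across the spectral dimension.

    X: feature map of shape (l, l, m_in); H: kernel matrix of shape
    (m_in, m).  Each pixel's channel vector is mapped by the same matrix,
    which is exactly what m kernels of size 1 x 1 x m_in compute.
    """
    return np.tensordot(X, H, axes=([2], [0]))  # output shape (l, l, m)
```

With H the identity matrix the map is a no-op, which is a quick sanity check that no spatial mixing occurs.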
2) the two-dimensional convolution perceptron inside the hidden layer unit is a feature map obtained after two-dimensional convolutionSlicing in row direction or column direction to obtain characteristic slices with size of l × m, and dividing into upper and lower slicesSequentially performing two-dimensional convolution on each feature slice in the left and right directions, wherein the convolution kernel size is w × m, and 0 is<w is not more than l and is an integer, the number of convolution kernels is m, the same filling mode is adopted to keep the size of the result after convolution consistent with that of the original feature slice, the convolution result with the size of l multiplied by m is linearly added with the next feature slice to obtain an updated feature slice, then two-dimensional convolution is applied to the updated feature slice, the obtained convolution result is used for updating the next slice, the operation is repeated until the last slice is updated, and the specific implementation calculation formula is as follows
(f_1, f_2, …, f_l) = split(T)
f'_k = f_k + σ(BN(f'_(k-1) ∗ W_(k-1))), k = 2, …, l, with f'_1 = f_1
O = CON(f_1, f'_2, …, f'_l)
where ∗ denotes the convolution operation, h_i the i-th convolution kernel of the two-dimensional convolution layer, BN(·) batch normalization, σ a nonlinear activation function, split(·) the operation that slices the feature map output by the previous layer along the row or column direction of the image, f_k the k-th feature slice of the sliced feature map, f'_k the updated k-th feature slice, W_(k-1) the convolution kernel applied to the (k-1)-th slice in the slice-by-slice convolution, and CON(·) the operation that re-concatenates the feature slices into a feature map.
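A minimal numpy sketch of one of the four directions (top to bottom) of the slice-by-slice convolution described above. Batch normalization is omitted and ReLU stands in for σ, and all names are illustrative; this is a sketch of the update rule, not the patent's implementation:

```python
import numpy as np

def conv1d_same(x, W):
    """'Same'-padded convolution of one feature slice.

    x: slice of shape (l, m_in); W: kernel of shape (w, m_in, m_out).
    Zero padding keeps the output length equal to l.
    """
    w = W.shape[0]
    pad = w // 2
    xp = np.pad(x, ((pad, w - 1 - pad), (0, 0)))
    out = np.zeros((x.shape[0], W.shape[2]))
    for i in range(x.shape[0]):
        # correlate the length-w window with every kernel at once
        out[i] = np.tensordot(xp[i:i + w], W, axes=([0, 1], [0, 1]))
    return out

def slicewise_conv_down(T, kernels, act=lambda z: np.maximum(z, 0)):
    """Top-to-bottom slice-by-slice convolution.

    T: feature map of shape (l, l, m), split along the row axis into l
    slices of shape (l, m).  Each slice is updated with the convolved
    previous (updated) slice, following f'_k = f_k + act(conv(f'_{k-1})).
    kernels: list of l-1 arrays, each of shape (w, m, m).
    """
    slices = [T[k] for k in range(T.shape[0])]           # split(T)
    updated = [slices[0]]                                # f'_1 = f_1
    for k in range(1, len(slices)):
        updated.append(slices[k] + act(conv1d_same(updated[-1], kernels[k - 1])))
    return np.stack(updated, axis=0)                     # CON(...)
```

The other three directions follow by slicing bottom-to-top or along the column axis; running all four lets each pixel's feature receive information from the whole neighborhood.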
Thirdly, batch normalization, the parametric rectified linear unit activation function, and dropout are used to accelerate network training. The parametric rectified linear unit activation function, abbreviated PReLU(x_i), is defined as:
PReLU(x_i) = x_i, if x_i > 0; a_i x_i, if x_i ≤ 0
where x_i is the input of the activation function on the i-th channel and a_i is a learnable parameter that determines the slope of the negative part. a_i is updated by the momentum method:
Δa_i ← μ Δa_i + lr · ∂L/∂a_i
where μ is the momentum, with value range [0, 1]; lr is the learning rate of the network, with value range [0, 0.0005]; and a_i is initialized to 0.25.
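The activation and its slope update can be sketched as follows. The exact sign convention of the momentum step is an assumption (the common momentum-SGD form is used), and the class and function names are illustrative:

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: f(x) = x for x > 0, a*x otherwise (per-channel a)."""
    return np.where(x > 0, x, a * x)

class PReLUSlope:
    """Momentum update of the learnable negative slope a_i:
    v <- mu*v + lr*grad;  a <- a - v.  Initial value a_i = 0.25,
    matching the text; mu and lr defaults are illustrative."""
    def __init__(self, n_channels, mu=0.9, lr=0.0005):
        self.a = np.full(n_channels, 0.25)
        self.v = np.zeros(n_channels)
        self.mu, self.lr = mu, lr

    def step(self, grad):
        # grad is dL/da_i accumulated over the batch
        self.v = self.mu * self.v + self.lr * grad
        self.a = self.a - self.v
```

Because a_i is learned per channel, the network chooses how much negative signal each feature channel passes through.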
Fourthly, the output layer is the class probability vector of the input spectral pixel; that is, the network outputs the class probability vector of the central pixel of the input three-dimensional neighborhood block, which determines the class of that pixel, and the vector length equals the total number of classes. Let X ∈ R^(l×l×b) be the three-dimensional neighborhood pixel block input to the network, whose target pixel may belong to one of C different classes; the output layer of the network is Y ∈ R^C, giving the probability that the pixel belongs to each class. Y can be expressed as:
Y = FC(P(O′)) = [y_1, y_2, …, y_C]
where y_c is the probability that the pixel belongs to class c, P(·) denotes pooling-layer processing, and FC(·) denotes the fully connected operation.
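The output mapping Y = FC(P(O′)) can be sketched as below. Global average pooling for P(·) and a softmax after the fully connected layer are assumptions; the patent only states that Y holds class probabilities:

```python
import numpy as np

def class_probabilities(O, W, b):
    """Output layer: pool the final feature map O of shape (l, l, m) over
    its spatial dimensions, apply a fully connected layer (W: (m, C),
    b: (C,)), and normalize with a softmax to a length-C probability
    vector.  Pooling type and softmax are illustrative assumptions.
    """
    pooled = O.mean(axis=(0, 1))        # P(O'): spatial average, shape (m,)
    logits = pooled @ W + b             # FC(.): affine map to C classes
    e = np.exp(logits - logits.max())   # stabilized softmax
    return e / e.sum()
```

The predicted class is then simply the argmax of the returned vector.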
In the invention, a novel slice-by-slice convolution structure is embedded in the hidden feature layers of a convolutional neural network. This structure uses convolution operations between the feature slices of a feature map to propagate spatial feature information, so that the spatial information of each pixel becomes richer. Combining conventional layer-by-layer convolution with the proposed slice-by-slice convolution markedly improves feature learning, and the network obtains richer and more discriminative spatial-spectral features, thereby improving hyperspectral image classification performance. The network is an end-to-end supervised classification model: the input needs no preprocessing, training is efficient and time-saving, the output is simple and clear, and the model is stable and robust, so it can be widely applied in engineering.
The effect of the present invention can be further illustrated by the following simulation experiments:
examples
Hyperspectral images are typical three-dimensional spatial-spectral data. Verification experiments were carried out on two commonly used hyperspectral datasets: the Indian Pines dataset and the Pavia University dataset. The Pavia University dataset was acquired by the ROSIS sensor over Pavia, Italy; it contains 115 bands in total and the image size is 610 × 340. After removing noisy bands, the remaining 103 bands were used; since the image contains many background pixels, the ground-object pixels actually used in the classification experiments number 42776, covering 9 ground-object classes. The Indian Pines dataset is a hyperspectral remote sensing image acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Indian Pines test site in Indiana, USA. The image contains 220 bands in total, the spatial resolution is 20 m, and the image size is 145 × 145. After removing 20 water-absorption and low signal-to-noise-ratio bands, the remaining 200 bands were used. The region contains 16 known ground-object classes and 10249 labeled samples. No preprocessing was applied in the experiments on either dataset. On the Pavia University dataset, 0.5%, 1%, and 5% of the samples were randomly selected as training sets, 1% as the validation set, and the rest as the test set. On the Indian Pines dataset, 1%, 5%, and 10% of the samples were randomly selected as training sets, 1% as the validation set, and the rest as the test set. Each experiment was repeated 10 times and the results averaged as the final result; overall accuracy (OA) and average accuracy (AA) were used as the evaluation indices of classification performance.
All experiments were performed on the same equipment in the same environment: Windows 10 operating system, CPU i5-8400, GPU NVIDIA GeForce GTX 1060, 8 GB memory, Python 3.5 + TensorFlow. The network structure used in the experiments is shown in fig. 3: three-dimensional neighborhood pixel blocks are extracted from the original hyperspectral image as network input; four rounds of two-dimensional convolution and slice-by-slice convolution in different directions are performed; and after pooling, dimensionality reduction, full connection, dropout, and related operations, the soft classification probability of a single sample is obtained.
Table 1 shows the classification accuracy results obtained by performing the validation experiment on the two data sets by the method of the present invention.
TABLE 1
The classification results show that the proposed method performs well on both the Pavia University and Indian Pines datasets. With 5% of the Pavia University samples and 10% of the Indian Pines samples used as training sets, the classification accuracy reaches 99%, far exceeding traditional hyperspectral image classification methods and demonstrating the feasibility of the method. Moreover, with few training samples — 1% and 0.5% of the Pavia University samples and 5% and 1% of the Indian Pines samples as training sets — the classification results remain high, showing that the method still achieves excellent results with few training samples and has the advantage of high stability. The experimental results on the two datasets are shown in fig. 4(a)-4(d) and fig. 5(a)-5(d); the classification maps show that the method achieves a good classification effect on both datasets.
Claims (3)
1. A construction method of a multi-directional information propagation convolutional neural network for hyperspectral image classification is characterized by comprising the following steps:
the input layer is three-dimensional spatial-spectral data centered on a target pixel; that is, the input to the network is a three-dimensional neighborhood pixel block X ∈ R^(l×l×b) with hundreds of spectral bands, where l is the height and width of the block and b is its number of spectral channels;
constructing the multi-directional information propagation convolutional neural network, which consists of two-dimensional convolutional multilayer perceptrons between hidden-layer units, two-dimensional convolutional perceptrons inside the hidden-layer units, a pooling layer, and a fully connected layer; the two-dimensional convolutional multilayer perceptron between hidden-layer units and the two-dimensional convolutional perceptron inside a hidden-layer unit have the following structure:
1) the input of a hidden-layer unit is a feature map X of size l × l × m and its output is a feature map O of the same size; the two-dimensional convolutional multilayer perceptron between hidden-layer units applies m two-dimensional convolution kernels of size 1 × 1 to transform the spectral dimension, producing a feature map T of size l × l × m whose i-th channel is σ(BN(X ∗ h_i)); m is the number of input and output channels, an integer greater than 1;
2) the two-dimensional convolutional perceptron inside the hidden-layer unit splits the feature map T obtained by the two-dimensional convolution into slices along the row or column direction, giving feature slices of size l × m, and performs a two-dimensional convolution on each feature slice in turn in the upward, downward, leftward, or rightward direction; the convolution kernel size is w × m, with 0 < w ≤ l an integer, the number of kernels is m, and "same" padding keeps the convolved result the same size as the original feature slice; the convolution result of size l × m is added linearly to the next feature slice to give an updated slice, a two-dimensional convolution is then applied to the updated slice and its result updates the following slice, and this is repeated until the last slice has been updated; the computation is as follows:
(f_1, f_2, …, f_l) = split(T)
f'_k = f_k + σ(BN(f'_(k-1) ∗ W_(k-1))), k = 2, …, l, with f'_1 = f_1
O = CON(f_1, f'_2, …, f'_l)
where ∗ denotes the convolution operation, h_i the i-th convolution kernel of the two-dimensional convolution layer, BN(·) batch normalization, σ a nonlinear activation function, split(·) the operation that slices the feature map output by the previous layer along the row or column direction of the image, f_k the k-th feature slice of the sliced feature map, f'_k the updated k-th feature slice, W_(k-1) the convolution kernel applied to the (k-1)-th slice in the slice-by-slice convolution, and CON(·) the operation that re-concatenates the feature slices into a feature map;
accelerating network training with batch normalization, the parametric rectified linear unit activation function, and dropout;
the output layer is the class probability vector of the input spectral pixel; that is, the network outputs the class probability vector of the central pixel of the input three-dimensional neighborhood block, which determines the class of that pixel, and the vector length equals the total number of classes.
2. The method for constructing the multi-directional information propagation convolutional neural network for hyperspectral image classification according to claim 1, wherein batch normalization, the parametric rectified linear unit activation function, and dropout are used to accelerate network training; the parametric rectified linear unit activation function, abbreviated PReLU(x_i), is defined as:
PReLU(x_i) = x_i, if x_i > 0; a_i x_i, if x_i ≤ 0
where x_i is the input of the activation function on the i-th channel and a_i is a learnable parameter that determines the slope of the negative part; a_i is updated by the momentum method:
Δa_i ← μ Δa_i + lr · ∂L/∂a_i
where μ is the momentum, with value range [0, 1]; lr is the learning rate of the network, with value range [0, 0.0005]; and a_i is initialized to 0.25.
3. The method for constructing the multi-directional information propagation convolutional neural network for hyperspectral image classification according to claim 1, wherein the output layer is the class probability vector of the input spectral pixel; that is, the network outputs the class probability vector of the central pixel of the input three-dimensional neighborhood block, which determines the class of that pixel, and the vector length equals the total number of classes; let X ∈ R^(l×l×b) be the three-dimensional neighborhood pixel block input to the network, whose target pixel may belong to one of C different classes; the output layer of the network is Y ∈ R^C, giving the probability that the pixel belongs to each class; Y is expressed as:
Y = FC(P(O′)) = [y_1, y_2, …, y_C]
where y_c is the probability that the pixel belongs to class c, P(·) denotes pooling-layer processing, and FC(·) denotes the fully connected operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010359251.4A CN111612127B (en) | 2020-04-29 | 2020-04-29 | Multi-direction information propagation convolution neural network construction method for hyperspectral image classification |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010359251.4A CN111612127B (en) | 2020-04-29 | 2020-04-29 | Multi-direction information propagation convolution neural network construction method for hyperspectral image classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111612127A CN111612127A (en) | 2020-09-01 |
CN111612127B true CN111612127B (en) | 2022-09-06 |
Family
ID=72201275
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010359251.4A Active CN111612127B (en) | 2020-04-29 | 2020-04-29 | Multi-direction information propagation convolution neural network construction method for hyperspectral image classification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111612127B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109376753A (en) * | 2018-08-31 | 2019-02-22 | 南京理工大学 | A kind of the three-dimensional space spectrum separation convolution depth network and construction method of dense connection |
CN110533077A (en) * | 2019-08-01 | 2019-12-03 | 南京理工大学 | Form adaptive convolution deep neural network method for classification hyperspectral imagery |
-
2020
- 2020-04-29 CN CN202010359251.4A patent/CN111612127B/en active Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||