CN110837808A - Hyperspectral image classification method based on improved capsule network model - Google Patents
Hyperspectral image classification method based on improved capsule network model
- Publication number: CN110837808A (application number CN201911094708.7A)
- Authority
- CN
- China
- Prior art keywords
- layer
- hyperspectral image
- model
- convolution
- multiplied
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/194—Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
Abstract
The invention relates to a hyperspectral image classification method based on an improved capsule network model. Its basic idea is as follows: a hyperspectral image block is first reduced in dimensionality with a 1 × 1 convolution kernel; primary features of the dimensionality-reduced image are then extracted by a dual-channel convolutional neural network; the primary feature information is next encapsulated into capsule vectors at the PrimaryCaps layer; finally, the class of the central pixel of the image block is determined by computing the module lengths of the capsule vectors at the DigitCaps layer. Compared with the prior art, the method extracts the spectral and spatial features of the hyperspectral image more fully and identifies the spatial position information among the features, thereby improving classification accuracy.
Description
Technical Field
The invention belongs to the technical field of image processing and particularly relates to a hyperspectral image classification method based on an improved capsule network model.
Background
Hyperspectral images contain abundant spectral and spatial information and are widely applied in many fields. Classifying each pixel of a hyperspectral image is an important topic in hyperspectral image research. However, hyperspectral images have many bands and a large data volume, so traditional classification methods cannot effectively extract image features and therefore cannot classify the image by ground objects. At present, the convolutional neural network (CNN) can extract features of an image at different levels and is widely used in image processing, and many researchers have begun to classify hyperspectral images with CNNs.
Research shows that although CNNs have strong feature extraction capability, they cannot sufficiently extract the feature information of hyperspectral images, which results in low classification accuracy. In recent years, researchers at home and abroad have proposed classification methods that combine the spatial and spectral information of the image, significantly improving classification accuracy; however, most of these methods cannot effectively identify the spatial position, translation, and rotation relations among features, which limits the classification capability of the model.
A capsule network can better extract features from small-sample hyperspectral training data and, at the same time, capture the spatial positions among the features, thereby improving the classification performance of the model on small-sample hyperspectral images. Based on these advantages of the capsule network, an improved capsule network model (abbreviated ICAP) is proposed by combining it with a dual-channel convolutional network. The model fully extracts the spectral and spatial features of the hyperspectral image while taking the spatial position relations among the features into account, reducing the classification error on hyperspectral images.
Disclosure of Invention
The invention aims to provide a hyperspectral image classification method based on an improved capsule network model, solving the problem that a traditional convolutional neural network can neither effectively extract the spectral and spatial features of a hyperspectral image nor identify the spatial positions among the features.
The purpose of the invention can be realized by the following technical scheme:
A hyperspectral image classification method based on an improved capsule network model uses a capsule network to classify hyperspectral images. The capsule network comprises one 1 × 1 convolutional layer and 2 feature extraction channels, the 2 feature extraction channels being channel one and channel two respectively, and each channel comprising 2 convolutional layers, 1 average pooling layer, and 1 PrimaryCaps layer.
Further, the 1 × 1 convolutional layer has 64 convolution kernels; channel one comprises, in order, a first convolutional layer with 16 convolution kernels of size 5 × 5, a second convolutional layer with 16 convolution kernels of size 5 × 5, an average pooling layer of size 2 × 2, and a PrimaryCaps layer with 16 convolution kernels of size 5 × 5; channel two comprises, in order, a first convolutional layer with 16 convolution kernels of size 7 × 7, a second convolutional layer with 16 convolution kernels of size 7 × 7, an average pooling layer of size 2 × 2, and a PrimaryCaps layer with 16 convolution kernels of size 7 × 7.
Further, the network also comprises a fusion layer for concatenating the feature data of the 2 channels and a DigitCaps layer for computing the existence probability of the ground-object class represented by each capsule.
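The layer sizes can be checked with simple shape arithmetic. The sketch below is an assumption-laden reading of the architecture: the text does not state padding or stride, so 'same'-padded convolutions and a valid, stride-2 average pool are assumed, under which the 27 × 27 input becomes the 13 × 13 maps mentioned later, and a valid 7 × 7 PrimaryCaps convolution then yields the 7 × 7 capsule grid.

```python
# Shape arithmetic for the channel-two pipeline. The padding and stride
# choices below are assumptions; the source does not specify them.

def conv_same(n: int) -> int:
    """A 'same'-padded convolution keeps the spatial size."""
    return n

def avg_pool(n: int, k: int = 2, s: int = 2) -> int:
    """Valid pooling: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

def conv_valid(n: int, k: int, s: int = 1) -> int:
    """Unpadded convolution: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

n = 27                    # 27 x 27 input block (after the 1 x 1 layer)
n = conv_same(n)          # first 7 x 7 convolution  -> 27
n = conv_same(n)          # second 7 x 7 convolution -> 27
n = avg_pool(n)           # 2 x 2 average pooling    -> 13
caps = conv_valid(n, 7)   # PrimaryCaps 7 x 7 kernel -> 7
print(n, caps)            # 13 7
```

Note that for channel one's 5 × 5 PrimaryCaps kernel a valid convolution would give 9 × 9 rather than 7 × 7, so some unstated padding or stride choice must differ there; the sketch only reproduces the channel-two numbers.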
A hyperspectral image classification method based on an improved capsule network model comprises the following steps:
(1) image pre-processing
Taking the Indian Pines hyperspectral data set as an example, the method is used to classify the ground objects in that data set. Before the hyperspectral image is input into the model, edge padding and data normalization are applied to the image; hyperspectral image blocks of size 27 × 27 are then extracted and divided into a training set, a verification set, and a test set.
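As an illustration of this preprocessing step, the following NumPy sketch applies per-band min-max normalization, edge padding, and extraction of a 27 × 27 block around every pixel. A random array stands in for the hyperspectral cube (the real Indian Pines cube is 145 × 145 with about 200 retained bands); the function name and exact normalization are illustrative assumptions.

```python
import numpy as np

def extract_blocks(cube: np.ndarray, patch: int = 27) -> np.ndarray:
    """Normalize each band to [0, 1], edge-pad, and cut one patch per pixel."""
    h, w, _ = cube.shape
    mn = cube.min(axis=(0, 1), keepdims=True)
    mx = cube.max(axis=(0, 1), keepdims=True)
    cube = (cube - mn) / (mx - mn + 1e-12)
    r = patch // 2                                    # 13 for a 27 x 27 patch
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="edge")
    blocks = np.stack([
        padded[i:i + patch, j:j + patch, :]
        for i in range(h) for j in range(w)
    ])
    return blocks

cube = np.random.rand(10, 10, 8)     # toy stand-in for a hyperspectral cube
blocks = extract_blocks(cube)
print(blocks.shape)                  # (100, 27, 27, 8): one block per pixel
```

Each block is centred on the pixel it will classify, which is why edge padding by half the patch size is needed before extraction.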
(2) Training model
1) Image block dimension reduction
Dimensionality reduction is performed on the hyperspectral image blocks in the training set with a 1 × 1 convolution kernel, yielding 64 convolution feature maps.
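A 1 × 1 convolution is a per-pixel linear map across the spectral bands, so the dimensionality reduction can be illustrated with a single matrix product applied at every pixel. The weights below are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
block = rng.random((27, 27, 200))   # one image block with 200 spectral bands
w = rng.random((200, 64)) * 0.1     # 1 x 1 kernel = a bands-by-filters matrix

# The 1 x 1 convolution applies the same band-mixing matrix at every pixel,
# reducing 200 spectral channels to 64 feature maps.
reduced = np.einsum("hwc,cf->hwf", block, w)
print(reduced.shape)                # (27, 27, 64): 64 convolution feature maps

# Equivalent per-pixel view: each pixel's spectrum times the weight matrix.
same = block[0, 0] @ w
```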
2) Feature extraction
The 64 feature maps are input into the two feature extraction channels respectively; after processing by 2 convolutional layers and 1 average pooling layer, 16 convolution feature maps of size 13 × 13 are obtained.
3) Feature fusion
At the PrimaryCaps layer, the 16 feature maps are encapsulated into a four-dimensional tensor of size 2 × 8 × 7 × 7, and the output tensors of the two channels are concatenated along the first dimension, yielding a tensor of size 4 × 8 × 7 × 7.
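The encapsulation and concatenation can be illustrated with plain array reshapes. The grouping of the 16 maps into 2 blocks of 8-dimensional capsules follows the stated 2 × 8 × 7 × 7 shape; the exact reshape order inside the model is an assumption.

```python
import numpy as np

ch1 = np.random.rand(16, 7, 7)      # channel one: 16 PrimaryCaps feature maps
ch2 = np.random.rand(16, 7, 7)      # channel two: 16 PrimaryCaps feature maps

caps1 = ch1.reshape(2, 8, 7, 7)     # 16 maps -> 2 blocks of 8-D capsules
caps2 = ch2.reshape(2, 8, 7, 7)
fused = np.concatenate([caps1, caps2], axis=0)   # splice on the first dim
print(fused.shape)                  # (4, 8, 7, 7)
```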
4) Calculating a loss function
The 4 × 8 × 7 × 7 tensor is input into the DigitCaps layer, yielding 16 class capsule vectors. The module length of each capsule vector represents the existence probability of the corresponding class, and the margin loss is computed from these module lengths.
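A minimal sketch of this readout, assuming the margin loss of the original capsule-network formulation with its commonly used constants m⁺ = 0.9, m⁻ = 0.1, λ = 0.5 (the source does not give the constants, so they are an assumption here):

```python
import numpy as np

def margin_loss(lengths, onehot, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Capsule-network margin loss computed from class-capsule module lengths.
    Constants follow common defaults; they are not stated in the source."""
    pos = onehot * np.maximum(0.0, m_pos - lengths) ** 2
    neg = lam * (1.0 - onehot) * np.maximum(0.0, lengths - m_neg) ** 2
    return float(np.sum(pos + neg))

caps = np.random.rand(16, 8)               # 16 class capsules, 8-D each
lengths = np.linalg.norm(caps, axis=-1)    # module length per class
pred = int(np.argmax(lengths))             # predicted class: longest capsule

onehot = np.zeros(16)
onehot[pred] = 1.0                         # stand-in for the true label
loss = margin_loss(lengths, onehot)
print(pred, loss)
```

The predicted class of the central pixel is simply the capsule with the largest module length, matching the decision rule described in the abstract.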
5) Updating model parameters
Model parameters are continuously optimized with the Adam optimizer, and the classification model with the minimum loss on the verification set is saved as the final test model.
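The model-selection rule, keep the checkpoint whose validation loss is minimal, can be sketched independently of any framework. The per-epoch losses below are made up; in the real pipeline they would come from evaluating the Adam-trained model on the verification set after each epoch.

```python
def best_checkpoint(val_losses):
    """Index and value of the epoch with minimum validation loss."""
    best = min(range(len(val_losses)), key=lambda e: val_losses[e])
    return best, val_losses[best]

val_losses = [0.92, 0.55, 0.48, 0.51, 0.47, 0.49]   # illustrative only
epoch, loss = best_checkpoint(val_losses)
print(epoch, loss)   # -> 4 0.47
```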
(3) Test model
The hyperspectral image blocks of the test set are input into the trained model to obtain the final classification results. Inputting all image blocks into the model yields a hyperspectral image classification effect map.
The invention has the beneficial effects that:
the invention provides a hyperspectral image classification method based on an improved capsule network model, aiming at the problem that the traditional CNN model cannot fully extract hyperspectral image characteristic information, so that the image classification precision is not high. The selection of the multi-scale convolution kernel in the improved model effectively improves the detail extraction of the hyperspectral image feature information, fully utilizes the extraction capability of the capsule network to the hyperspectral image detail features, extracts the spatial position relation among the features and improves the image classification precision.
Drawings
Fig. 1 is a diagram of an improved capsule network architecture.
Fig. 2 is a structural diagram of the 2D-CNN model.
Fig. 3 is a diagram of the Indian Pines classification results.
Detailed Description
Example:
the model structure of this embodiment is shown in fig. 1, and the specific implementation steps are as follows:
A hyperspectral image classification method based on an improved capsule network model uses a capsule network to classify hyperspectral images. The capsule network comprises one 1 × 1 convolutional layer and 2 feature extraction channels, the 2 feature extraction channels being channel one and channel two respectively, and each channel comprising 2 convolutional layers, 1 average pooling layer, and 1 PrimaryCaps layer.
Further, the 1 × 1 convolutional layer has 64 convolution kernels; channel one comprises, in order, a first convolutional layer with 16 convolution kernels of size 5 × 5, a second convolutional layer with 16 convolution kernels of size 5 × 5, an average pooling layer of size 2 × 2, and a PrimaryCaps layer with 16 convolution kernels of size 5 × 5; channel two comprises, in order, a first convolutional layer with 16 convolution kernels of size 7 × 7, a second convolutional layer with 16 convolution kernels of size 7 × 7, an average pooling layer of size 2 × 2, and a PrimaryCaps layer with 16 convolution kernels of size 7 × 7.
Further, the network also comprises a fusion layer for concatenating the feature data of the 2 channels and a DigitCaps layer for computing the existence probability of the ground-object class represented by each capsule.
A hyperspectral image classification method based on an improved capsule network model comprises the following steps:
(1) Image pre-processing
Taking the Indian Pines hyperspectral data set as an example, the method is used to classify the ground objects in that data set. Before the hyperspectral image is input into the model, edge padding and data normalization are applied to the image; hyperspectral image blocks of size 27 × 27 are then extracted and divided into a training set, a verification set, and a test set.
(2) Training model
1) Image block dimension reduction
Dimensionality reduction is performed on the hyperspectral image blocks in the training set with a 1 × 1 convolution kernel, yielding 64 convolution feature maps.
2) Feature extraction
The 64 feature maps are input into the two feature extraction channels respectively; after processing by 2 convolutional layers and 1 average pooling layer, 16 convolution feature maps of size 13 × 13 are obtained.
3) Feature fusion
At the PrimaryCaps layer, the 16 feature maps are encapsulated into a four-dimensional tensor of size 2 × 8 × 7 × 7, and the output tensors of the two channels are concatenated along the first dimension, yielding a tensor of size 4 × 8 × 7 × 7.
4) Calculating the loss function
The 4 × 8 × 7 × 7 tensor is input into the DigitCaps layer, yielding 16 class capsule vectors. The module length of each capsule vector represents the existence probability of the corresponding class, and the margin loss is computed from these module lengths.
5) Updating model parameters
Model parameters are continuously optimized with the Adam optimizer, and the classification model with the minimum loss on the verification set is saved as the final test model.
(3) Test model
The hyperspectral image blocks of the test set are input into the trained model to obtain the final classification results. Inputting all image blocks into the model yields a hyperspectral image classification effect map.
The hyperspectral image classification method based on the improved capsule network model extracts the spatial neighborhood information and the spectral information of hyperspectral image pixels while taking the position relations among the features into account; adding the 1 × 1 convolutional layer and batch normalization layers to the model alleviates overfitting and further improves the classification capability of the model. The superiority of the method of the invention over other methods is illustrated by the following set of experiments.
The Indian Pines data set was randomly divided into a 10% training set, a 10% validation set, and an 80% test set, with the number of training samples for the classes Grass-pasture-mowed and Oats increased to 5 and the remaining classes unchanged. To verify the image classification capability of the improved model, a 2D-CNN model was designed; Fig. 2 shows its structure. The number of neurons in its fully connected layer is set to 128, a Dropout layer with rate 0.5 follows the fully connected layer, and the remaining parameters are the same as those of the ICAP model. In addition, to verify the feature extraction capability of the dual-channel network, two single-channel capsule networks, denoted CAP-1 and CAP-2, were also designed: CAP-1 retains only channel one, CAP-2 retains only channel two, and the remaining parameters are the same as those of the ICAP model.
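The per-class split with a floor on the training count, as applied to the very small Grass-pasture-mowed and Oats classes, can be sketched as follows. The labels are synthetic and the function and parameter names are illustrative, not taken from the source.

```python
import numpy as np

def split_indices(labels, train_frac=0.1, val_frac=0.1, min_train=5, seed=0):
    """Per-class 10/10/80 split with a floor on the training-sample count."""
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Tiny classes still get at least `min_train` training samples.
        n_train = max(min_train, int(round(train_frac * len(idx))))
        n_val = int(round(val_frac * len(idx)))
        train += idx[:n_train].tolist()
        val += idx[n_train:n_train + n_val].tolist()
        test += idx[n_train + n_val:].tolist()
    return train, val, test

labels = np.repeat([0, 1, 2], [1000, 40, 20])   # one large, two tiny classes
tr, va, te = split_indices(labels)
print(len(tr), len(va), len(te))                # 110 106 844
```

With the floor in place, the 40-sample and 20-sample classes contribute 5 training samples each instead of the 4 and 2 a plain 10% split would give.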
Fig. 3 shows the classification results of each model on the Indian Pines data set; Fig. 3(a) shows the ground-truth labeling. It can be seen from the figure that the ICAP model of the invention achieves a better classification effect on Indian Pines than 2D-CNN, CAP-1, and CAP-2. Table 1 lists the classification accuracy of each model on the Indian Pines data set; as can be seen, the classification accuracy of the ICAP model is superior to that of the other models. The OA, AA, and Kappa coefficients of the ICAP model on Indian Pines exceed those of the 2D-CNN, which shows that, compared with a 2D-CNN, the capsule network better extracts the spectral and spatial information in the image and identifies the spatial position, translation, and rotation relations among the features, improving the image classification capability of the model. Moreover, compared with the CAP-1 and CAP-2 models, the ICAP model improves OA, AA, and Kappa to a certain extent: the dual-channel model with 5 × 5 and 7 × 7 convolution kernels extracts primary image information at multiple scales, reducing information loss in the convolution process and improving the classification accuracy of the model.
Table 1. Classification accuracy comparison of the models (%)
The capsule network is currently a research hotspot in deep learning and has great development potential in hyperspectral image classification. In summary, the hyperspectral image classification method based on the improved capsule network model has good generalization capability; it effectively extracts image features and also identifies the spatial position information among the features, thereby improving classification accuracy.
Claims (4)
1. A hyperspectral image classification method based on an improved capsule network model, characterized in that: the method uses a capsule network to classify hyperspectral images, the capsule network comprising one 1 × 1 convolutional layer and 2 feature extraction channels, the 2 feature extraction channels being channel one and channel two respectively, and each channel comprising 2 convolutional layers, 1 average pooling layer, and 1 PrimaryCaps layer.
2. The hyperspectral image classification method based on the improved capsule network model according to claim 1, characterized in that: the 1 × 1 convolutional layer has 64 convolution kernels; channel one comprises, in order, a first convolutional layer with 16 convolution kernels of size 5 × 5, a second convolutional layer with 16 convolution kernels of size 5 × 5, an average pooling layer of size 2 × 2, and a PrimaryCaps layer with 16 convolution kernels of size 5 × 5; channel two comprises, in order, a first convolutional layer with 16 convolution kernels of size 7 × 7, a second convolutional layer with 16 convolution kernels of size 7 × 7, an average pooling layer of size 2 × 2, and a PrimaryCaps layer with 16 convolution kernels of size 7 × 7.
3. The hyperspectral image classification method based on the improved capsule network model according to claim 1 or 2, characterized in that: the capsule network further comprises a fusion layer for concatenating the feature data of the 2 channels and a DigitCaps layer for computing the existence probability of the ground-object class represented by each capsule.
4. The hyperspectral image classification method based on the improved capsule network model according to any one of claims 1 to 3, characterized in that the method comprises the following steps:
(1) image pre-processing
Taking the Indian Pines hyperspectral data set as an example, the method is used to classify the ground objects in that data set. Before the hyperspectral image is input into the model, edge padding and data normalization are applied to the image; hyperspectral image blocks of size 27 × 27 are then extracted and divided into a training set, a verification set, and a test set.
(2) Training model
1) Image block dimension reduction
Dimensionality reduction is performed on the hyperspectral image blocks in the training set with a 1 × 1 convolution kernel, yielding 64 convolution feature maps.
2) Feature extraction
The 64 feature maps are input into the two feature extraction channels respectively; after processing by 2 convolutional layers and 1 average pooling layer, 16 convolution feature maps of size 13 × 13 are obtained.
3) Feature fusion
At the PrimaryCaps layer, the 16 feature maps are encapsulated into a four-dimensional tensor of size 2 × 8 × 7 × 7, and the output tensors of the two channels are concatenated along the first dimension, yielding a tensor of size 4 × 8 × 7 × 7.
4) Calculating the loss function
The 4 × 8 × 7 × 7 tensor is input into the DigitCaps layer, yielding 16 class capsule vectors. The module length of each capsule vector represents the existence probability of the corresponding class, and the margin loss is computed from these module lengths.
5) Updating model parameters
Model parameters are continuously optimized with the Adam optimizer, and the classification model with the minimum loss on the verification set is saved as the final test model.
(3) Test model
The hyperspectral image blocks of the test set are input into the trained model to obtain the final classification results. Inputting all image blocks into the model yields a hyperspectral image classification effect map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911094708.7A CN110837808A (en) | 2019-11-11 | 2019-11-11 | Hyperspectral image classification method based on improved capsule network model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911094708.7A CN110837808A (en) | 2019-11-11 | 2019-11-11 | Hyperspectral image classification method based on improved capsule network model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110837808A true CN110837808A (en) | 2020-02-25 |
Family
ID=69574973
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911094708.7A Pending CN110837808A (en) | 2019-11-11 | 2019-11-11 | Hyperspectral image classification method based on improved capsule network model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110837808A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111431938A (en) * | 2020-04-24 | 2020-07-17 | 重庆邮电大学 | Industrial internet intrusion detection method based on capsule network |
CN111582387A (en) * | 2020-05-11 | 2020-08-25 | 吉林大学 | Rock spectral feature fusion classification method and system |
CN111985575A (en) * | 2020-09-02 | 2020-11-24 | 四川九洲电器集团有限责任公司 | Hyperspectral image classification method based on convolutional neural network |
CN112348038A (en) * | 2020-11-30 | 2021-02-09 | 江苏海洋大学 | Visual positioning method based on capsule network |
CN113920393A (en) * | 2021-09-18 | 2022-01-11 | 广东工业大学 | Hyperspectral remote sensing image classification method based on global capsule neural network |
CN113920393B (en) * | 2021-09-18 | 2024-10-22 | 广东工业大学 | Hyperspectral remote sensing image classification method based on global capsule neural network |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017215284A1 (en) * | 2016-06-14 | 2017-12-21 | 山东大学 | Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network |
CN110084159A (en) * | 2019-04-15 | 2019-08-02 | 西安电子科技大学 | Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint |
CN110309811A (en) * | 2019-07-10 | 2019-10-08 | 哈尔滨理工大学 | A kind of hyperspectral image classification method based on capsule network |
Non-Patent Citations (2)
Title |
---|
- 曾锐, 陈锻生: "Hyperspectral remote sensing image classification combining dual deep-learning features" (结合双深度学习特征的高光谱遥感图像分类) *
- 朱应钊, 胡颖茂, 李嫚: "Research on capsule network technology and its development trends" (胶囊网络技术及发展趋势研究) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20200225 |