CN111652039A - Hyperspectral remote sensing ground object classification method based on residual network and feature fusion module


Info

Publication number
CN111652039A
CN111652039A (application CN202010283765.6A)
Authority
CN
China
Prior art keywords
residual
residual error
remote sensing
feature fusion
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010283765.6A
Other languages
Chinese (zh)
Other versions
CN111652039B (en)
Inventor
韩彦岭
崔鹏霞
王静
张云
洪中华
曹守启
周汝雁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ocean University
Original Assignee
Shanghai Ocean University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Ocean University filed Critical Shanghai Ocean University
Priority to CN202010283765.6A priority Critical patent/CN111652039B/en
Publication of CN111652039A publication Critical patent/CN111652039A/en
Application granted granted Critical
Publication of CN111652039B publication Critical patent/CN111652039B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral remote sensing ground object classification method based on a residual network and feature fusion. It addresses two problems: the accuracy of traditional hyperspectral ground object classification methods is difficult to improve, and the spatial features in hyperspectral remote sensing images are not fully exploited. The main points of the technical scheme are: obtain labeled sample data from the original hyperspectral image; extract spatial features from the raw data; randomly divide the labeled samples into training samples and test samples; train on the hyperspectral remote sensing data with the residual network and feature fusion method; and use the trained network model to classify the ground objects and visualize the results. The method improves the performance of the classification model, makes full use of the deep features extracted by the residual network, and effectively improves classification accuracy.

Description

Hyperspectral remote sensing ground object classification method based on residual network and feature fusion module
Technical Field
The invention relates to the field of hyperspectral image classification, and in particular to a hyperspectral remote sensing ground object classification method based on a residual network and a feature fusion module.
Background
Hyperspectral images are characterized by many narrow spectral bands, a large amount of information, and high resolution. These characteristics give hyperspectral imagery strong capabilities for ground object identification and fine-grained classification, but the abundant object, spatial, and spectral information is often underexploited, which makes hyperspectral image classification challenging. Hyperspectral classification algorithms can be applied in agriculture, forestry, environmental monitoring, and other fields. In agriculture, they can detect pests and diseases of fruit crops and identify weeds; in forestry, they can identify tree species and assist forest zoning; in environmental monitoring, they can track water quality changes and detect water pollution. Hyperspectral image classification is therefore of great significance.
In terms of feature extraction, traditional machine learning methods extract only shallow features, and classification features must be designed manually through feature engineering to improve accuracy, which is time-consuming. Compared with traditional hyperspectral image classification, deep learning automatically extracts deep features and has strong classification capability, so combining hyperspectral image classification with deep learning is currently the most effective approach.
At present, many methods exist for classifying hyperspectral images. Inputs are generally small, pixel-centered patches, so the network structure cannot be enlarged arbitrarily. The residual network, proposed in the 2015 ImageNet competition, improves accuracy by increasing network depth: its identity mappings allow depth to grow without introducing extra parameters or increasing computational complexity. A residual network extracts features well, and fusing features of different scales raises the utilization of the deep features that deep learning extracts from hyperspectral images. Designing a network that uses residual connections to increase depth and fuses the extracted features is therefore of great significance for hyperspectral image classification.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a hyperspectral remote sensing ground object classification method based on a residual network and a feature fusion module that makes full use of the extracted features to improve hyperspectral ground object classification accuracy.
In order to solve the above technical problem, the invention adopts the following technical scheme. The hyperspectral remote sensing ground object classification method based on the residual network and the feature fusion module comprises the following steps:
step one, obtaining the first principal component of the original image with a principal component analysis (PCA) algorithm and using it as the input sample; and constructing labeled samples;
step two, randomly dividing the labeled samples into training samples and test samples;
step three, constructing a network model from the training samples using the residual network and the feature fusion module and training it: inputting the first principal component into a residual block for processing, and then feeding the residual block's outputs into deconvolution modules to extract deep features;
fusing the deep features and the shallow features together through a feature fusion module;
inputting the fused features into the residual block again, and then taking the output of the residual block as the input of an independent residual layer to obtain more features;
inputting the features extracted by the independent residual layer into the fully-connected layer;
iterating the model until the model converges;
finishing the model training;
step four, classifying the test samples by using the trained model, and calculating a classification result by using a confusion matrix;
and step five, classifying the hyperspectral remote sensing images through the network model after training and testing, and outputting a visualization result.
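By way of illustration only (this sketch is not part of the patent; the cube dimensions are assumed, with 32 bands chosen to match the data described later), step one's PCA reduction to a first principal component can be written in plain NumPy:

```python
import numpy as np

def first_principal_component(cube):
    """Return the first-principal-component image of an H x W x B hyperspectral cube.

    Minimal PCA sketch: project each pixel's spectrum onto the eigenvector of the
    spectral covariance matrix with the largest eigenvalue, keeping only PC1.
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)                      # center each band
    cov = np.cov(x, rowvar=False)            # B x B spectral covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    pc1_axis = eigvecs[:, -1]                # direction of maximum variance
    return (x @ pc1_axis).reshape(h, w)

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 32))                # toy cube with 32 spectral bands
pc1 = first_principal_component(cube)        # single-band image used as network input
```

The resulting single-band image is what the later steps extract K×K patches from.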
As a preferred scheme, the labeled samples are randomly divided into training samples and test samples at a ratio of 1:1.
As a preferred scheme, labeled samples are extracted as K×K pixel patches, where K is the pixel-block size and is an odd number.
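A minimal sketch of this K×K patch extraction (not from the patent; the reflect padding at image borders is an assumption the patent does not specify):

```python
import numpy as np

def extract_patch(image, row, col, k):
    """Extract the K x K neighborhood centered on pixel (row, col); K must be odd."""
    assert k % 2 == 1, "K must be odd so the labeled pixel sits at the patch center"
    r = k // 2
    padded = np.pad(image, r, mode="reflect")   # pad so border pixels get full patches
    return padded[row:row + k, col:col + k]

img = np.arange(36).reshape(6, 6)
patch = extract_patch(img, 0, 0, 3)             # 3x3 patch around a corner pixel
```

With padding r = K // 2, the labeled pixel always lands at the center of the patch, even on the image border.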
As a preferred scheme, inputting the first principal component into the residual block specifically comprises: inputting the first principal component into the first residual layer of the residual block, passing it through the second and third residual layers, and then into a first pooling layer performing 3×3 max pooling, with 32 convolution kernels; the pooled output is then taken as the input of the fourth residual layer, passed through the fifth and sixth residual layers, and through a second pooling layer performing 3×3 max pooling, with 64 convolution kernels.
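The identity-mapping residual layer and max pooling described above can be sketched on a single-channel map (an illustrative simplification, not the patent's multi-channel implementation; the stride of 2 for pooling is an assumption):

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same' convolution of a 2-D map with a small kernel, stride 1."""
    k = w.shape[0]
    p = np.pad(x, k // 2)
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i + k, j:j + k] * w)
    return out

def residual_layer(x, w):
    """Identity-mapping residual layer: y = ReLU(x + F(x)).
    The skip path adds no parameters, which is why depth can grow cheaply."""
    return np.maximum(x + conv2d_same(x, w), 0.0)

def max_pool(x, k=3, stride=2):
    """3x3 max pooling, as used after each group of residual layers."""
    h = (x.shape[0] - k) // stride + 1
    w_ = (x.shape[1] - k) // stride + 1
    out = np.empty((h, w_))
    for i in range(h):
        for j in range(w_):
            out[i, j] = x[i * stride:i * stride + k, j * stride:j * stride + k].max()
    return out

rng = np.random.default_rng(1)
x = rng.random((9, 9))
w = rng.standard_normal((3, 3)) * 0.1
y = residual_layer(residual_layer(residual_layer(x, w), w), w)  # three stacked residual layers
pooled = max_pool(y)                                            # first pooling stage
```

Note the residual layers preserve the spatial size, so shallow and deep maps can later be aligned for fusion.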
As a preferred scheme, feeding the residual block's outputs into the deconvolution modules to extract deep features specifically comprises: the output of the first pooling layer of the first residual block is used as the input of the first deconvolution module, and the output of the second pooling layer of the first residual block is used as the input of the second deconvolution module.
As a preferred scheme, the first deconvolution module has a 5×5 convolution kernel with stride 1 and 64 kernels; the second deconvolution module has a 3×3 convolution kernel with stride 1 and 64 kernels.
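A sketch of how a transposed convolution (deconvolution) enlarges a feature map, with the same kernel sizes and stride 1 as above (single channel, toy input; this is an illustration of the output-size arithmetic, not the patent's implementation):

```python
import numpy as np

def conv_transpose2d(x, w, stride=1):
    """Naive 2-D transposed convolution: each input pixel 'stamps' the kernel
    into the output. Output size = (in - 1) * stride + k, so a stride-1
    deconvolution enlarges the map by k - 1 in each dimension."""
    k = w.shape[0]
    h, w_in = x.shape
    out = np.zeros(((h - 1) * stride + k, (w_in - 1) * stride + k))
    for i in range(h):
        for j in range(w_in):
            out[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * w
    return out

x = np.ones((4, 4))
y5 = conv_transpose2d(x, np.ones((5, 5)))   # 5x5 kernel, stride 1: 4x4 -> 8x8
y3 = conv_transpose2d(x, np.ones((3, 3)))   # 3x3 kernel, stride 1: 4x4 -> 6x6
```

This enlargement is what lets the pooled (spatially reduced) deep maps be brought back up to a size where they can be fused with shallower maps.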
As a preferred scheme, the independent residual layer has 3×3 convolution kernels, and the number of kernels is 128.
As a preferred scheme, inputting the features extracted by the independent residual layer into the fully-connected layer specifically comprises: mapping and fusing the local features extracted during convolution, computing the loss with a Softmax cross-entropy function, computing the gradient of each parameter through backpropagation, and dynamically updating the network parameters with the Adam algorithm.
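The Softmax cross-entropy loss and one Adam parameter update can be sketched for a linear classifier head (an illustration only; the feature dimension and data are synthetic, while the learning rate of 0.0005 and batch size of 20 match the experimental settings given later):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Softmax cross-entropy loss and its gradient with respect to the logits."""
    z = logits - logits.max(axis=1, keepdims=True)           # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n = logits.shape[0]
    loss = -np.log(p[np.arange(n), labels]).mean()
    grad = p.copy()
    grad[np.arange(n), labels] -= 1.0                        # dL/dlogits = (p - onehot)/n
    return loss, grad / n

def adam_step(param, grad, m, v, t, lr=5e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first and second moments."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

rng = np.random.default_rng(2)
W = rng.standard_normal((8, 3)) * 0.1        # 3 classes: lake water, houses, farmland
x = rng.standard_normal((20, 8))             # one batch of 20 feature vectors
y = rng.integers(0, 3, size=20)
m, v = np.zeros_like(W), np.zeros_like(W)
loss0, _ = softmax_cross_entropy(x @ W, y)
for t in range(1, 201):
    _, g = softmax_cross_entropy(x @ W, y)
    W, m, v = adam_step(W, x.T @ g, m, v, t)  # chain rule: dL/dW = x^T dL/dlogits
loss1, _ = softmax_cross_entropy(x @ W, y)
```

In the full network the gradient flows back through all residual and deconvolution layers, but the loss and update rule are the same.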
The invention has the following beneficial effects: the feature-fusion network based on the residual network makes full use of the extracted spatial feature information and improves classification accuracy even with few labeled samples;
the residual error network increases the depth of the network through the principle of identity mapping without introducing extra parameters and computational complexity, and simultaneously relieves the problem of gradient dispersion or disappearance caused by simply increasing the number of layers of the deep learning network. The main idea of the residual error is to highlight tiny changes, and the extracted feature information can be supplemented by the tiny changes of the feature fusion modules of different layers, so that the feature graph extracted by the hyperspectral image is more fully utilized, and the classification result is improved.
Drawings
FIG. 1 is a block diagram of the method
FIG. 2 is a flow chart of the invention
FIG. 3 shows the Taihu Lake classification result (original image on the left, classification result on the right)
FIG. 4 shows the Chaohu Lake classification result (original image on the left, classification result on the right)
FIG. 5 shows the average spectral curves of Taihu Lake
FIG. 6 shows the average spectral curves of Chaohu Lake
Detailed Description
Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in FIGS. 1-2, the hyperspectral remote sensing ground object classification method based on the residual network and the feature fusion module comprises the following steps:
Step one, obtain the first principal component of the original image with a principal component analysis (PCA) algorithm and use it as the input sample; and construct labeled samples. Labeled samples are extracted as K×K pixel patches, where K is the pixel-block size and is an odd number. The labeled samples are randomly divided into training samples and test samples at a ratio of 1:1.
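The random 1:1 split can be sketched as follows (illustrative only; the patent does not specify the shuffling mechanism, so a permutation of sample indices is assumed):

```python
import numpy as np

def split_half(indices, rng):
    """Random 1:1 split of labeled sample indices into training and test sets."""
    perm = rng.permutation(indices)
    half = len(perm) // 2
    return perm[:half], perm[half:]

rng = np.random.default_rng(42)
train_idx, test_idx = split_half(np.arange(100), rng)   # e.g. 100 labeled pixels
```

Because the split is random, repeated runs train on different samples, which is why the experiments below average over five runs.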
Step two, randomly divide the labeled samples into training samples and test samples;
Step three, construct a network model from the training samples using the residual network and the feature fusion module and train it: input the first principal component into a residual block for processing, specifically: input the first principal component into the first residual layer of the residual block, pass it through the second and third residual layers, then through a first pooling layer performing 3×3 max pooling, with 32 convolution kernels; take the pooled output as the input of the fourth residual layer, pass it through the fifth and sixth residual layers, and through a second pooling layer performing 3×3 max pooling, with 64 convolution kernels.
Then feed the residual block's outputs into the deconvolution modules to extract deep features, specifically:
the output of the first pooling layer of the first residual block is used as the input of the first deconvolution module, and the output of the second pooling layer of the first residual block is used as the input of the second deconvolution module.
The first deconvolution module has a 5×5 convolution kernel with stride 1 and 64 kernels; the second deconvolution module has a 3×3 convolution kernel with stride 1 and 64 kernels.
Fusing the deep features and the shallow features together through a feature fusion module;
Input the fused features into the first residual layer of the residual block again, then take the output of the second pooling layer of the residual block as the input of the independent residual layer to obtain more features.
Input the features extracted by the independent residual layer into the fully-connected layer; the independent residual layer has 3×3 convolution kernels, and the number of kernels is 128. Specifically: map and fuse the local features extracted during convolution, compute the loss with a Softmax cross-entropy function, compute the gradient of each parameter through backpropagation, and dynamically update the network parameters with the Adam algorithm.
Iterating the model until the model converges;
finishing the model training;
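The fusion step in the procedure above, combining shallow maps with deconvolved deep maps, amounts to stacking feature maps along the channel axis once their spatial sizes agree. A minimal sketch (channel counts of 32 and 64 follow the layer description; channels-first layout and concatenation as the fusion operator are assumptions):

```python
import numpy as np

def fuse(shallow, deep_upsampled):
    """Feature-fusion sketch: stack shallow and deconvolved deep feature maps
    along the channel axis so later residual layers see both scales at once."""
    assert shallow.shape[1:] == deep_upsampled.shape[1:], \
        "spatial sizes must match after deconvolution"
    return np.concatenate([shallow, deep_upsampled], axis=0)

rng = np.random.default_rng(3)
shallow = rng.random((32, 9, 9))   # 32 shallow maps (channels-first)
deep = rng.random((64, 9, 9))      # 64 deep maps, enlarged back to 9x9 by deconvolution
fused = fuse(shallow, deep)        # 96-channel input for the next residual layer
```

The deconvolution modules exist precisely to satisfy the spatial-size assertion: without them the pooled deep maps would be smaller than the shallow ones.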
step four, classifying the test samples by using the trained model, and calculating a classification result by using a confusion matrix;
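Step four's evaluation can be sketched as follows (illustrative toy labels; the overall accuracy and Cohen's kappa formulas are the standard ones and match the metrics reported in the experiments):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count matrix with true classes as rows and predicted classes as columns."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = cm.sum()
    po = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return (po - pe) / (1 - pe)

y_true = np.array([0, 0, 1, 1, 2, 2])   # 3 classes: lake water, houses, farmland
y_pred = np.array([0, 0, 1, 2, 2, 2])
cm = confusion_matrix(y_true, y_pred, 3)
oa = overall_accuracy(cm)
```

The per-class rows of the confusion matrix also yield producer's and user's accuracies if a finer breakdown is needed.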
and step five, classifying the hyperspectral remote sensing images through the network model after training and testing, and outputting a visualization result.
The following presents results of applying the method to hyperspectral remote sensing images of Taihu Lake (Wuxi, Jiangsu Province) and the Xucun mountain area of Hefei, Anhui Province, in order to verify the validity of the method:
1) description of data
The data sets are hyperspectral data captured by the Zhuhai-1 satellite: hyperspectral remote sensing images of Taihu Lake in Wuxi, Jiangsu Province, and of the Xucun mountain area of Hefei, Anhui Province. The first experimental data set is a hyperspectral image of Taihu Lake, Wuxi, Jiangsu Province, China, imaged on October 1, 2018; the upper-left corner is at longitude 120°11′34″ and latitude 31°06′52″, and this data set is referred to as Taihu. The second experimental data set is a hyperspectral remote sensing image of the Xucun mountain area of Hefei, Anhui, China, imaged on April 17, 2019; the upper-left corner is at longitude 117.24491051° and latitude 31.84037861°, and this data set is referred to as Chaohu.
The hyperspectral images have 32 bands in total with a spatial resolution of 10 meters. The original images are 5056×5056 pixels; both data sets were cropped to 3000×3000 for the experiments. The data are divided into three classes: lake water, houses, and farmland. Pixels were labeled by combining the spectral curves with Google Maps. The average spectral curves of Taihu and Chaohu are shown in FIGS. 5 and 6, respectively, where red represents lake water, green represents houses, and blue represents farmland; the three land-cover classes are well separated in both figures. The Taihu experimental data contain 29346 labeled pixels in total, and the Chaohu data contain 29512; the ratio of training samples to test samples in the experiments is 1:1, with the specific numbers shown in Table 1.
TABLE 1 number of samples in data set
[Table 1 appears as an image in the original publication.]
2) Experimental setup
Preprocessing uses a principal component analysis (PCA) algorithm to reduce the dimensionality of the hyperspectral image from 32 bands to 3 bands as input. Training samples are drawn randomly, and the test set consists of the remaining samples. Because of the randomness of the training samples, model accuracy is not perfectly repeatable; each experiment is therefore trained five times to ensure the stability of the results and make them more convincing.
The residual network contains a large number of parameters to train, and the network structure directly determines their number: the more complex the structure, the more training parameters and the harder the training, so fixing some parameters is necessary. In the experiments, the learning rate was set to 0.0005, the dropout ratio (drop_prob) to 0.5, the batch size to 20, and the number of iterations to 20000. The improved residual network parameters and network settings are shown in Table 4, and comparative experiments were run on the same two data sets with a Siamese (twin) network, SVM, CNN, and a conventional residual network. The Siamese network classifies accurately with small samples, and its accuracy can be computed with a single forward pass; the SVM uses a radial basis function kernel, with the parameter g and penalty factor c obtained by five-fold cross-validation; the CNN parameter settings are shown in Table 2; the conventional residual network parameters and network settings are shown in Table 3.
Table 2 CNN network architecture
[Table 2 appears as an image in the original publication.]
TABLE 3 conventional residual error network
[Table 3 appears as images in the original publication.]
TABLE 4 Improved residual network (Dconv1 and Dconv2 are deconvolution modules)
[Table 4 appears as an image in the original publication.]
3) Example results
The improved residual network is based on a conventional residual network with two added modules: deconvolution and fusion. Deconvolution enlarges the feature maps; multi-scale fusion then combines feature maps of different dimensions, raising the utilization of the hyperspectral image's features. Finally, the fused features are input into a residual block to extract more deep features for training the model and obtain a better classification result. The experimental results are shown in Table 5: the improved residual network achieves the highest accuracy on both the Taihu and Chaohu data, and its kappa coefficient is the highest among all algorithms.
Table 5 shows the classification results of the different comparison algorithms. The accuracy of the improved residual network exceeds that of the conventional residual network by 2.2% and 2.05% on the Taihu and Chaohu data, respectively, and its Kappa coefficient is higher by 1.19% and 2.96%. The classification accuracy of the CNN is 3.84% lower than that of the improved residual network, and the accuracy of the Siamese network and of the SVM from traditional machine learning is far below that of the deep learning networks.
TABLE 5 comparison of the different methods
[Table 5 appears as an image in the original publication.]
The feature fusion module method based on the residual network effectively raises the utilization of the spatial features of hyperspectral data, addresses the low classification accuracy of traditional machine learning algorithms, and offers a new idea for hyperspectral image classification. The experiments show that, with the feature fusion module, the improved residual network makes full use of the spatial information of the hyperspectral images and improves classification accuracy over the other algorithms, demonstrating the effectiveness of the method. More targeted fusion of features remains an open problem, so the combination of residual networks and feature fusion modules still has great potential in hyperspectral image classification, and how to improve classification accuracy more quickly and effectively remains to be explored.
The above-mentioned embodiments merely illustrate the principles and effects of the invention; they are illustrative, not restrictive. It should be noted that those skilled in the art can make various changes and modifications without departing from the inventive concept, and such changes and modifications fall within the protection scope of the invention.

Claims (8)

1. A hyperspectral remote sensing ground object classification method based on a residual network and a feature fusion module, characterized by comprising the following steps:
step one, obtaining a first principal component of an original image through a Principal Component Analysis (PCA) algorithm, and using the first principal component as an input sample; and making a label sample;
step two, randomly dividing the label sample into a training sample and a testing sample;
thirdly, constructing a network model by using training samples through a residual error network and a feature fusion module for training: inputting the first principal component into a residual block for processing, and then inputting output information of the residual block into a deconvolution module to extract deep features;
fusing the deep features and the shallow features together through a feature fusion module;
inputting the fused features into the residual block again, and then taking the output in the residual block as the input of an independent residual layer to obtain more features;
inputting the features extracted by the independent residual layer into the fully-connected layer;
iterating the model until the model converges;
finishing the model training;
step four, classifying the test samples by using the trained model, and calculating a classification result by using a confusion matrix;
and step five, classifying the hyperspectral remote sensing images with the trained and tested network model, and outputting a visualized result.
2. The hyperspectral remote sensing ground object classification method based on the residual network and the feature fusion module as claimed in claim 1, characterized in that the labeled samples are randomly divided into training samples and test samples at a ratio of 1:1.
3. The hyperspectral remote sensing ground object classification method based on the residual network and the feature fusion module as claimed in claim 1, characterized in that labeled samples are extracted as K×K pixel patches, where K is the pixel-block size and is an odd number.
4. The hyperspectral remote sensing ground object classification method based on the residual network and the feature fusion module as claimed in claim 1, characterized in that inputting the first principal component into the residual block specifically comprises: inputting the first principal component into the first residual layer of the residual block, passing it through the second and third residual layers, and then into a first pooling layer performing 3×3 max pooling, with 32 convolution kernels; taking the pooled output as the input of the fourth residual layer, passing it through the fifth and sixth residual layers, and through a second pooling layer performing 3×3 max pooling, with 64 convolution kernels.
5. The hyperspectral remote sensing ground object classification method based on the residual error network and the feature fusion module as claimed in claim 1, wherein the output information of the residual error block is then input into a deconvolution module to extract deep features, specifically: the output of the first pooling layer of the first residual block is used as the input of the first deconvolution module; the output of the second pooling layer of the first residual block is used as the input of the second deconvolution module.
6. The hyperspectral remote sensing ground object classification method based on the residual network and the feature fusion module as claimed in claim 5, characterized in that the first deconvolution module has a 5×5 convolution kernel with stride 1 and 64 kernels, and the second deconvolution module has a 3×3 convolution kernel with stride 1 and 64 kernels.
7. The hyperspectral remote sensing ground object classification method based on the residual network and the feature fusion module as claimed in claim 6, characterized in that the independent residual layer has 3×3 convolution kernels and the number of kernels is 128.
8. The hyperspectral remote sensing ground object classification method based on the residual network and the feature fusion module as claimed in claim 7, characterized in that inputting the features extracted by the independent residual layer into the fully-connected layer specifically comprises: mapping and fusing the local features extracted during convolution, computing the loss with a Softmax cross-entropy function, computing the gradient of each parameter through backpropagation, and dynamically updating the network parameters with the Adam algorithm.
CN202010283765.6A 2020-04-13 2020-04-13 Hyperspectral remote sensing ground object classification method based on residual error network and feature fusion module Active CN111652039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010283765.6A CN111652039B (en) 2020-04-13 2020-04-13 Hyperspectral remote sensing ground object classification method based on residual error network and feature fusion module

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010283765.6A CN111652039B (en) 2020-04-13 2020-04-13 Hyperspectral remote sensing ground object classification method based on residual error network and feature fusion module

Publications (2)

Publication Number Publication Date
CN111652039A true CN111652039A (en) 2020-09-11
CN111652039B CN111652039B (en) 2023-04-18

Family

ID=72347834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010283765.6A Active CN111652039B (en) 2020-04-13 2020-04-13 Hyperspectral remote sensing ground object classification method based on residual error network and feature fusion module

Country Status (1)

Country Link
CN (1) CN111652039B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636769A (en) * 2018-12-18 2019-04-16 武汉大学 EO-1 hyperion and Multispectral Image Fusion Methods based on the intensive residual error network of two-way
CN109871830A (en) * 2019-03-15 2019-06-11 中国人民解放军国防科技大学 Spatial-spectral fusion hyperspectral image classification method based on three-dimensional depth residual error network
CN110084159A (en) * 2019-04-15 2019-08-02 西安电子科技大学 Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint
US20200026953A1 (en) * 2018-07-23 2020-01-23 Wuhan University Method and system of extraction of impervious surface of remote sensing image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENQUAN GAN: "A Two-Branch Convolution Residual Network for Image Compressive Sensing", IEEE *
LUO Huilan: "A Survey of Image Semantic Segmentation Based on Deep Networks", Acta Electronica Sinica *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365540A (en) * 2020-11-18 2021-02-12 北京观微科技有限公司 Ship target positioning detection method and system suitable for multiple scales
CN112508066A (en) * 2020-11-25 2021-03-16 四川大学 Hyperspectral image classification method based on residual error full convolution segmentation network
CN112749906A (en) * 2021-01-14 2021-05-04 云南中烟工业有限责任公司 Sensory evaluation method for spectrum data of cigarette mainstream smoke
CN113723255A (en) * 2021-08-24 2021-11-30 中国地质大学(武汉) Hyperspectral image classification method and storage medium
CN113723255B (en) * 2021-08-24 2023-09-01 中国地质大学(武汉) Hyperspectral image classification method and storage medium

Also Published As

Publication number Publication date
CN111652039B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN111652039B (en) Hyperspectral remote sensing ground object classification method based on residual error network and feature fusion module
CN110399909B (en) Hyperspectral image classification method based on label constraint elastic network graph model
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN109840556B (en) Image classification and identification method based on twin network
Li et al. Adaptive multiscale deep fusion residual network for remote sensing image classification
CN109784392B (en) Hyperspectral image semi-supervised classification method based on comprehensive confidence
CN107358260B (en) Multispectral image classification method based on surface wave CNN
CN115331087B (en) Remote sensing image change detection method and system fusing regional semantics and pixel characteristics
CN111985543A (en) Construction method, classification method and system of hyperspectral image classification model
Zhao et al. ADRN: Attention-based deep residual network for hyperspectral image denoising
CN113610905B (en) Deep learning remote sensing image registration method based on sub-image matching and application
CN111639587A (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN112036249A (en) Method, system, medium and terminal for end-to-end pedestrian detection and attribute identification
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN112052758A (en) Hyperspectral image classification method based on attention mechanism and recurrent neural network
Huan et al. MAENet: multiple attention encoder–decoder network for farmland segmentation of remote sensing images
CN111738052A (en) Multi-feature fusion hyperspectral remote sensing ground object classification method based on deep learning
CN111353412B (en) End-to-end 3D-CapsNet flame detection method and device
CN116486263A (en) Hyperspectral anomaly detection method based on depth features and double-tributary isolated forest
CN114187477A (en) Small sample hyperspectral image classification method based on supervised self-contrast learning
CN114373120B (en) Multi-scale space fusion hyperspectral soil heavy metal pollution identification and evaluation method
CN116977747B (en) Small sample hyperspectral classification method based on multipath multi-scale feature twin network
CN115205853B (en) Image-based citrus fruit detection and identification method and system
Wang et al. An Instance Segmentation Method for Anthracnose Based on Swin Transformer and Path Aggregation
CN112633155B (en) Natural conservation place human activity change detection method based on multi-scale feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant