CN111914696A - Hyperspectral remote sensing image classification method based on transfer learning - Google Patents

Hyperspectral remote sensing image classification method based on transfer learning Download PDF

Info

Publication number
CN111914696A
CN111914696A CN202010685827.6A CN202010685827A CN111914696A CN 111914696 A CN111914696 A CN 111914696A CN 202010685827 A CN202010685827 A CN 202010685827A CN 111914696 A CN111914696 A CN 111914696A
Authority
CN
China
Prior art keywords
hyperspectral
convolution
layer
sample set
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010685827.6A
Other languages
Chinese (zh)
Inventor
高红民
陈忠昊
李臣明
邱泽林
缪雅文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN202010685827.6A priority Critical patent/CN111914696A/en
Publication of CN111914696A publication Critical patent/CN111914696A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral remote sensing image classification method based on transfer learning, which can be used for transferring a convolutional neural network model obtained by training on non-hyperspectral remote sensing data to the extraction of hyperspectral remote sensing image characteristics.

Description

Hyperspectral remote sensing image classification method based on transfer learning
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a hyperspectral remote sensing image classification method based on transfer learning.
Background
The remote sensing image processing technology plays an increasingly important role in production and life with the advantages of high flexibility, good reproducibility and the like. The hyperspectral image processing technology is an important branch of the field, and among a plurality of hyperspectral image processing technologies, the hyperspectral image classification technology is always one of research focuses and hotspots. By utilizing a hyperspectral image classification technology, the distribution and growth conditions of crops can be acquired, so that effective and scientific agricultural management is achieved; the method can accurately identify and classify urban houses, pavements and the like, thereby providing help for urban planning and the like.
The traditional hyperspectral image classification method can be roughly divided into a spectrum matching classification method according to a ground feature spectral characteristic curve and a data statistical characteristic-based classification method, and the data statistical characteristic-based classification method can be divided into supervised classification and unsupervised classification according to whether a sample with a mark is required in model training. The unsupervised classification algorithm is that only a training data set is used, no sample with labeled classification results is used, and data features can be analyzed only by a computer. The supervised classification algorithm refers to that a training data set and samples of labeled classification results are input into a model together, so that the obtained classification precision is necessarily high, but a large number of labeled samples are needed.
In the past decades, various conventional machine learning methods according to supervised classification concepts have been applied to hyperspectral image classification tasks. However, the universality of the traditional machine learning algorithm model is not high, when the data distribution changes, most models need to be retrained to fit new data, and a large amount of manpower and material resources are consumed for collecting new data again. In addition, due to the fact that a large amount of labeled data is needed by the supervised machine learning method, and the number of labeled samples in the hyperspectral image data is limited, the phenomenon of over-training fitting often occurs in the hyperspectral image classification work by the traditional machine learning method. Therefore, it is important how to apply the trained model to directly fit new data or find the internal relationship between new and old data.
Disclosure of Invention
The invention aims to solve the technical problem that a hyperspectral remote sensing image classification method based on transfer learning is provided, and the problems that a deep convolutional neural network is easy to over-fit when a small number of marked samples of a hyperspectral remote sensing image are learned and the model generalization capability is poor are solved.
In order to solve the technical problem, the invention provides a hyperspectral remote sensing image classification method based on transfer learning, which comprises the following steps:
(1) constructing a convolution neural network model for RGB image classification and initializing model weight parameters;
(2) training the model constructed in the step (1) by using the existing massive RGB images with labels;
(3) storing the trained model structure and the weight parameters thereof;
(4) carrying out normalization preprocessing on the hyperspectral image data;
(5) acquiring and dividing a data set;
(6) using part of modules and weight parameters thereof in the model stored in the step (3) as the first half part of the hyperspectral image classification model to extract shallow layer characteristics of the hyperspectral image training sample set, and initializing weight parameters of the rest of modules in the model stored in the step (3);
(7) training the weight parameters of the initialized residual part module in the step (6) by using the shallow feature extracted in the step (6) to obtain the latter half part of the hyperspectral classification model;
(8) combining the front part and the rear part of the hyperspectral image classification model obtained in the step (6) and the step (7) into a final hyperspectral image classification model;
(9) a prediction is made of the set of test samples.
Preferably, in step (1), the structure of the constructed convolutional neural network model is as follows: input layer → convolution module 1 → max pooling layer → convolution module 2 → max pooling layer → convolution module 3 → max pooling layer → convolution module 4 → max pooling layer → full connected layer module → output layer.
Preferably, in the step (1), the parameters of each module in the convolutional neural network model are as follows: the number of feature mapping layers of the input layer is 3; the convolution module 1 comprises two convolution layers with 64 characteristic mapping layers, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 2 comprises two convolution layers with a characteristic mapping layer of 128, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 3 comprises two convolution layers with 256 characteristic mapping layers, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 4 comprises two convolution layers with characteristic mapping layers of 512, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the full connection layer comprises three layers of full connection structures, and the number of the hidden units is 4096, 4096 and 1000; the pooling size of the maximum pooling layer is 2 x 2 pixels.
Preferably, in the step (4), the normalization preprocessing of the hyperspectral image data specifically includes the following steps:
(41) acquiring data I of a first wave band of a hyperspectral image1Calculating the average value Ave of the band data1And standard deviation S1
(42) Calculating to obtain a normalized value N of the first waveband data according to the following formula1
N1=(I1-Ave1)/S1
(43) And (3) repeating the steps (1) and (2), and recombining all normalized wave bands into a normalized hyperspectral image.
Preferably, in the step (5), the data set acquisition and division specifically includes the following steps:
(51) performing principal component extraction on the normalized hyperspectral data to obtain a data principal component with a spectral dimension reduced to 3;
(52) taking each pixel point to be classified as the center of the data after dimensionality reduction, and acquiring a square neighborhood block with the size of 9 multiplied by 3 pixels;
(53) and selecting 10% of the neighborhood blocks and the class labels corresponding to the central pixel points of the neighborhood blocks from all the obtained neighborhood blocks as a training sample set, and using the rest of the neighborhood blocks and the class labels corresponding to the central pixel points of the neighborhood blocks as a test sample set.
Preferably, in the step (6), part of the modules in the model stored in the step (3) are the first two convolution modules, an optimal migration strategy is selected according to the influence of contrast migration of each convolution module on the classification result, and in addition, when the part of modules are used for extracting the shallow features of the hyperspectral image training sample set, the weight parameters of the modules are kept unchanged, wherein the weight parameters are stored in the step (3).
Preferably, the training of the remaining part of the modules in the step (7) specifically includes: and when the remaining three convolution modules are trained, a strategy of layer-by-layer training is adopted, and the last three convolution modules adopt different learning rates: 0.001, 0.0008, 0.0005; and (3) adopting a strategy of training layer by layer for the full-connection layer, but adopting a uniform learning rate: 0.001; and the layer-by-layer training strategy only carries out gradient back transmission on modules with unfixed parameters.
Preferably, in the step (9), the predicting the test sample set specifically includes the following steps:
(91) inputting the test sample set into the hyperspectral image classification model obtained in the step (8) to obtain a prediction result of the test sample set;
(92) and obtaining the accuracy of the classification result by accurately calculating the classification label corresponding to the prediction result and the test sample set.
Preferably, in step (92), the formula for obtaining the accuracy of the classification result by accurately calculating the prediction result of the test sample set and the class label corresponding to the test sample set is as follows:
Figure BDA0002587512210000031
Figure BDA0002587512210000041
Figure BDA0002587512210000042
wherein
Figure BDA0002587512210000043
Wherein A isiThe proportion of the number of correctly classified samples of the ith class to the total number of samples of the ith class, and N is the total number of samples to be classified,aiNumber of true samples of the ith type ground object, biTo predict the number of samples of the obtained i-th type of feature, i.e.
Figure BDA0002587512210000044
Figure BDA0002587512210000045
Evaluating the classification performance of the model by using three indexes of the overall classification precision OA, the average classification precision AA and the Kappa coefficient; the total classification precision is equal to the percentage of the number of correctly classified pixels in the test sample set to the total number of samples in the whole test sample set, the average precision is the average value of the classification precision of each class, and the Kappa coefficient is another index for evaluating the classification precision and is used for evaluating the consistency of the predicted classification result of the test sample set and the corresponding mark of the real test sample set; the ground objects to be classified in the hyperspectral test sample set have class C, and the number of the samples for correctly classifying the ith ground object into the ith ground object is recorded as niiThe number of samples in which the i-th class of feature is wrongly classified into the j-th class is nij
The invention has the beneficial effects that: the convolutional neural network model trained on non-hyperspectral remote sensing data can be migrated to the extraction of the hyperspectral remote sensing image features, and the migration learning method solves the problems that the deep convolutional neural network is easy to over-fit when a small number of marked samples of the hyperspectral remote sensing image are learned and the model generalization capability is poor.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of a convolutional neural network model for RGB image classification constructed by the present invention.
FIG. 3(a) is a data diagram of an Indian Pines hyperspectral remote sensing image in a simulation experiment of the present invention.
FIG. 3(b) is a pseudo-color labeled graph of the simulation experiment Indian Pines hyperspectral remote sensing image data.
FIG. 3(c) is a classification result diagram of the non-migrated model of Indian Pines hyperspectral remote sensing image data of the simulation experiment of the present invention.
FIG. 3(d) is a classification result diagram of a migration model of Indian Pines hyperspectral remote sensing image data of a simulation experiment of the present invention.
Detailed Description
As shown in fig. 1, a hyperspectral remote sensing image classification method based on transfer learning includes the following steps:
(1) constructing a convolutional neural network model for RGB image classification and initializing model weight parameters, as shown in FIG. 2;
(2) training the model constructed in the step (1) by using the existing massive RGB images with labels;
(3) storing the trained model structure and the weight parameters thereof;
(4) performing normalization preprocessing on the hyperspectral image data shown in FIG. 3 (a);
(5) acquiring and dividing a data set;
(6) using part of modules and weight parameters thereof in the model stored in the step (3) as the first half part of the hyperspectral image classification model to extract shallow layer characteristics of the hyperspectral image training sample set, and initializing weight parameters of the rest of modules in the model stored in the step (3);
(7) training the weight parameters of the initialized residual part module in the step (6) by using the shallow feature extracted in the step (6) to obtain the latter half part of the hyperspectral classification model;
(8) combining the front part and the rear part of the hyperspectral image classification model obtained in the step (6) and the step (7) into a final hyperspectral image classification model;
(9) predicting the test sample set;
(10) the prediction results obtained by predicting the obtained model for all samples are shown in fig. 3 (d).
Fig. 3(b) is a pseudo color picture drawn from the image raw data. FIG. 3(c) is a classification result diagram of the non-migrated model of Indian Pines hyperspectral remote sensing image data of the simulation experiment of the present invention.
In the step (1), the constructed convolutional neural network model has the structure as follows: input layer → convolution module 1 → max pooling layer → convolution module 2 → max pooling layer → convolution module 3 → max pooling layer → convolution module 4 → max pooling layer → full connected layer module → output layer.
In the step (1), the parameters of each module in the convolutional neural network model are as follows: the number of feature mapping layers of the input layer is 3; the convolution module 1 comprises two convolution layers with 64 characteristic mapping layers, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 2 comprises two convolution layers with a characteristic mapping layer of 128, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 3 comprises two convolution layers with 256 characteristic mapping layers, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 4 comprises two convolution layers with characteristic mapping layers of 512, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the full connection layer comprises three layers of full connection structures, and the number of the hidden units is 4096, 4096 and 1000; the pooling size of the maximum pooling layer is 2 x 2 pixels.
In the step (4), the normalization preprocessing of the hyperspectral image data specifically comprises the following steps:
(41) acquiring data I of a first wave band of a hyperspectral image1Calculating the average value Ave of the band data1And standard deviation S1
(42) Calculating to obtain a normalized value N of the first waveband data according to the following formula1
N1=(I1-Ave1)/S1
(43) And (3) repeating the steps (1) and (2), and recombining all normalized wave bands into a normalized hyperspectral image.
In the step (5), the data set acquisition and division specifically includes the following steps:
(51) performing principal component extraction on the normalized hyperspectral data to obtain a data principal component with a spectral dimension reduced to 3;
(52) taking each pixel point to be classified as the center of the data after dimensionality reduction, and acquiring a square neighborhood block with the size of 9 multiplied by 3 pixels;
(53) and selecting 10% of the neighborhood blocks and the class labels corresponding to the central pixel points of the neighborhood blocks from all the obtained neighborhood blocks as a training sample set, and using the rest of the neighborhood blocks and the class labels corresponding to the central pixel points of the neighborhood blocks as a test sample set.
In the step (6), part of the modules in the model stored in the step (3) are used as the first two convolution modules, an optimal migration strategy is selected according to the influence of contrast migration of each convolution module on the classification result, and in addition, when the part of modules are used for extracting the shallow layer characteristics of the hyperspectral image training sample set, the weight parameters of the modules are kept unchanged, wherein the weight parameters are stored in the step (3).
The training of the remaining modules in the step (7) specifically comprises: and when the remaining three convolution modules are trained, a strategy of layer-by-layer training is adopted, and the last three convolution modules adopt different learning rates: 0.001, 0.0008, 0.0005; and (3) adopting a strategy of training layer by layer for the full-connection layer, but adopting a uniform learning rate: 0.001; and the layer-by-layer training strategy only carries out gradient back transmission on modules with unfixed parameters.
In the step (9), predicting the test sample set specifically includes the following steps:
(91) inputting the test sample set into the hyperspectral image classification model obtained in the step (8) to obtain a prediction result of the test sample set;
(92) and obtaining the accuracy of the classification result by accurately calculating the classification label corresponding to the prediction result and the test sample set.
In the step (92), the formula for obtaining the accuracy of the classification result by accurately calculating the prediction result of the test sample set and the class label corresponding to the test sample set is as follows:
Figure BDA0002587512210000071
Figure BDA0002587512210000072
Figure BDA0002587512210000073
wherein
Figure BDA0002587512210000074
Wherein A isiThe proportion of the number of correctly classified samples of the ith class to the total number of samples of the ith class, N is the total number of samples to be classified, aiNumber of true samples of the ith type ground object, biTo predict the number of samples of the obtained i-th type of feature, i.e.
Figure BDA0002587512210000075
Figure BDA0002587512210000076
Evaluating the classification performance of the model by using three indexes of the overall classification precision OA, the average classification precision AA and the Kappa coefficient; the total classification precision is equal to the percentage of the number of correctly classified pixels in the test sample set to the total number of samples in the whole test sample set, the average precision is the average value of the classification precision of each class, and the Kappa coefficient is another index for evaluating the classification precision and is used for evaluating the consistency of the predicted classification result of the test sample set and the corresponding mark of the real test sample set; the ground objects to be classified in the hyperspectral test sample set have class C, and the number of the samples for correctly classifying the ith ground object into the ith ground object is recorded as niiThe number of samples in which the i-th class of feature is wrongly classified into the j-th class is nij

Claims (9)

1. A hyperspectral remote sensing image classification method based on transfer learning is characterized by comprising the following steps:
(1) constructing a convolution neural network model for RGB image classification and initializing model weight parameters;
(2) training the model constructed in the step (1) by using the existing massive RGB images with labels;
(3) storing the trained model structure and the weight parameters thereof;
(4) carrying out normalization preprocessing on the hyperspectral image data;
(5) acquiring and dividing a data set;
(6) using part of modules and weight parameters thereof in the model stored in the step (3) as the first half part of the hyperspectral image classification model to extract shallow layer characteristics of the hyperspectral image training sample set, and initializing weight parameters of the rest of modules in the model stored in the step (3);
(7) training the weight parameters of the initialized residual part module in the step (6) by using the shallow feature extracted in the step (6) to obtain the latter half part of the hyperspectral classification model;
(8) combining the front part and the rear part of the hyperspectral image classification model obtained in the step (6) and the step (7) into a final hyperspectral image classification model;
(9) a prediction is made of the set of test samples.
2. The hyperspectral remote sensing image classification method based on transfer learning according to claim 1, wherein in the step (1), the structure of the constructed convolutional neural network model is as follows: input layer → convolution module 1 → max pooling layer → convolution module 2 → max pooling layer → convolution module 3 → max pooling layer → convolution module 4 → max pooling layer → full connected layer module → output layer.
3. The hyperspectral remote sensing image classification method based on transfer learning according to claim 1, wherein in the step (1), the parameters of each module in the convolutional neural network model are as follows: the number of feature mapping layers of the input layer is 3; the convolution module 1 comprises two convolution layers with 64 characteristic mapping layers, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 2 comprises two convolution layers with a characteristic mapping layer of 128, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 3 comprises two convolution layers with 256 characteristic mapping layers, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the convolution module 4 comprises two convolution layers with characteristic mapping layers of 512, the size of a convolution kernel is 3 multiplied by 3 pixels, and the step length is 1 pixel; the full connection layer comprises three layers of full connection structures, and the number of the hidden units is 4096, 4096 and 1000; the pooling size of the maximum pooling layer is 2 x 2 pixels.
4. The hyperspectral remote sensing image classification method based on transfer learning of claim 1, wherein in the step (4), the normalization preprocessing of the hyperspectral image data specifically comprises the following steps:
(41) acquiring data I of a first wave band of a hyperspectral image1Calculating the average value Ave of the band data1And standard deviation S1
(42) Calculating to obtain a normalized value N of the first waveband data according to the following formula1
N1=(I1-Ave1)/S1
(43) And (3) repeating the steps (1) and (2), and recombining all normalized wave bands into a normalized hyperspectral image.
5. The hyperspectral remote sensing image classification method based on transfer learning of claim 1, wherein in the step (5), the data set acquisition and division specifically comprises the following steps:
(51) performing principal component extraction on the normalized hyperspectral data to obtain a data principal component with a spectral dimension reduced to 3;
(52) taking each pixel point to be classified as the center of the data after dimensionality reduction, and acquiring a square neighborhood block with the size of 9 multiplied by 3 pixels;
(53) and selecting 10% of the neighborhood blocks and the class labels corresponding to the central pixel points of the neighborhood blocks from all the obtained neighborhood blocks as a training sample set, and using the rest of the neighborhood blocks and the class labels corresponding to the central pixel points of the neighborhood blocks as a test sample set.
6. The hyperspectral remote sensing image classification method based on transfer learning according to claim 1, wherein in the step (6), part of the modules in the model stored in the step (3) are used as the first two convolution modules, an optimal transfer strategy is selected according to the influence of each convolution module on the classification result in comparison and transfer, and in addition, when the part of the modules are used for extracting the shallow features of the hyperspectral image training sample set, the weight parameters of the modules are kept unchanged, wherein the weight parameters are stored in the step (3).
7. The hyperspectral remote sensing image classification method based on transfer learning according to claim 1, wherein the training of the remaining partial modules in the step (7) specifically comprises: and when the remaining three convolution modules are trained, a strategy of layer-by-layer training is adopted, and the last three convolution modules adopt different learning rates: 0.001, 0.0008, 0.0005; and (3) adopting a strategy of training layer by layer for the full-connection layer, but adopting a uniform learning rate: 0.001; and the layer-by-layer training strategy only carries out gradient back transmission on modules with unfixed parameters.
8. The hyperspectral remote sensing image classification method based on transfer learning of claim 1, wherein in the step (9), the prediction of the test sample set specifically comprises the following steps:
(91) inputting the test sample set into the hyperspectral image classification model obtained in the step (8) to obtain a prediction result of the test sample set;
(92) and obtaining the accuracy of the classification result by accurately calculating the classification label corresponding to the prediction result and the test sample set.
9. The hyperspectral remote sensing image classification method based on transfer learning according to claim 8, wherein in step (92), the formula for obtaining the accuracy of the classification result by accurately calculating the formula between the prediction result of the test sample set and the class label corresponding to the test sample set is as follows:
Figure FDA0002587512200000031
Figure FDA0002587512200000032
Figure FDA0002587512200000033
wherein
Figure FDA0002587512200000034
Wherein A isiThe proportion of the number of correctly classified samples of the ith class to the total number of samples of the ith class, N is the total number of samples to be classified, aiNumber of true samples of the ith type ground object, biTo predict the number of samples of the obtained i-th type of feature, i.e.
Figure FDA0002587512200000035
Figure FDA0002587512200000036
Evaluating the classification performance of the model by using three indexes of the overall classification precision OA, the average classification precision AA and the Kappa coefficient; the total classification precision is equal to the percentage of the number of correctly classified pixels in the test sample set to the total number of samples in the whole test sample set, the average precision is the average value of the classification precision of each class, and the Kappa coefficient is another index for evaluating the classification precision and is used for evaluating the consistency of the predicted classification result of the test sample set and the corresponding mark of the real test sample set; the ground objects to be classified in the hyperspectral test sample set have class C, and the number of the samples for correctly classifying the ith ground object into the ith ground object is recorded as niiThe number of samples in which the i-th class of feature is wrongly classified into the j-th class is nij
CN202010685827.6A 2020-07-16 2020-07-16 Hyperspectral remote sensing image classification method based on transfer learning Pending CN111914696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010685827.6A CN111914696A (en) 2020-07-16 2020-07-16 Hyperspectral remote sensing image classification method based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010685827.6A CN111914696A (en) 2020-07-16 2020-07-16 Hyperspectral remote sensing image classification method based on transfer learning

Publications (1)

Publication Number Publication Date
CN111914696A true CN111914696A (en) 2020-11-10

Family

ID=73280303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010685827.6A Pending CN111914696A (en) 2020-07-16 2020-07-16 Hyperspectral remote sensing image classification method based on transfer learning

Country Status (1)

Country Link
CN (1) CN111914696A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344891A (en) * 2018-09-21 2019-02-15 北京航空航天大学 A kind of high-spectrum remote sensing data classification method based on deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BIN PAN et al.: "CoinNet: Copy Initialization Network for Multispectral Imagery Semantic Segmentation", IEEE Geoscience and Remote Sensing Letters *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112798539A (en) * 2020-12-10 2021-05-14 青岛农业大学 Intelligent aflatoxin detection method based on transfer learning
CN112580670A (en) * 2020-12-31 2021-03-30 中国人民解放军国防科技大学 Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN112580670B (en) * 2020-12-31 2022-04-19 中国人民解放军国防科技大学 Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN112733775A (en) * 2021-01-18 2021-04-30 苏州大学 Hyperspectral image classification method based on deep learning
CN112784818A (en) * 2021-03-03 2021-05-11 电子科技大学 Identification method based on grouping type active learning on optical remote sensing image
CN113569660A (en) * 2021-07-06 2021-10-29 河海大学 Learning rate optimization algorithm discount coefficient method for hyperspectral image classification
CN113569660B (en) * 2021-07-06 2024-03-26 河海大学 Learning rate optimization algorithm discount coefficient method for hyperspectral image classification
CN113705580A (en) * 2021-08-31 2021-11-26 西安电子科技大学 Hyperspectral image classification method based on deep migration learning
CN113705580B (en) * 2021-08-31 2024-05-14 西安电子科技大学 Hyperspectral image classification method based on deep migration learning

Similar Documents

Publication Publication Date Title
CN111914696A (en) Hyperspectral remote sensing image classification method based on transfer learning
AU2020102885A4 (en) Disease recognition method of winter jujube based on deep convolutional neural network and disease image
CN105046276B (en) Hyperspectral image band selection method based on low-rank representation
Kumari et al. Hybridized approach of image segmentation in classification of fruit mango using BPNN and discriminant analyzer
CN111161362B (en) Spectral image identification method for growth state of tea tree
CN109615008B (en) Hyperspectral image classification method and system based on stack width learning
CN112949416B (en) Supervised hyperspectral multiscale graph volume integral classification method
CN111222545B (en) Image classification method based on linear programming incremental learning
WO2023201772A1 (en) Cross-domain remote sensing image semantic segmentation method based on adaptation and self-training in iteration domain
CN111598001A (en) Apple tree pest and disease identification method based on image processing
CN113469119A (en) Cervical cell image classification method based on visual converter and graph convolution network
CN110189305B (en) Automatic analysis method for multitasking tongue picture
CN110647932B (en) Planting crop structure remote sensing image classification method and device
CN117315381B (en) Hyperspectral image classification method based on second-order biased random walk
CN109190511A (en) Hyperspectral classification method based on part Yu structural constraint low-rank representation
CN116258914B (en) Remote Sensing Image Classification Method Based on Machine Learning and Local and Global Feature Fusion
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
Jenifa et al. Classification of cotton leaf disease using multi-support vector machine
CN116863345A (en) High-resolution image farmland recognition method based on dual attention and scale fusion
CN116630700A (en) Remote sensing image classification method based on introduction channel-space attention mechanism
CN118230166A (en) Corn canopy organ identification method and canopy phenotype detection method based on improved Mask2YOLO network
CN114266321A (en) Weak supervision fuzzy clustering algorithm based on unconstrained prior information mode
Chen et al. YOLOv8-CML: A lightweight target detection method for Color-changing melon ripening in intelligent agriculture
CN116704378A (en) Homeland mapping data classification method based on self-growing convolution neural network
Suwarningsih et al. Ide-cabe: chili varieties identification and classification system based leaf

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20201110)