CN114494762A - Hyperspectral image classification method based on deep migration network - Google Patents

Hyperspectral image classification method based on deep migration network Download PDF

Info

Publication number
CN114494762A
CN114494762A CN202111503748.XA CN202111503748A
Authority
CN
China
Prior art keywords
domain
data
layer
source domain
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111503748.XA
Other languages
Chinese (zh)
Other versions
CN114494762B (en)
Inventor
刘晓敏
桑顺
孙兴建
史珉
王浩宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202111503748.XA priority Critical patent/CN114494762B/en
Publication of CN114494762A publication Critical patent/CN114494762A/en
Application granted granted Critical
Publication of CN114494762B publication Critical patent/CN114494762B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image classification method based on a deep migration network. The second-order statistics of the two domains are aligned globally based on CORAL, adapting the global distributions of the two domains, and the first-order statistics of the related subdomains are aligned based on the local maximum mean discrepancy, adapting the local distributions of the two domains. The method can extract deep, discriminative features of the target domain and completes classification of unlabeled target-domain samples using only labeled source-domain samples.

Description

Hyperspectral image classification method based on deep migration network
Technical Field
The invention relates to a method for hyperspectral image classification using a deep migration network, and belongs to the field of pattern recognition.
Background
The hyperspectral image (HSI) contains rich spectral and spatial information and has broad application prospects in agriculture, climate monitoring, national defense and security, and other fields. HSI classification is a common task in these applications and aims to assign each pixel in an image to a category based on the spectral and spatial information obtained from ground-object detection. Researchers have proposed many methods to improve the accuracy of HSI classification, including random forests, support vector machines, and decision trees. Although these traditional hyperspectral image classification methods use simple models, most of them cannot guarantee classification accuracy.
In recent years, deep learning has achieved excellent performance on many tasks such as image processing, object detection, and natural language processing. Deep learning can learn features automatically, which allows it to adapt to various task scenarios, and its strong nonlinear representation capability enables it to extract deeper, more discriminative features from data. These advantages have allowed deep learning to be applied successfully to hyperspectral image classification. However, the strong feature expression capability of deep learning usually requires a large number of labeled training samples. With the development of a new generation of satellite hyperspectral sensors, large amounts of unlabeled HSIs can be acquired rapidly, yet labeling these images requires a great deal of time from relevant experts. The shortage of labeled samples therefore seriously limits the application of deep learning methods to hyperspectral classification tasks. To address this problem, many researchers combine active learning, data augmentation and other methods with deep learning to complete hyperspectral image classification with a small number of labeled samples. Although such methods can alleviate, to some extent, the shortage of training samples caused by the difficulty of labeling HSI, when the training data and test data come from similar but different domains, i.e., their data distributions are similar but not identical, it is difficult for them to obtain ideal results. Transfer learning can solve this problem well by exploring domain-invariant structures to transfer knowledge from a labeled domain (source domain) to a similar but differently distributed domain (target domain), thereby completing cross-domain classification. The deep migration learning model obtained by combining a deep network with transfer learning can learn deeper and more transferable features and has achieved breakthrough results on hyperspectral classification tasks. The reason is that the deep migration network can extract discriminative features containing invariant factors from the data and effectively group HSI features according to the correlation of these invariant factors.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects in the prior art, the invention provides a hyperspectral image classification method based on a deep migration network, which can complete the classification of unlabeled target-domain samples using only labeled source-domain samples.
The technical scheme is as follows: a hyperspectral image classification method based on a deep migration network comprises the following steps:
step 1, performing dimensionality reduction on an original hyperspectral image by using waveband selection: removing the redundancy of wave bands to obtain the hyperspectral data X after dimensionality reduction0Hyperspectral data X0Including reduced-dimension source domain data
Figure BDA0003403290440000021
And target domain data
Figure BDA0003403290440000022
Step 2, training an auxiliary classifier with labeled source-domain samples, and obtaining target-domain pseudo labels with the auxiliary classifier;
Step 3, constructing a deep migration neural network, using a CORAL-loss-based domain adaptation layer to adapt the source and target domains and reduce their second-order statistic difference, while reducing the first-order statistic difference of the related source and target subdomains based on LMMD;
and Step 4, classifying the target-domain data with the trained auxiliary classifier.
Further, in step 1 the original hyperspectral image is reduced in dimension by band selection, removing band redundancy to obtain the dimension-reduced hyperspectral data $X_0$; this specifically comprises:
defining the number of original HSI bands as $N_b$; at intervals of $\lfloor N_b/a \rfloor$ and $\lfloor N_b/b \rfloor$, selecting $a$ and $b$ bands respectively, the number of bands after dimensionality reduction being $d$, where $\lfloor \cdot \rfloor$ denotes the rounding-down operation, so that:
$$d = a + b$$
defining the dimension-reduced hyperspectral data $X_0 \in \mathbb{R}^{n \times d}$ as the input of the model, where $n$ is the number of samples; the data $X_0$ include the dimension-reduced source-domain data $X_0^s \in \mathbb{R}^{n_s \times d}$ with labels $Y_s$ and the target-domain data $X_0^t \in \mathbb{R}^{n_t \times d}$, where $n_s$ is the number of source-domain samples, $n_t$ is the number of target-domain samples, and $Y_s$ denotes the source-domain labels.
Further, in step 3 a deep migration neural network is constructed; a CORAL-based domain adaptation layer adapts the source and target domains to reduce their second-order statistic difference, while LMMD reduces the first-order statistic difference of the related source and target subdomains; this specifically comprises:
the deep migration network DTN comprises a full connection layer, a nonlinear layer, a domain adaptation layer and a Softmax layer; reducing the dimension of the source domain data
Figure BDA00034032904400000211
And target domain data
Figure BDA00034032904400000212
Input into deep migration network DTN via full connectionExtracting features by a feature extractor consisting of the layer and the nonlinear layer; wherein, the input of the full connection layer is as follows:
F1=I×W1+b1
wherein I is the full link layer input, b1Is an offset; connecting the fully-connected layer output as an input to a nonlinear layer, the nonlinear layer output being:
Figure BDA0003403290440000031
wherein, INInputting a nonlinear layer;
the domain adaptation layer is used for adapting the distribution difference of the two domains and then connecting the output of the domain adaptation layer to the Softmax layer; the loss function of the deep migration network DTN is defined as:
Figure BDA0003403290440000032
wherein,
Figure BDA0003403290440000033
the term is adapted for the covariance field,
Figure BDA0003403290440000034
in order to adapt the terms for the subspace,
Figure BDA0003403290440000035
for source domain data classification loss, α1And alpha2Respectively are variance field adaptive parameters and subspace adaptive parameters; the covariance field adaptation term can be expressed as:
Figure BDA0003403290440000036
wherein d is1Dimension for domain adaptation layer input, CsAnd CtRespectively representing a source domain data covariance matrix and a target domain data covariance matrix;
the subspace adaptation term may be expressed as:
Figure BDA0003403290440000037
wherein,
Figure BDA0003403290440000038
c ∈ {1, 2., C } is a category index for the target domain pseudo label obtained by the auxiliary classifier,
Figure BDA0003403290440000039
and
Figure BDA00034032904400000310
to represent
Figure BDA00034032904400000311
And
Figure BDA00034032904400000312
weights belonging to class c, then
Figure BDA00034032904400000313
Is a weighted sum of the c classes,
Figure BDA00034032904400000314
can be calculated as:
Figure BDA00034032904400000315
the source domain data classification penalty can be expressed as:
Figure BDA00034032904400000316
wherein C is the number of categories, Y represents a category matrix, and S is a prediction result of a DTN model of the deep migration network.
Beneficial effects: in the cross-domain hyperspectral image classification method based on the deep migration network, the second-order statistics of the two domains are aligned globally based on CORAL, adapting the global distributions of the two domains, while the first-order statistics of the related subdomains are aligned based on the local maximum mean discrepancy, adapting the local distributions of the two domains. The method can extract deep, discriminative features of the target domain and completes classification of unlabeled target-domain samples using only labeled source-domain samples.
Drawings
FIG. 1 is a schematic diagram of the process of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings.
A hyperspectral image classification method based on a deep migration network can complete classification of unlabeled target-domain samples using only labeled source-domain samples.
The technical scheme is as follows: a hyperspectral image classification method based on a deep migration network comprises the following steps:
step 1, performing dimensionality reduction on an original hyperspectral image by using waveband selection: removing the redundancy of wave bands to obtain the hyperspectral data X after dimensionality reduction0Hyperspectral data X0Including reduced-dimension source domain data
Figure BDA0003403290440000041
And target domain data
Figure BDA0003403290440000042
The number of original HSI bands is large and the correlation between bands is strong, so a large amount of redundant information exists between bands. Directly inputting the original HSI into the DTN would increase the number of network parameters and degrade model performance, so band selection is applied to reduce the dimensionality of the original HSI data. Define the number of original HSI bands as $N_b$; at intervals of $\lfloor N_b/a \rfloor$ and $\lfloor N_b/b \rfloor$, select $a$ and $b$ bands respectively, the number of bands after dimensionality reduction being $d$, where $\lfloor \cdot \rfloor$ denotes the rounding-down operation, so that:
$$d = a + b$$
Define the dimension-reduced hyperspectral data $X_0 \in \mathbb{R}^{n \times d}$ as the input of the model, where $n$ is the number of samples. The data $X_0$ include the dimension-reduced source-domain data $X_0^s \in \mathbb{R}^{n_s \times d}$ with labels $Y_s$ and the target-domain data $X_0^t \in \mathbb{R}^{n_t \times d}$, where $n_s$ is the number of source-domain samples, $n_t$ is the number of target-domain samples, and $Y_s$ denotes the source-domain labels.
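As an illustrative sketch only: the patent fixes the two interval sizes and the reduced dimension $d$, but not the exact picking rule, so the uniform-sampling rule and the function name below are assumptions.

```python
import numpy as np

def band_select(hsi, a, b):
    """Reduce an (n, N_b) hyperspectral matrix to d = a + b bands by taking
    a bands at interval floor(N_b / a) and b bands at interval floor(N_b / b).
    The uniform-sampling rule used here is an assumption for illustration."""
    n_b = hsi.shape[1]
    step_a, step_b = n_b // a, n_b // b   # the two interval sizes
    idx_a = np.arange(a) * step_a         # a evenly spaced band indices
    idx_b = np.arange(b) * step_b         # b evenly spaced band indices
    idx = np.concatenate([idx_a, idx_b])
    return hsi[:, idx]                    # X_0 with n rows and d = a + b columns
```

The same routine would be applied to the source-domain and target-domain data so that $X_0^s$ and $X_0^t$ share the same $d$ selected bands.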
Step 2, training an auxiliary classifier with labeled source-domain samples, and obtaining target-domain pseudo labels with the auxiliary classifier;
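The patent does not fix the form of the auxiliary classifier; as a hedged sketch, any classifier trained only on the labeled source samples could supply the target-domain pseudo labels, for example:

```python
from sklearn.linear_model import LogisticRegression

def train_auxiliary_and_pseudo_label(Xs0, Ys, Xt0):
    """Train an auxiliary classifier on labeled source data and use it to
    assign pseudo labels to the unlabeled target data (step 2).
    LogisticRegression is an illustrative stand-in, not the patent's choice."""
    aux = LogisticRegression(max_iter=1000).fit(Xs0, Ys)
    yt_pseudo = aux.predict(Xt0)  # target-domain pseudo labels
    return aux, yt_pseudo
```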
Step 3, constructing a deep migration neural network, reducing the second-order statistic difference of the two domains using a domain adaptation layer based on the CORAL (CORrelation ALignment) algorithm, and reducing the first-order statistic difference of the related source and target subdomains based on LMMD (Local Maximum Mean Discrepancy).
Deep neural networks are widely used for HSI classification due to their powerful deep feature extraction capability. However, when the training set and test set follow different data distributions, a deep neural network has difficulty learning transferable knowledge, which leaves the model with insufficient classification capability. To solve this problem, a deep migration network DTN is provided: a domain adaptation layer is added to the DNN, and the global second-order statistics and the first-order statistics of each class-related subdomain are aligned simultaneously on the deep, discriminative source-domain and target-domain features extracted by the DNN.
The deep migration network DTN is a feed-forward neural network comprising a fully-connected layer, a nonlinear layer, a domain adaptation layer and a Softmax layer. FC1 in FIG. 1 represents the domain adaptation layer, and FC2 represents the Softmax layer.
The dimension-reduced source-domain data $X_0^s$ and target-domain data $X_0^t$ are input into the deep migration network DTN, and features are extracted by a feature extractor composed of the fully connected layer and the nonlinear layer. The output of the fully connected layer is:
$$F_1 = I \times W_1 + b_1$$
where $I$ is the fully connected layer input, $W_1$ the weight matrix and $b_1$ the bias. The fully connected layer output is fed to the nonlinear layer, whose output is:
$$F_2 = \sigma(I_N)$$
where $I_N$ is the nonlinear-layer input and $\sigma(\cdot)$ denotes the activation function of the nonlinear layer.
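A minimal PyTorch sketch of this feature extractor; the hidden size and the use of ReLU for the nonlinearity $\sigma(\cdot)$ are assumptions, since the patent does not state them:

```python
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Fully connected layer F1 = I * W1 + b1 followed by a nonlinear layer."""
    def __init__(self, d_in, d_hidden=128):    # d_hidden is an assumed size
        super().__init__()
        self.fc = nn.Linear(d_in, d_hidden)    # computes I * W1 + b1
        self.act = nn.ReLU()                   # assumed nonlinearity sigma(.)

    def forward(self, x):
        return self.act(self.fc(x))            # F2 = sigma(F1)
```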
A domain adaptation layer is added to adapt the distribution difference between the two domains, and the output of the domain adaptation layer is then connected to the Softmax layer. The loss function of the deep migration network DTN is defined as:
$$\mathcal{L} = \mathcal{L}_{cls} + \alpha_1 \mathcal{L}_{CORAL} + \alpha_2 \mathcal{L}_{LMMD}$$
where $\mathcal{L}_{CORAL}$ is the covariance domain adaptation term, $\mathcal{L}_{LMMD}$ is the subspace adaptation term, $\mathcal{L}_{cls}$ is the source-domain data classification loss, and $\alpha_1$ and $\alpha_2$ are the covariance domain adaptation parameter and the subspace adaptation parameter, respectively.
The covariance domain adaptation term can be expressed as:
$$\mathcal{L}_{CORAL} = \frac{1}{4d_1^2}\left\lVert C_s - C_t \right\rVert_F^2$$
where $d_1$ is the dimension of the domain adaptation layer input, and $C_s$ and $C_t$ denote the source-domain and target-domain data covariance matrices, respectively.
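A sketch of this CORAL term, assuming $C_s$ and $C_t$ are estimated from mini-batch features $z_s$ and $z_t$ taken at the domain adaptation layer (torch tensors of shape $(n_s, d_1)$ and $(n_t, d_1)$):

```python
import torch

def coral_loss(zs, zt):
    """CORAL term: ||C_s - C_t||_F^2 / (4 * d1^2), with C_s and C_t the feature
    covariance matrices of the source and target batches."""
    d1 = zs.size(1)
    zs_c = zs - zs.mean(dim=0, keepdim=True)       # center source features
    zt_c = zt - zt.mean(dim=0, keepdim=True)       # center target features
    cs = zs_c.t() @ zs_c / (zs.size(0) - 1)        # source covariance C_s
    ct = zt_c.t() @ zt_c / (zt.size(0) - 1)        # target covariance C_t
    return ((cs - ct) ** 2).sum() / (4 * d1 * d1)  # squared Frobenius norm
```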
The subspace adaptation term can be expressed as:
$$\mathcal{L}_{LMMD} = \frac{1}{C}\sum_{c=1}^{C}\left\lVert \sum_{i=1}^{n_s} w_i^{sc}\,\phi(x_i^s) - \sum_{j=1}^{n_t} w_j^{tc}\,\phi(x_j^t) \right\rVert_{\mathcal{H}}^2$$
where $\hat{y}_j^t$ is the target-domain pseudo label obtained by the auxiliary classifier, $c \in \{1, 2, \ldots, C\}$ is the category index, $\phi(\cdot)$ is the feature map of the reproducing kernel Hilbert space $\mathcal{H}$, and $w_i^{sc}$ and $w_j^{tc}$ denote the weights of $x_i^s$ and $x_j^t$ belonging to class $c$, so that $\sum_i w_i^{sc}\phi(x_i^s)$ is a weighted sum over class $c$. The weight $w_i^c$ can be calculated as:
$$w_i^c = \frac{y_{ic}}{\sum_{(x_j, y_j)} y_{jc}}$$
where $y_{ic}$ is the $c$-th entry of the label vector of sample $i$.
The source-domain data classification loss can be expressed as:
$$\mathcal{L}_{cls} = -\frac{1}{n_s}\sum_{i=1}^{n_s}\sum_{c=1}^{C} Y_{ic}\log S_{ic}$$
where $C$ is the number of categories, $Y$ denotes the category matrix, and $S$ is the prediction result of the deep migration network DTN model.
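Combining the three terms into the DTN objective of step 3, reusing coral_loss and lmmd_linear from the sketches above; the cross-entropy implementation and the default values of $\alpha_1$ and $\alpha_2$ are placeholders, not values given in the patent.

```python
import torch.nn.functional as F

def dtn_loss(logits_s, ys, zs, zt, ys_onehot, yt_pseudo_onehot,
             alpha1=1.0, alpha2=1.0):
    """L = L_cls + alpha1 * L_CORAL + alpha2 * L_LMMD, with zs and zt the
    domain-adaptation-layer features and logits_s the source predictions."""
    l_cls = F.cross_entropy(logits_s, ys)           # source classification loss
    l_coral = coral_loss(zs, zt)                    # global second-order alignment
    l_lmmd = lmmd_linear(zs, ys_onehot, zt, yt_pseudo_onehot)  # subdomain alignment
    return l_cls + alpha1 * l_coral + alpha2 * l_lmmd
```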
Step 4: classifying the target-domain data with the trained auxiliary classifier.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (3)

1. A hyperspectral image classification method based on a deep migration network, characterized by comprising the following steps:
step 1, performing dimensionality reduction on the original hyperspectral image by band selection: removing band redundancy to obtain the dimension-reduced hyperspectral data $X_0$, where the hyperspectral data $X_0$ include the dimension-reduced source-domain data $X_0^s$ and target-domain data $X_0^t$;
step 2, training an auxiliary classifier with labeled source-domain samples, and obtaining target-domain pseudo labels with the auxiliary classifier;
step 3, constructing a deep migration neural network, using a CORAL-loss-based domain adaptation layer to adapt the source and target domains and reduce their second-order statistic difference, while reducing the first-order statistic difference of the related source and target subdomains based on LMMD;
and step 4, classifying the target-domain data with the trained auxiliary classifier.
2. The hyperspectral image classification method based on the deep migration network according to claim 1, characterized in that in step 1 the original hyperspectral image is reduced in dimension by band selection, removing band redundancy to obtain the dimension-reduced hyperspectral data $X_0$; this specifically comprises:
defining the number of original HSI bands as $N_b$; at intervals of $\lfloor N_b/a \rfloor$ and $\lfloor N_b/b \rfloor$, selecting $a$ and $b$ bands respectively, the number of bands after dimensionality reduction being $d$, where $\lfloor \cdot \rfloor$ denotes the rounding-down operation, so that:
$$d = a + b$$
defining the dimension-reduced hyperspectral data $X_0 \in \mathbb{R}^{n \times d}$ as the input of the model, $n$ being the number of samples; the data $X_0$ include the dimension-reduced source-domain data $X_0^s \in \mathbb{R}^{n_s \times d}$ with labels $Y_s$ and the target-domain data $X_0^t \in \mathbb{R}^{n_t \times d}$, where $n_s$ is the number of source-domain samples, $n_t$ is the number of target-domain samples, and $Y_s$ denotes the source-domain labels.
3. The hyperspectral image classification method based on the deep migration network according to claim 1, characterized in that in step 3 a deep migration neural network is constructed, a CORAL-based domain adaptation layer adapts the source and target domains to reduce their second-order statistic difference, and LMMD reduces the first-order statistic difference of the related source and target subdomains; this specifically comprises:
the deep migration network DTN comprises a fully connected layer, a nonlinear layer, a domain adaptation layer and a Softmax layer; the dimension-reduced source-domain data $X_0^s$ and target-domain data $X_0^t$ are input into the deep migration network DTN, and features are extracted by a feature extractor composed of the fully connected layer and the nonlinear layer; the output of the fully connected layer is:
$$F_1 = I \times W_1 + b_1$$
where $I$ is the fully connected layer input, $W_1$ the weight matrix and $b_1$ the bias; the fully connected layer output is fed to the nonlinear layer, whose output is:
$$F_2 = \sigma(I_N)$$
where $I_N$ is the nonlinear-layer input and $\sigma(\cdot)$ denotes the activation function of the nonlinear layer;
the domain adaptation layer is used to adapt the distribution difference between the two domains, and the output of the domain adaptation layer is then connected to the Softmax layer; the loss function of the deep migration network DTN is defined as:
$$\mathcal{L} = \mathcal{L}_{cls} + \alpha_1 \mathcal{L}_{CORAL} + \alpha_2 \mathcal{L}_{LMMD}$$
where $\mathcal{L}_{CORAL}$ is the covariance domain adaptation term, $\mathcal{L}_{LMMD}$ is the subspace adaptation term, $\mathcal{L}_{cls}$ is the source-domain data classification loss, and $\alpha_1$ and $\alpha_2$ are the covariance domain adaptation parameter and the subspace adaptation parameter, respectively; the covariance domain adaptation term can be expressed as:
$$\mathcal{L}_{CORAL} = \frac{1}{4d_1^2}\left\lVert C_s - C_t \right\rVert_F^2$$
where $d_1$ is the dimension of the domain adaptation layer input, and $C_s$ and $C_t$ denote the source-domain and target-domain data covariance matrices, respectively;
the subspace adaptation term can be expressed as:
$$\mathcal{L}_{LMMD} = \frac{1}{C}\sum_{c=1}^{C}\left\lVert \sum_{i=1}^{n_s} w_i^{sc}\,\phi(x_i^s) - \sum_{j=1}^{n_t} w_j^{tc}\,\phi(x_j^t) \right\rVert_{\mathcal{H}}^2$$
where $\hat{y}_j^t$ is the target-domain pseudo label obtained by the auxiliary classifier, $c \in \{1, 2, \ldots, C\}$ is the category index, $\phi(\cdot)$ is the feature map of the reproducing kernel Hilbert space $\mathcal{H}$, and $w_i^{sc}$ and $w_j^{tc}$ denote the weights of $x_i^s$ and $x_j^t$ belonging to class $c$, so that $\sum_i w_i^{sc}\phi(x_i^s)$ is a weighted sum over class $c$; the weight $w_i^c$ can be calculated as:
$$w_i^c = \frac{y_{ic}}{\sum_{(x_j, y_j)} y_{jc}}$$
where $y_{ic}$ is the $c$-th entry of the label vector of sample $i$;
the source-domain data classification loss can be expressed as:
$$\mathcal{L}_{cls} = -\frac{1}{n_s}\sum_{i=1}^{n_s}\sum_{c=1}^{C} Y_{ic}\log S_{ic}$$
where $C$ is the number of categories, $Y$ denotes the category matrix, and $S$ is the prediction result of the deep migration network DTN model.
CN202111503748.XA 2021-12-10 2021-12-10 Hyperspectral image classification method based on depth migration network Active CN114494762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111503748.XA CN114494762B (en) 2021-12-10 2021-12-10 Hyperspectral image classification method based on depth migration network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111503748.XA CN114494762B (en) 2021-12-10 2021-12-10 Hyperspectral image classification method based on depth migration network

Publications (2)

Publication Number Publication Date
CN114494762A true CN114494762A (en) 2022-05-13
CN114494762B CN114494762B (en) 2024-09-24

Family

ID=81492795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111503748.XA Active CN114494762B (en) 2021-12-10 2021-12-10 Hyperspectral image classification method based on depth migration network

Country Status (1)

Country Link
CN (1) CN114494762B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188830A (en) * 2022-11-01 2023-05-30 青岛柯锐思德电子科技有限公司 Hyperspectral image cross-domain classification method based on multi-level feature alignment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359623A (en) * 2018-11-13 2019-02-19 西北工业大学 High spectrum image based on depth Joint Distribution adaptation network migrates classification method
KR102197297B1 (en) * 2019-09-27 2020-12-31 서울대학교산학협력단 Change detection method using recurrent 3-dimensional fully convolutional network for hyperspectral image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359623A (en) * 2018-11-13 2019-02-19 西北工业大学 High spectrum image based on depth Joint Distribution adaptation network migrates classification method
KR102197297B1 (en) * 2019-09-27 2020-12-31 서울대학교산학협력단 Change detection method using recurrent 3-dimensional fully convolutional network for hyperspectral image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
毛远宏; 贺占庄; 马钟: "Infrared target classification via reconstruction transfer learning", Journal of University of Electronic Science and Technology of China, no. 04, 30 July 2020 (2020-07-30) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188830A (en) * 2022-11-01 2023-05-30 青岛柯锐思德电子科技有限公司 Hyperspectral image cross-domain classification method based on multi-level feature alignment
CN116188830B (en) * 2022-11-01 2023-09-29 青岛柯锐思德电子科技有限公司 Hyperspectral image cross-domain classification method based on multi-level feature alignment

Also Published As

Publication number Publication date
CN114494762B (en) 2024-09-24

Similar Documents

Publication Publication Date Title
CN111414942B (en) Remote sensing image classification method based on active learning and convolutional neural network
CN111695467B (en) Spatial spectrum full convolution hyperspectral image classification method based on super-pixel sample expansion
CN106203523B (en) The hyperspectral image classification method of the semi-supervised algorithm fusion of decision tree is promoted based on gradient
CN111274869B (en) Method for classifying hyperspectral images based on parallel attention mechanism residual error network
CN111598001B (en) Identification method for apple tree diseases and insect pests based on image processing
CN112241762B (en) Fine-grained identification method for pest and disease damage image classification
CN111695456B (en) Low-resolution face recognition method based on active discriminant cross-domain alignment
CN111401426B (en) Small sample hyperspectral image classification method based on pseudo label learning
CN105205449A (en) Sign language recognition method based on deep learning
CN110175248B (en) Face image retrieval method and device based on deep learning and Hash coding
Akhand et al. Convolutional Neural Network based Handwritten Bengali and Bengali-English Mixed Numeral Recognition.
CN113947725B (en) Hyperspectral image classification method based on convolution width migration network
CN112069900A (en) Bill character recognition method and system based on convolutional neural network
Akhand et al. Convolutional neural network training with artificial pattern for bangla handwritten numeral recognition
CN114972904B (en) Zero sample knowledge distillation method and system based on fighting against triplet loss
WO2021128704A1 (en) Open set classification method based on classification utility
Sethy et al. Off-line Odia handwritten numeral recognition using neural network: a comparative analysis
CN107633264B (en) Linear consensus integrated fusion classification method based on space spectrum multi-feature extreme learning
CN114494762A (en) Hyperspectral image classification method based on deep migration network
CN116596891B (en) Wood floor color classification and defect detection method based on semi-supervised multitasking detection
CN114049567B (en) Adaptive soft label generation method and application in hyperspectral image classification
Kumar et al. Siamese based Neural Network for Offline Writer Identification on word level data
CN113723456B (en) Automatic astronomical image classification method and system based on unsupervised machine learning
CN103530658B (en) A kind of plant leaf blade data recognition methods based on rarefaction representation
CN114519787B (en) Depth feature visualization interpretation method of intelligent handwriting evaluation system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant