CN113298143A - Ground-based cloud robust classification method - Google Patents


Info

Publication number
CN113298143A
Authority
CN
China
Prior art keywords: convolutional neural, vector, cloud, coefficient, class
Legal status: Granted
Application number: CN202110565204.XA
Other languages: Chinese (zh)
Other versions: CN113298143B (en)
Inventors: 唐明, 于爱华, 侯北平, 朱文, 李刚, 朱广信, 杨舒捷, 朱必宏, 宣仲伟, 宣皓莹
Current Assignee: Zhejiang Lover Health Science and Technology Development Co Ltd
Original Assignee: Zhejiang Lover Health Science and Technology Development Co Ltd
Application filed by Zhejiang Lover Health Science and Technology Development Co Ltd
Priority to CN202110565204.XA
Publication of CN113298143A
Application granted
Publication of CN113298143B
Status: Active

Classifications

    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a ground-based cloud robust classification method comprising two parts: the first part performs feature extraction and converts an image into a feature vector y; the second part classifies the input feature vector y. The method fuses convolutional neural network features by weighted sparse representation: the features extracted by two convolutional neural networks serve as the dictionary of the weighted sparse representation, which improves operating efficiency; the weighted sparse representation classification improves the robustness of the system under occlusion, and fusing the two convolutional neural networks achieves better performance than either single convolutional neural network.

Description

Ground-based cloud robust classification method
Technical Field
The invention belongs to the technical field of ground-based cloud image classification, and particularly relates to a ground-based cloud robust classification method.
Background
Cloud is an important weather phenomenon, and reliable cloud observation technology is significant for climate research, weather analysis and weather forecasting. Ground-based cloud observation is an important mode of cloud observation: it reflects the microstructure of clouds and compensates for the shortcomings of satellite observation, and making full use of ground-based observation information provides more comprehensive data for cloud-observation applications. Within ground-based cloud observation, ground-based cloud image classification is the key to automating the task; applying it not only frees observers from heavy observation work but also improves the accuracy and timeliness of cloud observation, so ground-based cloud image classification is of great significance.
In recent decades, methods for classifying ground-based cloud images have been studied extensively. Traditional cloud classification relies on expert experience: it is unreliable and time-consuming, depends to some extent on the operator's experience, and therefore carries uncertainty and bias in the classification results; in addition, the cost of human visual observation has steadily increased.
The ground-based cloud image is a natural texture image that has attracted great attention in the computer vision field in recent years, and deep learning is increasingly applied to its analysis and recognition. Applying convolutional neural networks to ground-based cloud recognition avoids the complex preprocessing required at the early stage of conventional image processing: the local receptive field means each neuron perceives only a local region rather than the whole image, and the perceived information is integrated in the deep layers of the network to obtain global information; the weight-sharing strategy is closer to the behaviour of a biological neural network, greatly reduces the number of weight parameters, and lowers the computational complexity of the whole processing pipeline. A conventional convolutional neural network achieves a high recognition rate in cloud classification, but its robustness is poor when the cloud images are occluded.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the invention provides a ground-based cloud robust classification method based on the fusion of convolutional neural network features by weighted sparse representation.
The technical scheme adopted by the invention is as follows:
a foundation cloud robust classification method comprises two parts, wherein the first part is used for feature extraction and converts an image into a feature vector y; the second part is used for classifying the input feature vector y; the method specifically comprises the following steps:
(1) the training samples are respectively processed by the two convolutional neural networks to obtain the features y1 ∈ R^(n1×1) and y2 ∈ R^(n2×1), where n1 denotes the dimension of the features obtained by the first convolutional neural network, n2 denotes the dimension of the features obtained by the second convolutional neural network, and R denotes the real number space; the two feature vectors are stacked to obtain the total feature vector y:

y = [y1; y2] ∈ R^((n1+n2)×1);
(2) let n1 + n2 = 2n; the 2n-dimensional feature vector is converted into an n-dimensional feature vector by a projection composed of two projections Pi and Pe:

z = Pe(Pi(y)),

where Pi(y) denotes the vector obtained after y passes through the projection Pi, and z denotes the vector obtained after Pi(y) passes through the projection Pe; the two projections Pi and Pe are determined from the training samples Y = [Y1 … Yk … YK], Yk ∈ R^(2n×mk), where mk indicates that the k-th class of training samples contains mk pictures, k = 1,2,…,K;
(3) compute the mean value vk and the variance vector Vk of Yk:

vk(i) = (1/mk) Σj Yk(i,j),
Vk(i) = (1/mk) Σj (Yk(i,j) − vk(i))^2, i = 1,2,…,2n,

where the sums run over j = 1,…,mk and Yk(j) denotes the j-th column of the matrix Yk; define V^i as the variance vector aggregated over the K classes, and let {V^i(jp)} be the set of the 1.5n minimum items of V^i, jp < jp+1, p = 1,2,…,1.5n−1, which gives

Pi(y) = [y(j1), y(j2), …, y(j1.5n)]^T;

thus Pi(Yk), k = 1,2,…,K, denotes the matrix obtained after Yk passes through the projection Pi;
(4) compute the mean value v* and the variance vector V^e of the matrix of class means [v1 … vK] ∈ R^(2n×K), restricted to the coordinates retained by Pi, i = 1,2,…,1.5n; let {V^e(jp)} be the set of the n maximum items of V^e, jp < jp+1, p = 1,2,…,n−1, which gives

Pe(ŷ) = [ŷ(j1), ŷ(j2), …, ŷ(jn)]^T, where ŷ = Pi(y);
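By way of illustration of steps (3) and (4), the sketch below realises Pi and Pe as coordinate-selection operators; aggregating the per-class variance vectors by a plain sum, and measuring between-class variability on the matrix of class means, are assumptions of the example, since the corresponding formulas appear only as images in the published text. The names build_projections, idx_i and idx_e are the example's own.

```python
import numpy as np

def build_projections(Y_per_class, n):
    """Y_per_class: list of K arrays, each 2n x m_k (training features of one class).
    Returns index arrays idx_i (length 1.5n) and idx_e (length n) realising P_i and P_e."""
    means = [Yk.mean(axis=1) for Yk in Y_per_class]            # class means v_k
    variances = [Yk.var(axis=1) for Yk in Y_per_class]         # variance vectors V_k, i = 1..2n
    within = np.sum(variances, axis=0)                         # assumed aggregation over classes
    idx_i = np.sort(np.argsort(within)[: int(1.5 * n)])        # P_i: keep the 1.5n smallest items
    M = np.stack([v[idx_i] for v in means], axis=1)            # projected class means, 1.5n x K
    between = M.var(axis=1)                                    # variance across the K classes
    idx_e = np.sort(np.argsort(between)[-n:])                  # P_e: keep the n largest items
    return idx_i, idx_e

def project(y, idx_i, idx_e):
    return y[idx_i][idx_e]          # z = P_e(P_i(y)), an n-dimensional vector
```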
(5) for all the mk training samples of class k, the dictionary Dk of class k is expressed as:

Dk = Pe(Pi(Yk)), k = 1,2,…,K,

and the whole dictionary D ∈ R^(n×m), with m = m1 + m2 + … + mK, called the extended dictionary, is composed of the {Dk}:

D = [D1 … Dk … DK];
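A minimal sketch of the dictionary construction of step (5), reusing the index sets produced above; the column normalisation is a common convention in sparse-representation classification and is an addition of the example, not a requirement of the description.

```python
import numpy as np

def build_dictionary(Y_per_class, idx_i, idx_e):
    """D_k = P_e(P_i(Y_k)); the extended dictionary D = [D_1 ... D_K] is n x m, m = sum of m_k."""
    blocks = [Yk[idx_i, :][idx_e, :] for Yk in Y_per_class]
    D = np.concatenate(blocks, axis=1)
    D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)   # column normalisation (assumption)
    class_sizes = [b.shape[1] for b in blocks]                   # m_k for each class
    return D, class_sizes
```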
(6) the robust classification formula based on the optimal sparse representation is:

â = argmin_a ||W(z − Da)||2^2 + λ||a||1,

where a denotes the sparse coefficient, â denotes the optimized sparse coefficient, λ denotes the regularization coefficient, and W denotes a diagonal weighting matrix; the optimal weighting matrix Ŵ is obtained by iteration;
(7) with the obtained â and Ŵ, compute δk:

δk = Ŵ(z − Dk âk), k = 1,2,…,K,

where âk denotes the entries of â associated with class k and δk represents the weighted error between the test picture and each class; then

gk = ||δk||2,

where gk represents the weighted distance between the test picture and each class;
(8) the final test sample z is classified by the following formula:

identity(z) = argmin_k gk.
Preferably, in step (6), the optimal weighting matrix Ŵ is estimated iteratively, each iteration updating it from the previous estimate â^(l−1) of the sparse coefficient, specifically as follows:

the l-th iteration obtains the deviation

e^(l) = z − D â^(l−1);

the l-th iteration then obtains the weight matrix W^(l), a diagonal matrix whose i-th diagonal entry is a decreasing function of the squared deviation (e_i^(l))^2, where β denotes the decreasing-rate coefficient and φ denotes the coefficient controlling the position of the demarcation point;

â^(l) is subsequently updated by solving the formula of step (6) with W = W^(l), the iteration stopping when the change of the weight matrix between successive iterations, measured in the F norm ||·||F, is sufficiently small.
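A sketch of the iteration described above, assuming the logistic weight function familiar from robust sparse coding (the exact weight formula is given only as an image in the published text) and solving each weighted l1 sub-problem with scikit-learn's Lasso; lam, beta, phi and n_iter are illustrative values, not parameters taken from the description.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_sparse_code(z, D, lam=0.01, beta=8.0, phi=1.0, n_iter=5):
    """Iteratively reweighted sparse representation (sketch).
    The logistic weight, with beta the decreasing-rate coefficient and phi fixing
    the demarcation point, is borrowed from robust sparse coding (an assumption)."""
    n = z.shape[0]
    a = np.zeros(D.shape[1])
    w = np.ones(n)
    for _ in range(n_iter):
        e = z - D @ a                                  # deviation e^(l) of the current iterate
        e2 = e ** 2 / max(np.mean(e ** 2), 1e-12)      # normalised squared deviation (assumption)
        w = 1.0 / (1.0 + np.exp(beta * (e2 - phi)))    # weights shrink on occluded coordinates
        # solve  min_a ||W(z - D a)||_2^2 + lam ||a||_1  as an ordinary Lasso problem
        lasso = Lasso(alpha=lam / (2 * n), fit_intercept=False, max_iter=5000)
        lasso.fit(D * w[:, None], z * w)
        a = lasso.coef_
    return a, w
```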
The invention has the following beneficial effects:
The invention fuses convolutional neural network features for cloud classification by weighted sparse representation: the features extracted by two convolutional neural networks (Inception-v3 and ResNet-50) serve as the dictionary of the weighted sparse representation, which improves operating efficiency; the weighted sparse representation classification improves the robustness of the system under occlusion, and fusing the two convolutional neural networks achieves better performance than either single convolutional neural network.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a test image without noise (0%);
FIG. 3 is a test image after random noise (5%-25%) has been added.
Detailed Description
The technical solutions of the present invention are further specifically described below by examples, which are for illustration of the present invention and are not intended to limit the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to FIG. 1, a ground-based cloud robust classification method comprises two parts: the first part performs feature extraction and converts an image into a feature vector y; the second part classifies the input feature vector y. The method specifically comprises the following steps:
(1) the training samples are respectively processed by two convolutional neural networks (Inception-v3 and ResNet-50) to obtain the features y1 ∈ R^(n1×1) and y2 ∈ R^(n2×1), where n1 denotes the dimension of the features obtained by the first convolutional neural network, n2 denotes the dimension of the features obtained by the second convolutional neural network, and R denotes the real number space; the two feature vectors are stacked to obtain the total feature vector y:

y = [y1; y2] ∈ R^((n1+n2)×1);
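As a concrete illustration of this step, the sketch below extracts and concatenates the two features with Keras implementations of the backbones named above; the input resolutions (299 × 299 and 224 × 224), the ImageNet weights and the global-average-pooling outputs, which give n1 = n2 = 2048 and hence 2n = 4096, are assumptions of the example rather than values fixed by the description.

```python
import numpy as np
import tensorflow as tf

incep  = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg")
resnet = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_feature(img):
    """img: H x W x 3 array of a ground-based cloud picture, values in [0, 255]."""
    x1 = tf.keras.applications.inception_v3.preprocess_input(
        tf.image.resize(img, (299, 299))[tf.newaxis, ...])
    x2 = tf.keras.applications.resnet50.preprocess_input(
        tf.image.resize(img, (224, 224))[tf.newaxis, ...])
    y1 = incep(x1, training=False).numpy().ravel()    # y1, n1 = 2048
    y2 = resnet(x2, training=False).numpy().ravel()   # y2, n2 = 2048
    return np.concatenate([y1, y2])                   # total feature y, dimension 2n = 4096
```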
(2) let n1 + n2 = 2n; the 2n-dimensional feature vector is converted into an n-dimensional feature vector by a projection composed of two projections Pi and Pe:

z = Pe(Pi(y)),

where Pi(y) denotes the vector obtained after y passes through the projection Pi, and z denotes the vector obtained after Pi(y) passes through the projection Pe; the two projections Pi and Pe are determined from the training samples Y = [Y1 … Yk … YK], Yk ∈ R^(2n×mk), where mk indicates that the k-th class of training samples contains mk pictures, k = 1,2,…,K;
(3) compute the mean value vk and the variance vector Vk of Yk:

vk(i) = (1/mk) Σj Yk(i,j),
Vk(i) = (1/mk) Σj (Yk(i,j) − vk(i))^2, i = 1,2,…,2n,

where the sums run over j = 1,…,mk and Yk(j) denotes the j-th column of the matrix Yk; define V^i as the variance vector aggregated over the K classes, and let {V^i(jp)} be the set of the 1.5n minimum items of V^i, jp < jp+1, p = 1,2,…,1.5n−1, which gives

Pi(y) = [y(j1), y(j2), …, y(j1.5n)]^T;

thus Pi(Yk), k = 1,2,…,K, denotes the matrix obtained after Yk passes through the projection Pi;
(4) compute the mean value v* and the variance vector V^e of the matrix of class means [v1 … vK] ∈ R^(2n×K), restricted to the coordinates retained by Pi, i = 1,2,…,1.5n; let {V^e(jp)} be the set of the n maximum items of V^e, jp < jp+1, p = 1,2,…,n−1, which gives

Pe(ŷ) = [ŷ(j1), ŷ(j2), …, ŷ(jn)]^T, where ŷ = Pi(y);
(5) for all the mk training samples of class k, the dictionary Dk of class k is expressed as:

Dk = Pe(Pi(Yk)), k = 1,2,…,K,

and the whole dictionary D ∈ R^(n×m), with m = m1 + m2 + … + mK, called the extended dictionary, is composed of the {Dk}:

D = [D1 … Dk … DK];
(6) the robust classification formula based on the optimal sparse representation is:

â = argmin_a ||W(z − Da)||2^2 + λ||a||1,

where a denotes the sparse coefficient, â denotes the optimized sparse coefficient, λ denotes the regularization coefficient, and W denotes a diagonal weighting matrix; the optimal weighting matrix Ŵ is obtained by iteration;
Ŵ is estimated iteratively, each iteration updating it from the previous estimate â^(l−1) of the sparse coefficient, specifically as follows:

the l-th iteration obtains the deviation

e^(l) = z − D â^(l−1);

the l-th iteration then obtains the weight matrix W^(l), a diagonal matrix whose i-th diagonal entry is a decreasing function of the squared deviation (e_i^(l))^2, where β denotes the decreasing-rate coefficient and φ denotes the coefficient controlling the position of the demarcation point;

â^(l) is subsequently updated by solving the formula of step (6) with W = W^(l), the iteration stopping when the change of the weight matrix between successive iterations, measured in the F norm ||·||F, is sufficiently small;
(7) with the obtained â and Ŵ, compute δk:

δk = Ŵ(z − Dk âk), k = 1,2,…,K,

where âk denotes the entries of â associated with class k and δk represents the weighted error between the test picture and each class; then

gk = ||δk||2,

where gk represents the weighted distance between the test picture and each class;
(8) the final test sample z is classified by the following formula:

identity(z) = argmin_k gk.
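Steps (7) and (8) reduce to computing a weighted residual per class and taking the smallest one; a sketch, assuming the class-wise coefficients âk are obtained by zeroing every entry of â outside class k:

```python
import numpy as np

def classify(z, D, class_sizes, a_hat, w_hat):
    """Steps (7)-(8): weighted class-wise errors delta_k, distances g_k and the decision."""
    g = []
    start = 0
    for m_k in class_sizes:
        a_k = np.zeros_like(a_hat)
        a_k[start:start + m_k] = a_hat[start:start + m_k]   # coefficients belonging to class k
        delta_k = w_hat * (z - D @ a_k)                     # delta_k = W_hat (z - D_k a_hat_k)
        g.append(np.linalg.norm(delta_k))                   # g_k = ||delta_k||_2
        start += m_k
    return int(np.argmin(g))                                # class with the smallest weighted distance
```

Combined with the earlier sketches, the whole pipeline is: extract the feature, project it with Pi and Pe, run the weighted sparse coding against D, and take the class with the smallest gk.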
TABLE 1. Recognition rate (%) under different levels of random occlusion noise

Method                        0%      5%      10%     15%     20%     25%
Inception-v3                  96.97   84.03   83.18   82.24   79.43   77.52
ResNet-50                     97.09   90.47   89.68   83.99   78.77   75.15
The method of the invention   99.81   99.37   98.87   98.06   95.53   90.28
In order to verify the effectiveness and robustness of the proposed method, experiments were carried out on the MGCD data set, which contains 7 classes of pictures; several random occlusion noises were added to the test pictures, whose size is 1024 x 1024. The recognition rates of the neural networks and of the proposed method were compared under random noise levels of 0%-25%: the test image without noise (0%) is shown in FIG. 2, and the test image after random noise (5%-25%) is added is shown in FIG. 3. The experimental results are shown in Table 1: without added noise the recognition rates of the proposed method and of the neural networks differ little, but as the noise increases the recognition rate of the neural networks drops rapidly while that of the proposed method decreases slowly, which indicates that the robustness of the proposed method is stronger than that of a deep neural network.
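The description does not state how the random occlusion noise of FIG. 3 is generated; the sketch below is one plausible reading, in which the percentage denotes the fraction of the 1024 x 1024 image area covered by randomly placed blocks.

```python
import numpy as np

def add_random_occlusion(img, ratio=0.10, block=64, seed=None):
    """Cover roughly `ratio` of the image area with randomly placed blocks of noise (sketch)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    covered, target = 0, ratio * h * w
    while covered < target:
        r = rng.integers(0, h - block)
        c = rng.integers(0, w - block)
        out[r:r + block, c:c + block] = rng.integers(0, 256, size=(block, block) + img.shape[2:])
        covered += block * block
    return out
```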

Claims (2)

1. A ground-based cloud robust classification method, characterized by comprising two parts, wherein the first part is used for feature extraction and converts an image into a feature vector y, and the second part is used for classifying the input feature vector y; the method specifically comprises the following steps:
(1) the training samples are respectively processed by two convolutional neural networks to obtain the features y1 ∈ R^(n1×1) and y2 ∈ R^(n2×1), where n1 denotes the dimension of the features obtained by the first convolutional neural network, n2 denotes the dimension of the features obtained by the second convolutional neural network, and R denotes the real number space; the two feature vectors are stacked to obtain the total feature vector y:

y = [y1; y2] ∈ R^((n1+n2)×1);
(2) let n1 + n2 = 2n; the 2n-dimensional feature vector is converted into an n-dimensional feature vector by a projection composed of two projections Pi and Pe:

z = Pe(Pi(y)),

where Pi(y) denotes the vector obtained after y passes through the projection Pi, and z denotes the vector obtained after Pi(y) passes through the projection Pe; the two projections Pi and Pe are determined from the training samples Y = [Y1 … Yk … YK], Yk ∈ R^(2n×mk), where mk indicates that the k-th class of training samples contains mk pictures, k = 1,2,…,K;
(3) compute the mean value vk and the variance vector Vk of Yk:

vk(i) = (1/mk) Σj Yk(i,j),
Vk(i) = (1/mk) Σj (Yk(i,j) − vk(i))^2, i = 1,2,…,2n,

where the sums run over j = 1,…,mk and Yk(j) denotes the j-th column of the matrix Yk; define V^i as the variance vector aggregated over the K classes, and let {V^i(jp)} be the set of the 1.5n minimum items of V^i, jp < jp+1, p = 1,2,…,1.5n−1, which gives

Pi(y) = [y(j1), y(j2), …, y(j1.5n)]^T;

thus Pi(Yk), k = 1,2,…,K, denotes the matrix obtained after Yk passes through the projection Pi;
(4) compute the mean value v* and the variance vector V^e of the matrix of class means [v1 … vK] ∈ R^(2n×K), restricted to the coordinates retained by Pi, i = 1,2,…,1.5n; let {V^e(jp)} be the set of the n maximum items of V^e, jp < jp+1, p = 1,2,…,n−1, which gives

Pe(ŷ) = [ŷ(j1), ŷ(j2), …, ŷ(jn)]^T, where ŷ = Pi(y);
(5) for all the mk training samples of class k, the dictionary Dk of class k is expressed as:

Dk = Pe(Pi(Yk)), k = 1,2,…,K,

and the whole dictionary D ∈ R^(n×m), with m = m1 + m2 + … + mK, called the extended dictionary, is composed of the {Dk}:

D = [D1 … Dk … DK];
(6) the robust classification formula based on the optimal sparse representation is:

â = argmin_a ||W(z − Da)||2^2 + λ||a||1,

where a denotes the sparse coefficient, â denotes the optimized sparse coefficient, λ denotes the regularization coefficient, and W denotes a diagonal weighting matrix; the optimal weighting matrix Ŵ is obtained by iteration;
(7) with the obtained â and Ŵ, compute δk:

δk = Ŵ(z − Dk âk), k = 1,2,…,K,

where âk denotes the entries of â associated with class k and δk represents the weighted error between the test picture and each class; then

gk = ||δk||2,

where gk represents the weighted distance between the test picture and each class;
(8) the final test sample z is classified by the following formula:

identity(z) = argmin_k gk.
2. The ground-based cloud robust classification method according to claim 1, characterized in that, in step (6), the optimal weighting matrix Ŵ is estimated iteratively, each iteration updating it from the previous estimate â^(l−1) of the sparse coefficient, specifically as follows:

the l-th iteration obtains the deviation

e^(l) = z − D â^(l−1);

the l-th iteration then obtains the weight matrix W^(l), a diagonal matrix whose i-th diagonal entry is a decreasing function of the squared deviation (e_i^(l))^2, where β denotes the decreasing-rate coefficient and φ denotes the coefficient controlling the position of the demarcation point;

â^(l) is subsequently updated by solving the formula of step (6) with W = W^(l), the iteration stopping when the change of the weight matrix between successive iterations, measured in the F norm ||·||F, is sufficiently small.
CN202110565204.XA (filed 2021-05-24, priority 2021-05-24) Ground-based cloud robust classification method, Active, granted as CN113298143B (en)

Priority Applications (1)

CN202110565204.XA: Ground-based cloud robust classification method, granted as CN113298143B (en)

Applications Claiming Priority (1)

CN202110565204.XA: Ground-based cloud robust classification method, granted as CN113298143B (en)

Publications (2)

CN113298143A, published 2021-08-24
CN113298143B (en), published 2023-11-10

Family

ID=77324260

Family Applications (1)

CN202110565204.XA (priority/filing date 2021-05-24): Ground-based cloud robust classification method, Active, granted as CN113298143B (en)

Country Status (1)

Country Link
CN (1) CN113298143B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819748A (en) * 2012-07-19 2012-12-12 河南工业大学 Classification and identification method and classification and identification device of sparse representations of destructive insects
US20150154229A1 (en) * 2013-11-29 2015-06-04 Canon Kabushiki Kaisha Scalable attribute-driven image retrieval and re-ranking
WO2016091017A1 (en) * 2014-12-09 2016-06-16 山东大学 Extraction method for spectral feature cross-correlation vector in hyperspectral image classification
CN107066964A (en) * 2017-04-11 2017-08-18 宋佳颖 Rapid collaborative representation face classification method
CN112381070A (en) * 2021-01-08 2021-02-19 浙江科技学院 Fast robust face recognition method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
丁文秀; 孙锐; 闫晓星: "Robust pedestrian classification based on hierarchical deep learning", Opto-Electronic Engineering, vol. 42, no. 9
侯北平; 朱文; 马连伟; 介婧: "Research on real-time classification of moving targets based on shape features", Chinese Journal of Scientific Instrument, no. 008
翟林; 潘新; 刘霞; 罗小玲: "Research on palm image recognition based on sparse representation", Computer Simulation, no. 12

Also Published As

CN113298143B (en), published 2023-11-10

Similar Documents

Publication Publication Date Title
CN111583263B (en) Point cloud segmentation method based on joint dynamic graph convolution
CN108681752B (en) Image scene labeling method based on deep learning
CN109359608B (en) Face recognition method based on deep learning model
CN100492399C (en) Method for making human face posture estimation utilizing dimension reduction method
CN110569901A (en) Channel selection-based countermeasure elimination weak supervision target detection method
CN109740679B (en) Target identification method based on convolutional neural network and naive Bayes
CN107169117B (en) Hand-drawn human motion retrieval method based on automatic encoder and DTW
CN109492750B (en) Zero sample image classification method based on convolutional neural network and factor space
CN111079847B (en) Remote sensing image automatic labeling method based on deep learning
CN108595558B (en) Image annotation method based on data equalization strategy and multi-feature fusion
CN114842267A (en) Image classification method and system based on label noise domain self-adaption
CN112084895B (en) Pedestrian re-identification method based on deep learning
CN110555461A (en) scene classification method and system based on multi-structure convolutional neural network feature fusion
CN113536925A (en) Crowd counting method based on attention guide mechanism
CN112967210B (en) Unmanned aerial vehicle image denoising method based on full convolution twin network
CN114267060A (en) Face age identification method and system based on uncertain suppression network model
CN114202792A (en) Face dynamic expression recognition method based on end-to-end convolutional neural network
CN110288002B (en) Image classification method based on sparse orthogonal neural network
CN116883746A (en) Graph node classification method based on partition pooling hypergraph neural network
CN113298143A (en) Ground-based cloud robust classification method
CN113723482B (en) Hyperspectral target detection method based on multi-example twin network
CN115393631A (en) Hyperspectral image classification method based on Bayesian layer graph convolution neural network
CN114266911A (en) Embedded interpretable image clustering method based on differentiable k-means
CN111914718A (en) Feature weighting PCA face recognition method based on average influence value data conversion
CN116310463B (en) Remote sensing target classification method for unsupervised learning

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant