CN113298143A - Foundation cloud robust classification method - Google Patents
Foundation cloud robust classification method Download PDFInfo
- Publication number
- CN113298143A · application CN202110565204.XA
- Authority
- CN
- China
- Prior art keywords
- convolutional neural
- vector
- cloud
- coefficient
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/253: Pattern recognition; fusion techniques of extracted features
- G06N3/045: Neural networks; combinations of networks
- G06N3/08: Neural networks; learning methods
Abstract
The invention relates to a ground-based cloud robust classification method comprising two parts: the first part performs feature extraction and converts an image into a feature vector y; the second part classifies the input feature vector y. The method fuses convolutional neural network features under weighted sparse representation: the features extracted by two convolutional neural networks serve as the dictionary for weighted sparse representation, which improves operating efficiency; weighted sparse representation classification improves the robustness of the system under occlusion, and fusing the two convolutional neural networks yields better performance than a single convolutional neural network.
Description
Technical Field
The invention belongs to the technical field of ground-based cloud image classification, and particularly relates to a ground-based cloud robust classification method.
Background
Clouds are an important weather phenomenon, and reliable cloud observation matters for climate research, weather analysis, and weather forecasting. Ground-based cloud observation is an important observation mode: it captures the microstructure of clouds and compensates for the shortcomings of satellite observation, so fully exploiting ground-based observations provides more comprehensive data for cloud-observation applications. Within ground-based observation, cloud-image classification is the key enabling technology: it frees observers from heavy observation work and improves the accuracy and timeliness of observation, so ground-based cloud-image classification is of great significance.
Over recent decades, methods for classifying ground-based cloud images have been studied extensively. Traditional cloud classification relies on expert experience: it is unreliable and time-consuming, and because the results depend to some extent on the operator, they carry uncertainty and bias; moreover, observation by the human eye has become increasingly costly.
The ground-based cloud image is a class of natural texture images that has attracted great attention in computer vision in recent years, and deep learning is increasingly applied to its analysis and recognition. Applying convolutional neural networks to ground-based cloud identification avoids the complex preprocessing of earlier image-processing pipelines: the local receptive field means each neuron senses only a local region rather than the whole image, and the deep layers of the network integrate the locally sensed information into global information about the image. The weight-sharing strategy, which mirrors characteristics of biological neural networks, greatly reduces the number of weight parameters and hence the computational complexity of processing. Traditional convolutional neural networks achieve high recognition rates in cloud classification, but their robustness is poor when the clouds are occluded.
Disclosure of Invention
To overcome these shortcomings of the prior art, the invention provides a ground-based cloud robust classification method based on weighted sparse representation and convolutional neural network feature fusion.
The technical scheme adopted by the invention is as follows:
a ground-based cloud robust classification method comprises two parts: the first part performs feature extraction and converts an image into a feature vector y; the second part classifies the input feature vector y; the method specifically comprises the following steps:
(1) the training samples are processed by the two convolutional neural networks to obtain features y1 ∈ R^(n1×1) and y2 ∈ R^(n2×1), where n1 and n2 are the dimensions of the features produced by the first and the second network and R denotes the real number space; the two feature vectors are stacked to obtain the total feature vector y = [y1; y2] ∈ R^((n1+n2)×1);
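A minimal sketch of step (1), assuming any two pretrained CNNs stand in as the feature extractors; the dimensions n1 and n2 below are illustrative, not taken from the patent:

```python
import numpy as np

# Hypothetical pooled feature vectors from two pretrained CNNs;
# n1 and n2 are illustrative dimensions, not values from the patent.
n1, n2 = 2048, 2048
y1 = np.random.rand(n1)   # features from the first network
y2 = np.random.rand(n2)   # features from the second network

# The total feature vector stacks the two outputs into one (n1+n2)-vector.
y = np.concatenate([y1, y2])
assert y.shape == (n1 + n2,)
```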
(2) let n1 + n2 = 2n; the 2n-dimensional feature vector is converted into an n-dimensional feature vector by a projection P composed of two projections Pi and Pe;
the two projections Pi and Pe are determined from the training samples Y = [Y1 … Yk … YK], where Yk ∈ R^(2n×mk) and mk is the number of pictures in the k-th class training sample, k = 1, 2, …, K;
(3) a quantity Vi is defined for each i = 1, 2, …, 2n from the columns Yk(j), where Yk(j) denotes the j-th column of the matrix Yk;
let {Vi(jp)} be the set of the 1.5n smallest entries of Vi, with jp < jp+1, p = 1, 2, …, 1.5n − 1; this determines Pi, for k = 1, 2, …, K;
(4) selecting again with jp < jp+1, p = 1, 2, …, n − 1, the projection Pe is then obtained;
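Steps (2)-(4) reduce the 2n-dimensional features by index selection. The toy sketch below assumes that Pi keeps the coordinates with the 1.5n smallest values of the statistic Vi and that Pe then keeps n of those; the defining formula for Vi is not reproduced in this text, so a random placeholder statistic is used:

```python
import numpy as np

def selection_projection(V, keep):
    # Keep the indices of the `keep` smallest entries of V, returned in
    # increasing order so that j_p < j_{p+1}.
    return np.sort(np.argsort(V)[:keep])

rng = np.random.default_rng(0)
n = 4                                 # target dimension (toy size, so 2n = 8)
V = rng.random(2 * n)                 # placeholder for the statistic V_i
idx_i = selection_projection(V, int(1.5 * n))   # P_i: 2n -> 1.5n dimensions
idx_e = selection_projection(V[idx_i], n)       # P_e: 1.5n -> n dimensions

x = rng.random(2 * n)                 # a 2n-dimensional feature vector
x_reduced = x[idx_i][idx_e]           # apply P_e(P_i(x))
assert x_reduced.shape == (n,)
```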
(5) for the mk training samples of each class, the dictionary Dk of class k is expressed as:
Dk = Pe(Pi(Yk)),
k = 1, 2, …, K;
the whole dictionary D ∈ R^(n×m), with m = m1 + … + mK, is called the extended dictionary and is composed of the {Dk}:
D = [D1 … Dk … DK];
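The extended dictionary of step (5) is simply the column-wise concatenation of the per-class dictionaries; a toy construction with illustrative sizes (random blocks stand in for the projected training features):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 16, 3                      # feature dimension and number of classes (toy)
m_k = [5, 4, 6]                   # pictures per class (illustrative)

# D_k: projected training features of class k, one column per picture.
D_blocks = [rng.random((n, m)) for m in m_k]
D = np.hstack(D_blocks)           # D = [D_1 ... D_K], D in R^(n x m)
assert D.shape == (n, sum(m_k))
```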
(6) the robust classification solves the weighted sparse representation problem â = argmin_a ||W(y − Da)||_2^2 + λ||a||_1, where a denotes the sparse coefficient vector, â the optimized sparse coefficient, λ the regularization coefficient, and W the weight matrix estimated iteratively in step (7);
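The weighted sparse coding of step (6) can be solved with a standard proximal-gradient (ISTA) loop; this is a generic surrogate for whatever solver the patent intends, with the weight matrix W held fixed for simplicity:

```python
import numpy as np

def weighted_sparse_code(y, D, W, lam=0.01, n_iter=500):
    # ISTA for  min_a ||W(y - D a)||_2^2 + lam * ||a||_1
    Dw, yw = W @ D, W @ y
    L = 2.0 * np.linalg.norm(Dw, 2) ** 2 + 1e-12  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Dw.T @ (Dw @ a - yw)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
n, m = 10, 20
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
a_true = np.zeros(m); a_true[[2, 7, 11]] = 1.0
y = D @ a_true                        # a signal with a 3-sparse representation
W = np.eye(n)                         # identity weights = ordinary sparse coding
a_hat = weighted_sparse_code(y, D, W)
```

The identity W makes this plain sparse coding; plugging in the down-weighting matrix of step (7) turns it into the weighted variant.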
(7) δk represents the weighted error between the test picture and class k; then
gk represents the weighted distance between the test picture and class k;
(8) the final test sample z is assigned to the class with the smallest weighted distance, identity(z) = argmin_k gk.
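Step (8) follows the usual sparse-representation classification rule: keep only the coefficients of class k, measure the weighted residual gk, and pick the class with the smallest one. A sketch under that assumption:

```python
import numpy as np

def classify(y, D, labels, a, W):
    # g_k = ||W (y - D delta_k(a))||_2, where delta_k(a) zeroes every
    # coefficient not belonging to class k; pick the class with minimal g_k.
    classes = np.unique(labels)
    g = [np.linalg.norm(W @ (y - D @ np.where(labels == k, a, 0.0)))
         for k in classes]
    return int(classes[np.argmin(g)])

rng = np.random.default_rng(0)
n = 8
D = rng.standard_normal((n, 6))
labels = np.array([0, 0, 0, 1, 1, 1])            # two classes, 3 atoms each
a = np.array([0.5, -0.2, 0.1, 0.0, 0.0, 0.0])    # energy on class 0 only
y = D @ a                                        # exactly representable by class 0
W = np.eye(n)
assert classify(y, D, labels, a, W) == 0
```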
The l-th iteration yields the weight matrix W(l), where β denotes a decrement-rate coefficient and φ a coefficient controlling the position of the demarcation point; ||·||F denotes the Frobenius norm.
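The exact formula for W(l) is not reproduced in this text, so the sketch below uses the logistic weighting common in robust sparse coding, which matches the described roles of β (how fast the weight falls) and φ (where it falls):

```python
import numpy as np

def weight_matrix(residual, beta=8.0, phi=0.5):
    # Logistic down-weighting of large residuals: an assumed form,
    # w_i = 1 / (1 + exp(beta * (e_i^2 - phi))), where beta sets how
    # sharply the weight drops and phi the demarcation point.
    w = 1.0 / (1.0 + np.exp(beta * (residual ** 2 - phi)))
    return np.diag(w)

W = weight_matrix(np.array([0.1, 2.0]))
# a small residual keeps a high weight, a large (occluded) one is suppressed
```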
The invention has the beneficial effects that:
the invention relates to a weighted sparse representation-based convolutional neural network feature fusion cloud classification method, which is characterized in that two convolutional neural networks (inclusion-v 3 and ResNet-50) are extracted to serve as a weighted sparse representation dictionary to improve the operation efficiency; by weighting sparse representation classification, the robustness of the system can be improved under the condition of occlusion, and the performance better than that of a single convolutional neural network can be obtained by fusing two convolutional neural networks.
Drawings
FIG. 1 is a flow chart of the invention;
FIG. 2 is a test image without noise (0%);
FIG. 3 is a test image after random noise (5%-25%) is added.
Detailed Description
The technical solutions of the present invention are further specifically described below by examples, which are for illustration of the present invention and are not intended to limit the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to FIG. 1, a ground-based cloud robust classification method includes two parts: the first part performs feature extraction and converts an image into a feature vector y; the second part classifies the input feature vector y; the method specifically comprises the following steps:
(1) the training samples are processed by the two convolutional neural networks (Inception-v3 and ResNet-50) to obtain features y1 ∈ R^(n1×1) and y2 ∈ R^(n2×1), where n1 and n2 are the dimensions of the features produced by the first and the second network and R denotes the real number space; the two feature vectors are stacked to obtain the total feature vector y = [y1; y2] ∈ R^((n1+n2)×1);
(2) let n1 + n2 = 2n; the 2n-dimensional feature vector is converted into an n-dimensional feature vector by a projection P composed of two projections Pi and Pe;
the two projections Pi and Pe are determined from the training samples Y = [Y1 … Yk … YK], where Yk ∈ R^(2n×mk) and mk is the number of pictures in the k-th class training sample, k = 1, 2, …, K;
(3) a quantity Vi is defined for each i = 1, 2, …, 2n from the columns Yk(j), where Yk(j) denotes the j-th column of the matrix Yk;
let {Vi(jp)} be the set of the 1.5n smallest entries of Vi, with jp < jp+1, p = 1, 2, …, 1.5n − 1; this determines Pi, for k = 1, 2, …, K;
(4) selecting again with jp < jp+1, p = 1, 2, …, n − 1, the projection Pe is then obtained;
(5) for the mk training samples of each class, the dictionary Dk of class k is expressed as:
Dk = Pe(Pi(Yk)),
k = 1, 2, …, K;
the whole dictionary D ∈ R^(n×m), with m = m1 + … + mK, is called the extended dictionary and is composed of the {Dk}:
D = [D1 … Dk … DK];
(6) the robust classification solves the weighted sparse representation problem â = argmin_a ||W(y − Da)||_2^2 + λ||a||_1, where a denotes the sparse coefficient vector, â the optimized sparse coefficient, and λ the regularization coefficient;
an optimal weight matrix is obtained by iteration, the previous estimate of W being updated at each step: the l-th iteration yields the weight matrix W(l), where β denotes a decrement-rate coefficient and φ a coefficient controlling the position of the demarcation point; ||·||F denotes the Frobenius norm;
(7) δk represents the error between the test picture and class k; then
gk represents the distance between the test picture and class k;
(8) the final test sample z is assigned to the class with the smallest distance, identity(z) = argmin_k gk.
TABLE 1
Noise ratio | 0% | 5% | 10% | 15% | 20% | 25%
---|---|---|---|---|---|---
Inception-v3 | 96.97 | 84.03 | 83.18 | 82.24 | 79.43 | 77.52
ResNet-50 | 97.09 | 90.47 | 89.68 | 83.99 | 78.77 | 75.15
The method of the invention | 99.81 | 99.37 | 98.87 | 98.06 | 95.53 | 90.28
To verify the effectiveness and robustness of the proposed method, experiments were carried out on the MGCD data set, which contains 7 classes of pictures of size 1024 × 1024; random occlusion noise was added to the test pictures. The recognition rates of the neural networks and of the proposed method were compared under random noise ratios of 0%-25%: the noise-free (0%) test image is shown in FIG. 2 and the test images with 5%-25% random noise in FIG. 3. The results are given in Table 1. Without noise, the recognition rates of the method and of the neural networks differ little; as noise increases, however, the networks' recognition rate drops rapidly while that of the proposed method declines slowly, indicating that the method is more robust than a single deep neural network.
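The occlusion noise of the experiments can be simulated by blanking a random block whose area matches the stated noise ratio; the square-block shape and zero fill value below are assumptions for illustration, since the text does not specify the noise pattern:

```python
import numpy as np

def add_block_occlusion(img, frac, rng):
    # Zero out a square block covering roughly `frac` of the image area,
    # a simple stand-in for the random occlusion noise of the experiments.
    h, w = img.shape
    side = int(np.sqrt(frac * h * w))
    top = int(rng.integers(0, h - side + 1))
    left = int(rng.integers(0, w - side + 1))
    out = img.copy()
    out[top:top + side, left:left + side] = 0.0
    return out

rng = np.random.default_rng(0)
img = np.ones((1024, 1024))           # image size used in the experiments
occluded = add_block_occlusion(img, 0.10, rng)
# roughly 10% of the pixels are now zeroed
```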
Claims (2)
1. A ground-based cloud robust classification method, characterized by comprising two parts: the first part performs feature extraction and converts an image into a feature vector y; the second part classifies the input feature vector y; the method specifically comprises the following steps:
(1) the training samples are processed by the two convolutional neural networks to obtain features y1 ∈ R^(n1×1) and y2 ∈ R^(n2×1), where n1 and n2 are the dimensions of the features produced by the first and the second network and R denotes the real number space; the two feature vectors are stacked to obtain the total feature vector y = [y1; y2] ∈ R^((n1+n2)×1);
(2) let n1 + n2 = 2n; the 2n-dimensional feature vector is converted into an n-dimensional feature vector by a projection P composed of two projections Pi and Pe;
the two projections Pi and Pe are determined from the training samples Y = [Y1 … Yk … YK], where Yk ∈ R^(2n×mk) and mk is the number of pictures in the k-th class training sample, k = 1, 2, …, K;
(3) a quantity Vi is defined for each i = 1, 2, …, 2n from the columns Yk(j), where Yk(j) denotes the j-th column of the matrix Yk;
let {Vi(jp)} be the set of the 1.5n smallest entries of Vi, with jp < jp+1, p = 1, 2, …, 1.5n − 1; this determines Pi, for k = 1, 2, …, K;
(4) selecting again with jp < jp+1, p = 1, 2, …, n − 1, the projection Pe is then obtained;
(5) for the mk training samples of each class, the dictionary Dk of class k is expressed as:
Dk = Pe(Pi(Yk)),
k = 1, 2, …, K;
the whole dictionary D ∈ R^(n×m), with m = m1 + … + mK, is called the extended dictionary and is composed of the {Dk}:
D = [D1 … Dk … DK];
(6) the robust classification solves the weighted sparse representation problem â = argmin_a ||W(y − Da)||_2^2 + λ||a||_1, where a denotes the sparse coefficient vector, â the optimized sparse coefficient, and λ the regularization coefficient;
(7) δk represents the weighted error between the test picture and class k; then
gk represents the weighted distance between the test picture and class k;
(8) the final test sample z is assigned to the class with the smallest weighted distance, identity(z) = argmin_k gk.
2. The ground-based cloud robust classification method according to claim 1, wherein in step (6) the weight matrix W is estimated iteratively, the previous estimate of W being updated at each step: the l-th iteration yields the weight matrix W(l), where β denotes a decrement-rate coefficient and φ a coefficient controlling the position of the demarcation point; ||·||F denotes the Frobenius norm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110565204.XA CN113298143B (en) | 2021-05-24 | 2021-05-24 | Foundation cloud robust classification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110565204.XA CN113298143B (en) | 2021-05-24 | 2021-05-24 | Foundation cloud robust classification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113298143A true CN113298143A (en) | 2021-08-24 |
CN113298143B CN113298143B (en) | 2023-11-10 |
Family
ID=77324260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110565204.XA Active CN113298143B (en) | 2021-05-24 | 2021-05-24 | Foundation cloud robust classification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298143B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102819748A (en) * | 2012-07-19 | 2012-12-12 | 河南工业大学 | Classification and identification method and classification and identification device of sparse representations of destructive insects |
US20150154229A1 (en) * | 2013-11-29 | 2015-06-04 | Canon Kabushiki Kaisha | Scalable attribute-driven image retrieval and re-ranking |
WO2016091017A1 (en) * | 2014-12-09 | 2016-06-16 | 山东大学 | Extraction method for spectral feature cross-correlation vector in hyperspectral image classification |
CN107066964A (en) * | 2017-04-11 | 2017-08-18 | 宋佳颖 | Rapid collaborative representation face classification method |
CN112381070A (en) * | 2021-01-08 | 2021-02-19 | 浙江科技学院 | Fast robust face recognition method |
- 2021-05-24: application CN202110565204.XA filed in China; granted as patent CN113298143B, status active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102819748A (en) * | 2012-07-19 | 2012-12-12 | 河南工业大学 | Classification and identification method and classification and identification device of sparse representations of destructive insects |
US20150154229A1 (en) * | 2013-11-29 | 2015-06-04 | Canon Kabushiki Kaisha | Scalable attribute-driven image retrieval and re-ranking |
WO2016091017A1 (en) * | 2014-12-09 | 2016-06-16 | 山东大学 | Extraction method for spectral feature cross-correlation vector in hyperspectral image classification |
CN107066964A (en) * | 2017-04-11 | 2017-08-18 | 宋佳颖 | Rapid collaborative representation face classification method |
CN112381070A (en) * | 2021-01-08 | 2021-02-19 | 浙江科技学院 | Fast robust face recognition method |
Non-Patent Citations (3)
Title |
---|
丁文秀, 孙锐, 闫晓星: "Robust pedestrian classification based on hierarchical deep learning", Opto-Electronic Engineering, vol. 42, no. 9 *
侯北平, 朱文, 马连伟, 介婧: "Real-time classification of moving targets based on shape features", Chinese Journal of Scientific Instrument, no. 8 *
翟林, 潘新, 刘霞, 罗小玲: "Palm image recognition based on sparse representation", Computer Simulation, no. 12 *
Also Published As
Publication number | Publication date |
---|---|
CN113298143B (en) | 2023-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111583263B (en) | Point cloud segmentation method based on joint dynamic graph convolution | |
CN108681752B (en) | Image scene labeling method based on deep learning | |
CN100492399C (en) | Method for making human face posture estimation utilizing dimension reduction method | |
CN110569901A (en) | Channel selection-based countermeasure elimination weak supervision target detection method | |
CN109740679B (en) | Target identification method based on convolutional neural network and naive Bayes | |
CN107169117B (en) | Hand-drawn human motion retrieval method based on automatic encoder and DTW | |
CN109492750B (en) | Zero sample image classification method based on convolutional neural network and factor space | |
CN111079847B (en) | Remote sensing image automatic labeling method based on deep learning | |
CN114842267A (en) | Image classification method and system based on label noise domain self-adaption | |
CN108595558B (en) | Image annotation method based on data equalization strategy and multi-feature fusion | |
CN112084895B (en) | Pedestrian re-identification method based on deep learning | |
CN110555461A (en) | scene classification method and system based on multi-structure convolutional neural network feature fusion | |
CN113536925A (en) | Crowd counting method based on attention guide mechanism | |
CN112967210B (en) | Unmanned aerial vehicle image denoising method based on full convolution twin network | |
CN114267060A (en) | Face age identification method and system based on uncertain suppression network model | |
CN114202792A (en) | Face dynamic expression recognition method based on end-to-end convolutional neural network | |
CN110288002B (en) | Image classification method based on sparse orthogonal neural network | |
CN116883746A (en) | Graph node classification method based on partition pooling hypergraph neural network | |
CN113298143A (en) | Foundation cloud robust classification method | |
CN116343016A (en) | Multi-angle sonar image target classification method based on lightweight convolution network | |
CN113723482B (en) | Hyperspectral target detection method based on multi-example twin network | |
CN115393631A (en) | Hyperspectral image classification method based on Bayesian layer graph convolution neural network | |
CN114266911A (en) | Embedded interpretable image clustering method based on differentiable k-means | |
CN111914718A (en) | Feature weighting PCA face recognition method based on average influence value data conversion | |
CN116310463B (en) | Remote sensing target classification method for unsupervised learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||