CN113298143B - Foundation cloud robust classification method - Google Patents
- Publication number
- CN113298143B · CN202110565204.XA · CN202110565204A
- Authority
- CN
- China
- Prior art keywords
- vector
- feature
- coefficient
- convolutional neural
- dimension
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/253 — Pattern recognition; fusion techniques of extracted features
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
Abstract
The application relates to a foundation cloud (ground-based cloud) robust classification method comprising two parts: the first part performs feature extraction, converting an image into a feature vector y; the second part classifies the input feature vector y. The method fuses convolutional neural network features by weighted sparse representation: features extracted by two convolutional neural networks serve as the dictionary of the weighted sparse representation, which improves operational efficiency. Weighted sparse representation classification improves the robustness of the system when occlusion is present, and fusing the two convolutional neural networks achieves better performance than a single convolutional neural network.
Description
Technical Field
The application belongs to the technical field of classification of foundation cloud pictures, and particularly relates to a foundation cloud robust classification method.
Background
Clouds are an important weather phenomenon, and reliable cloud observation technology is of great significance for weather research, weather analysis, weather forecasting and related work. Foundation cloud observation is an important mode of cloud observation: it reflects the microstructure information of clouds, compensates for the shortcomings of satellite observation, and, when fully exploited, provides more comprehensive data for related cloud observation applications. Within foundation cloud observation, foundation cloud image classification technology is the key to its realization; applying this technology not only frees observers from heavy observation work but also improves the accuracy and timeliness of cloud observation, so foundation cloud image classification is of great significance.
Classification methods for foundation cloud images have been widely studied in recent decades. Traditional cloud classification relies on expert experience: it is time-consuming, depends on the experience of the operator, and its results often carry uncertainty and bias; in addition, the cost of human visual observation has kept rising.
As a new kind of natural texture image, the foundation cloud image has also attracted great attention in the field of computer vision in recent years, and deep learning is increasingly applied to its analysis and recognition. Applying convolutional neural networks to cloud recognition of foundation cloud images avoids the complex preprocessing of cloud images in the early stage of image processing. The local receptive field of a convolutional neural network means that each neuron need not perceive the whole image but only a local region, and the perceived information is integrated in the deep layers of the network to obtain global information about the image. The weight-sharing strategy, which is more in line with the characteristics of biological neural networks, greatly reduces the number of weight parameters and thus the computational complexity of the whole processing pipeline. Traditional convolutional neural networks achieve a high recognition rate in cloud classification, but their robustness is poor when the clouds are partially occluded.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the application provides a foundation cloud robust classification method based on convolutional neural network feature fusion with weighted sparse representation.
The technical scheme adopted by the application is as follows:
The foundation cloud robust classification method comprises two parts: the first part performs feature extraction, converting an image into a feature vector y; the second part classifies the input feature vector y. The method specifically comprises the following steps:
(1) The training samples are passed through the two convolutional neural networks, respectively, to obtain features y_1 ∈ R^(n1×1) and y_2 ∈ R^(n2×1), where n1 denotes the dimension of the feature obtained through the first convolutional neural network, n2 denotes the dimension of the feature obtained through the second convolutional neural network, and R denotes real space. The two feature vectors are stacked to obtain a total feature vector y:

y = [y_1; y_2] ∈ R^((n1+n2)×1)
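By way of illustration only (this sketch is not part of the claimed method), the stacking in step (1) amounts to concatenating the two network outputs; the stand-in features and the dimensions used below (n1 = n2 = 2048, hence 2n = 4096) are assumptions chosen for the example:

```python
import numpy as np

def fuse_features(y1: np.ndarray, y2: np.ndarray) -> np.ndarray:
    """Stack two CNN feature vectors into the total feature vector y = [y1; y2]."""
    return np.concatenate([y1.ravel(), y2.ravel()])  # shape (n1 + n2,)

# Hypothetical stand-ins for the two networks' outputs for one image.
y1 = np.random.randn(2048)  # feature from the first convolutional neural network
y2 = np.random.randn(2048)  # feature from the second convolutional neural network
y = fuse_features(y1, y2)
assert y.shape == (4096,)   # 2n-dimensional fused feature
```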
(2) Let n1 + n2 = 2n. To convert the 2n-dimensional feature vector into an n-dimensional one, a projection system consisting of two projections P_i and P_e is proposed:

ŷ = P_i(y) ∈ R^(1.5n×1),  z = P_e(ŷ) ∈ R^(n×1)

where ŷ denotes the vector obtained after y is projected through P_i, and z denotes the vector obtained after ŷ is projected through P_e;
The two projections P_i and P_e are determined from the training samples Y = [Y_1 … Y_k … Y_K], where Y_k ∈ R^(2n×m_k) indicates that the kth training class contains m_k pictures, k = 1, 2, …, K;
(3) Calculate the mean vector v_k ∈ R^(2n×1) and the variance vector V_k ∈ R^(2n×1) of Y_k:

v_k(i) = (1/m_k) Σ_{j=1}^{m_k} Y_k(i, j)

V_k(i) = (1/m_k) Σ_{j=1}^{m_k} (Y_k(i, j) − v_k(i))²

i = 1, 2, …, 2n;

Y_k(:, j) denotes the jth column of the matrix Y_k.

Define V = Σ_{k=1}^{K} V_k. Let {V(j_p)} be the set of the 1.5n smallest terms of V, with j_p < j_{p+1}, p = 1, 2, …, 1.5n−1; then P_i is obtained as the projection that retains the dimensions j_1, …, j_{1.5n}:

P_i(y) = [y(j_1), y(j_2), …, y(j_{1.5n})]^T

Thus

Ŷ_k = P_i(Y_k)

k = 1, 2, …, K;

Ŷ_k denotes the matrix Y_k after the projection P_i;
(4) Form the matrix of projected class means V̂ = [P_i(v_1) … P_i(v_K)] ∈ R^(1.5n×K) and calculate its mean vector v* and variance vector V*:

v*(i) = (1/K) Σ_{k=1}^{K} V̂(i, k)

V*(i) = (1/K) Σ_{k=1}^{K} (V̂(i, k) − v*(i))²

i = 1, 2, …, 1.5n. Let {V*(j_p)} be the set of the n largest terms of V*, with j_p < j_{p+1}, p = 1, 2, …, n−1; then P_e is obtained as the projection that retains the dimensions j_1, …, j_n:

P_e(ŷ) = [ŷ(j_1), ŷ(j_2), …, ŷ(j_n)]^T
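The selection rules of steps (3)–(4) can be read as index selection: P_i keeps the 1.5n dimensions whose total intra-class variance is smallest, and P_e keeps the n dimensions along which the projected class means vary most. A minimal NumPy sketch under that reading (an interpretation for illustration, not a definitive implementation):

```python
import numpy as np

def build_projections(Ys, n):
    """Ys[k] is the (2n, m_k) feature matrix of the kth training class.

    Returns index arrays idx_i (realising P_i) and idx_e (realising P_e);
    applying a projection is row selection: y_hat = y[idx_i], z = y_hat[idx_e].
    """
    # Step (3): total intra-class variance of each of the 2n dimensions.
    V = sum(Yk.var(axis=1) for Yk in Ys)                 # shape (2n,)
    idx_i = np.sort(np.argsort(V)[: int(1.5 * n)])       # 1.5n smallest terms

    # Step (4): variance across classes of the projected class means.
    V_hat = np.stack([Yk.mean(axis=1)[idx_i] for Yk in Ys], axis=1)  # (1.5n, K)
    V_star = V_hat.var(axis=1)                           # shape (1.5n,)
    idx_e = np.sort(np.argsort(V_star)[-n:])             # n largest terms

    return idx_i, idx_e
```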
(5) For all m_k pictures of the kth class of training samples, the class dictionary D_k is expressed as:

D_k = P_e(P_i(Y_k))

k = 1, 2, …, K;

while the whole dictionary D ∈ R^(n×m), with m = Σ_{k=1}^{K} m_k, called the extended dictionary, is formed from {D_k}:

D = [D_1 … D_k … D_K];
(6) The robust classification formula based on the optimal weighted sparse representation is as follows:

α̂ = argmin_α ||Ŵ(z − Dα)||_2² + λ||α||_1

where α denotes the sparse coefficient, α̂ denotes the optimized sparse coefficient, and λ denotes the regularization coefficient; the optimal weighting matrix Ŵ is obtained through iteration;
(7) Using the obtained Ŵ and α̂, calculate δ_k:

δ_k = Ŵ(z − D_k α̂_k)

where α̂_k denotes the coefficients of α̂ associated with the kth class and δ_k represents the weighted error between the test picture and each class; then

g_k = ||δ_k||_2

where g_k represents the weighted distance between the test picture and each class;

(8) Finally, the classification of the test sample z is given by the following formula:

identity(z) = argmin_k g_k
Preferably, in step (6), Ŵ is updated on the basis of the previously estimated α̂^(l−1), as shown in detail below:

the lth iteration obtains the deviation

e^(l) = z − Dα̂^(l−1);

the lth iteration obtains the weight matrix W^(l), a diagonal matrix whose ith diagonal element is

W^(l)(i, i) = exp(−β(e^(l)(i))² + βφ) / (1 + exp(−β(e^(l)(i))² + βφ))

where β denotes the decreasing-rate coefficient and φ denotes the coefficient controlling the position of the demarcation point; subsequently α̂^(l) is updated as:

α̂^(l) = argmin_α ||W^(l)(z − Dα)||_F² + λ||α||_1

where ||·||_F denotes the F-norm.
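A hedged sketch of steps (6)–(8) together with the iterative reweighting just described. The logistic weight follows the roles of β and φ given in the text; the inner ℓ1 solver (plain ISTA) and all numerical parameter values are assumptions made for illustration, not the patent's prescribed choices:

```python
import numpy as np

def ista_weighted_lasso(D, z, w, lam, n_iter=200):
    """Approximately solve min_a ||diag(w)(z - D a)||_2^2 + lam*||a||_1 by ISTA."""
    Dw, zw = w[:, None] * D, w * z
    L = np.linalg.norm(Dw, 2) ** 2 + 1e-12       # squared spectral norm (step-size bound)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - Dw.T @ (Dw @ a - zw) / L         # gradient step on the smooth term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / (2 * L), 0.0)  # soft threshold
    return a

def wsrc_classify(D, class_of_atom, z, lam=0.01, beta=8.0, phi=0.5, n_outer=5):
    """Weighted sparse-representation classification of a projected test feature z.

    D: (n, m) extended dictionary; class_of_atom: (m,) class label of each column.
    lam, beta, phi and the iteration counts are illustrative assumptions.
    """
    w = np.ones(D.shape[0])                      # initial weight matrix W = I
    a = ista_weighted_lasso(D, z, w, lam)
    for _ in range(n_outer):
        e = z - D @ a                            # deviation e^(l)
        t = np.exp(-beta * e**2 + beta * phi)
        w = t / (1.0 + t)                        # logistic weights W^(l)
        a = ista_weighted_lasso(D, z, w, lam)    # updated sparse coefficient
    classes = np.unique(class_of_atom)
    g = [np.linalg.norm(w * (z - D[:, class_of_atom == k] @ a[class_of_atom == k]))
         for k in classes]                       # weighted distances g_k
    return classes[int(np.argmin(g))]            # identity(z) = argmin_k g_k
```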
The beneficial effects of the application are as follows:
the application relates to a convolutional neural network feature fusion cloud classification method based on weighted sparse representation, which is characterized in that features are extracted by two convolutional neural networks (acceptance-v 3 and ResNet-50) and used as a dictionary of weighted sparse representation so as to improve operation efficiency; through weighted sparse representation classification, the robustness of the system can be improved under the condition that shielding exists, and simultaneously, better performance than a single convolutional neural network can be obtained by fusing two convolutional neural networks.
Drawings
FIG. 1 is a flow chart of the present application;
FIG. 2 shows test images without added noise (0%);
FIG. 3 shows test images after random occlusion noise (5%–25%) is added.
Detailed Description
The technical scheme of the present application is further specifically described by the following examples, which are given by way of illustration and not limitation. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to FIG. 1, the foundation cloud robust classification method includes two parts: the first part performs feature extraction, converting an image into a feature vector y; the second part classifies the input feature vector y. The method specifically comprises the following steps:
(1) The training samples are passed through the two convolutional neural networks (Inception-v3 and ResNet-50), respectively, to obtain features y_1 ∈ R^(n1×1) and y_2 ∈ R^(n2×1), where n1 denotes the dimension of the feature obtained through the first convolutional neural network, n2 denotes the dimension of the feature obtained through the second convolutional neural network, and R denotes real space. The two feature vectors are stacked to obtain a total feature vector y:

y = [y_1; y_2] ∈ R^((n1+n2)×1)
(2) Let n1 + n2 = 2n. To convert the 2n-dimensional feature vector into an n-dimensional one, a projection system consisting of two projections P_i and P_e is proposed:

ŷ = P_i(y) ∈ R^(1.5n×1),  z = P_e(ŷ) ∈ R^(n×1)

where ŷ denotes the vector obtained after y is projected through P_i, and z denotes the vector obtained after ŷ is projected through P_e;
The two projections P_i and P_e are determined from the training samples Y = [Y_1 … Y_k … Y_K], where Y_k ∈ R^(2n×m_k) indicates that the kth training class contains m_k pictures, k = 1, 2, …, K;
(3) Calculate the mean vector v_k ∈ R^(2n×1) and the variance vector V_k ∈ R^(2n×1) of Y_k:

v_k(i) = (1/m_k) Σ_{j=1}^{m_k} Y_k(i, j)

V_k(i) = (1/m_k) Σ_{j=1}^{m_k} (Y_k(i, j) − v_k(i))²

i = 1, 2, …, 2n;

Y_k(:, j) denotes the jth column of the matrix Y_k.

Define V = Σ_{k=1}^{K} V_k. Let {V(j_p)} be the set of the 1.5n smallest terms of V, with j_p < j_{p+1}, p = 1, 2, …, 1.5n−1; then P_i is obtained as the projection that retains the dimensions j_1, …, j_{1.5n}:

P_i(y) = [y(j_1), y(j_2), …, y(j_{1.5n})]^T

Thus

Ŷ_k = P_i(Y_k)

k = 1, 2, …, K;

Ŷ_k denotes the matrix Y_k after the projection P_i;
(4) Form the matrix of projected class means V̂ = [P_i(v_1) … P_i(v_K)] ∈ R^(1.5n×K) and calculate its mean vector v* and variance vector V*:

v*(i) = (1/K) Σ_{k=1}^{K} V̂(i, k)

V*(i) = (1/K) Σ_{k=1}^{K} (V̂(i, k) − v*(i))²

i = 1, 2, …, 1.5n. Let {V*(j_p)} be the set of the n largest terms of V*, with j_p < j_{p+1}, p = 1, 2, …, n−1; then P_e is obtained as the projection that retains the dimensions j_1, …, j_n:

P_e(ŷ) = [ŷ(j_1), ŷ(j_2), …, ŷ(j_n)]^T
(5) For all m_k pictures of the kth class of training samples, the class dictionary D_k is expressed as:

D_k = P_e(P_i(Y_k))

k = 1, 2, …, K;

while the whole dictionary D ∈ R^(n×m), with m = Σ_{k=1}^{K} m_k, called the extended dictionary, is formed from {D_k}:

D = [D_1 … D_k … D_K];
(6) The robust classification formula based on the optimal weighted sparse representation is as follows:

α̂ = argmin_α ||Ŵ(z − Dα)||_2² + λ||α||_1

where α denotes the sparse coefficient, α̂ denotes the optimized sparse coefficient, and λ denotes the regularization coefficient.

The optimal weighting matrix Ŵ is obtained through iteration, updated on the basis of the previously estimated α̂^(l−1), as shown in detail below:

the lth iteration obtains the deviation

e^(l) = z − Dα̂^(l−1);

the lth iteration obtains the weight matrix W^(l), a diagonal matrix whose ith diagonal element is

W^(l)(i, i) = exp(−β(e^(l)(i))² + βφ) / (1 + exp(−β(e^(l)(i))² + βφ))

where β denotes the decreasing-rate coefficient and φ denotes the coefficient controlling the position of the demarcation point; subsequently α̂^(l) is updated as:

α̂^(l) = argmin_α ||W^(l)(z − Dα)||_F² + λ||α||_1

where ||·||_F denotes the F-norm;
(7) Using the obtained Ŵ and α̂, calculate δ_k:

δ_k = Ŵ(z − D_k α̂_k)

where α̂_k denotes the coefficients of α̂ associated with the kth class and δ_k represents the weighted error between the test picture and each class; then

g_k = ||δ_k||_2

where g_k represents the weighted distance between the test picture and each class;

(8) Finally, the classification of the test sample z is given by the following formula:

identity(z) = argmin_k g_k
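Tying the earlier sketches together, a usage example for the embodiment; the training matrices Ys, the fused test feature y_test and the target dimension n are assumed to be precomputed with the two networks as in the previous snippets:

```python
# Build the projections and the extended dictionary D = [D_1 ... D_K] (step (5)).
idx_i, idx_e = build_projections(Ys, n)
D = np.concatenate([Yk[idx_i][idx_e] for Yk in Ys], axis=1)
class_of_atom = np.concatenate([np.full(Yk.shape[1], k) for k, Yk in enumerate(Ys)])

# Project the fused test feature and classify it (steps (6)-(8)).
z = y_test[idx_i][idx_e]
predicted_class = wsrc_classify(D, class_of_atom, z)
```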
TABLE 1. Recognition rate (%) under different proportions of random occlusion noise

| Method | 0% | 5% | 10% | 15% | 20% | 25% |
| --- | --- | --- | --- | --- | --- | --- |
| Inception-v3 | 96.97 | 84.03 | 83.18 | 82.24 | 79.43 | 77.52 |
| ResNet-50 | 97.09 | 90.47 | 89.68 | 83.99 | 78.77 | 75.15 |
| Method of the application | 99.81 | 99.37 | 98.87 | 98.06 | 95.53 | 90.28 |
In order to verify the effectiveness and robustness of the proposed method, experiments were carried out on the MGCD data set, which contains 7 classes of cloud pictures; several blocks of random occlusion noise were added to the test pictures, each picture being of size 1024×1024. The recognition rates of the neural networks are compared with that of the proposed method under added random noise (0%–25%). A test image without added noise (0%) is shown in FIG. 2, and test images with added random noise (5%–25%) are shown in FIG. 3. The experimental results are shown in Table 1: without added noise, the recognition rate of the proposed method differs little from that of the neural networks, but as the noise increases, the recognition rate of the neural networks drops rapidly while that of the proposed method decreases slowly, so the robustness of the proposed method is higher than that of the deep neural networks.
Claims (2)
1. A foundation cloud robust classification method, characterized by comprising two parts: the first part performs feature extraction, converting an image into a feature vector y; the second part classifies the input feature vector y; the method specifically comprises the following steps:
(1) The training samples are passed through the two convolutional neural networks, respectively, to obtain features y_1 ∈ R^(n1×1) and y_2 ∈ R^(n2×1), where n1 denotes the dimension of the feature obtained through the first convolutional neural network, n2 denotes the dimension of the feature obtained through the second convolutional neural network, and R denotes real space; the two feature vectors are stacked to obtain a total feature vector y:

y = [y_1; y_2] ∈ R^((n1+n2)×1)
(2) Let n1 + n2 = 2n; to convert the 2n-dimensional feature vector into an n-dimensional one, a projection system consisting of two projections P_i and P_e is proposed:

ŷ = P_i(y) ∈ R^(1.5n×1),  z = P_e(ŷ) ∈ R^(n×1)

where ŷ denotes the vector obtained after y is projected through P_i, and z denotes the vector obtained after ŷ is projected through P_e;
the two projections P_i and P_e are determined from the training samples Y = [Y_1 … Y_k … Y_K], where Y_k ∈ R^(2n×m_k) indicates that the kth training class contains m_k pictures, k = 1, 2, …, K;
(3) Calculate the mean vector v_k ∈ R^(2n×1) and the variance vector V_k ∈ R^(2n×1) of Y_k:

v_k(i) = (1/m_k) Σ_{j=1}^{m_k} Y_k(i, j)

V_k(i) = (1/m_k) Σ_{j=1}^{m_k} (Y_k(i, j) − v_k(i))²

i = 1, 2, …, 2n;

Y_k(:, j) denotes the jth column of the matrix Y_k;

define V = Σ_{k=1}^{K} V_k; let {V(j_p)} be the set of the 1.5n smallest terms of V, with j_p < j_{p+1}, p = 1, 2, …, 1.5n−1; then P_i is obtained as the projection that retains the dimensions j_1, …, j_{1.5n}:

P_i(y) = [y(j_1), y(j_2), …, y(j_{1.5n})]^T

thus

Ŷ_k = P_i(Y_k)

k = 1, 2, …, K;

Ŷ_k denotes the matrix Y_k after the projection P_i;
(4) Form the matrix of projected class means V̂ = [P_i(v_1) … P_i(v_K)] ∈ R^(1.5n×K) and calculate its mean vector v* and variance vector V*:

v*(i) = (1/K) Σ_{k=1}^{K} V̂(i, k)

V*(i) = (1/K) Σ_{k=1}^{K} (V̂(i, k) − v*(i))²

i = 1, 2, …, 1.5n; let {V*(j_p)} be the set of the n largest terms of V*, with j_p < j_{p+1}, p = 1, 2, …, n−1; then P_e is obtained as the projection that retains the dimensions j_1, …, j_n:

P_e(ŷ) = [ŷ(j_1), ŷ(j_2), …, ŷ(j_n)]^T
(5) For all m_k pictures of the kth class of training samples, the class dictionary D_k is expressed as:

D_k = P_e(P_i(Y_k))

k = 1, 2, …, K;

while the whole dictionary D ∈ R^(n×m), with m = Σ_{k=1}^{K} m_k, called the extended dictionary, is formed from {D_k}:

D = [D_1 … D_k … D_K];
(6) The robust classification formula based on the optimal weighted sparse representation is as follows:

α̂ = argmin_α ||Ŵ(z − Dα)||_2² + λ||α||_1

where α denotes the sparse coefficient, α̂ denotes the optimized sparse coefficient, and λ denotes the regularization coefficient; the optimal weighting matrix Ŵ is obtained through iteration;
(7) Using the obtained Ŵ and α̂, calculate δ_k:

δ_k = Ŵ(z − D_k α̂_k)

where α̂_k denotes the coefficients of α̂ associated with the kth class and δ_k represents the weighted error between the test picture and each class; then

g_k = ||δ_k||_2

where g_k represents the weighted distance between the test picture and each class;

(8) Finally, the classification of the test sample z is given by the following formula:

identity(z) = argmin_k g_k
2. The foundation cloud robust classification method according to claim 1, characterized in that in step (6), Ŵ is updated on the basis of the previously estimated α̂^(l−1), as shown in detail below:

the lth iteration obtains the deviation

e^(l) = z − Dα̂^(l−1);

the lth iteration obtains the weight matrix W^(l), a diagonal matrix whose ith diagonal element is

W^(l)(i, i) = exp(−β(e^(l)(i))² + βφ) / (1 + exp(−β(e^(l)(i))² + βφ))

where β denotes the decreasing-rate coefficient and φ denotes the coefficient controlling the position of the demarcation point; subsequently α̂^(l) is updated as:

α̂^(l) = argmin_α ||W^(l)(z − Dα)||_F² + λ||α||_1

where ||·||_F denotes the F-norm.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110565204.XA | 2021-05-24 | 2021-05-24 | Foundation cloud robust classification method |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110565204.XA | 2021-05-24 | 2021-05-24 | Foundation cloud robust classification method |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN113298143A | 2021-08-24 |
| CN113298143B | 2023-11-10 |
Family

ID=77324260

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202110565204.XA | Foundation cloud robust classification method | 2021-05-24 | 2021-05-24 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN113298143B (en) |
Citations (4)

| Publication number | Priority date | Publication date | Title |
| --- | --- | --- | --- |
| CN102819748A | 2012-07-19 | 2012-12-12 | Classification and identification method and device for sparse representations of destructive insects |
| WO2016091017A1 | 2014-12-09 | 2016-06-16 | Extraction method for spectral feature cross-correlation vector in hyperspectral image classification |
| CN107066964A | 2017-04-11 | 2017-08-18 | Rapid collaborative representation face classification method |
| CN112381070A | 2021-01-08 | 2021-02-19 | Fast robust face recognition method |

Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US10120879B2 | 2013-11-29 | 2018-11-06 | Canon Kabushiki Kaisha | Scalable attribute-driven image retrieval and re-ranking |
2021-05-24 — Application CN202110565204.XA filed; granted as patent CN113298143B (status: Active).
Non-Patent Citations (3)

| Title |
| --- |
| Robust pedestrian classification based on hierarchical deep learning; Ding Wenxiu, Sun Rui, Yan Xiaoxing; Opto-Electronic Engineering, Vol. 42, No. 9 |
| Research on real-time classification of moving targets based on shape features; Hou Beiping, Zhu Wen, Ma Lianwei, Jie Jing; Chinese Journal of Scientific Instrument, No. 8 |
| Research on palm image recognition based on sparse representation; Zhai Lin, Pan Xin, Liu Xia, Luo Xiaoling; Computer Simulation, No. 12 |
Also Published As

| Publication number | Publication date |
| --- | --- |
| CN113298143A | 2021-08-24 |
Legal Events

| Date | Code | Title |
| --- | --- | --- |
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |