CN111310820A - Ground-based meteorological cloud image classification method based on cross-validated deep CNN feature integration - Google Patents

Ground-based meteorological cloud image classification method based on cross-validated deep CNN feature integration

Info

Publication number
CN111310820A
CN111310820A (application CN202010087138.5A)
Authority
CN
China
Prior art keywords
cloud
cnn
training
meteorological
foundation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010087138.5A
Other languages
Chinese (zh)
Inventor
王钰
章豪东
杨杏丽
李济洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi University
Original Assignee
Shanxi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi University
Priority to CN202010087138.5A
Publication of CN111310820A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/259Fusion by voting

Abstract

The invention belongs to the technical field of ground-based meteorological cloud image classification, and specifically relates to a classification method for ground-based meteorological cloud images based on cross-validated deep CNN feature integration. The method first extracts deep CNN features of the ground-based meteorological cloud image with a convolutional neural network model, then repeatedly resamples these CNN features via cross validation, and finally identifies the cloud type of the ground-based cloud image with a voting strategy over the repeated cross-validation resampling results. The method classifies ground-based cloud images automatically, realizing an adaptive, end-to-end cloud recognition algorithm that operates directly on the raw cloud image without any image preprocessing. The proposed algorithm relates to the fields of computer vision, machine learning, and image recognition. It overcomes both the non-robustness of cloud classification based on a single set of CNN features and the high computational overhead of integrating multiple deep convolutional neural networks, while maintaining high classification accuracy and stability under noise.

Description

Ground-based meteorological cloud image classification method based on cross-validated deep CNN feature integration
Technical Field
The invention belongs to the technical field of ground-based meteorological cloud image classification, and specifically relates to a classification method for ground-based meteorological cloud images based on cross-validated deep CNN feature integration.
Background
The deep convolutional neural network is one of the representative algorithms of deep learning and is a probabilistic model grounded in statistical learning. Although specific implementations vary in their convolutional, pooling, and fully connected layers, deep convolutional networks always belong, broadly speaking, to the family of neural networks. At its core, the deep convolutional neural network performs feature learning: through convolution and pooling operations it adaptively learns a large amount of feature information, overcoming the traditional machine-learning requirement that features be designed by hand.
Inspired by Hubel and Wiesel's neurophysiological finding that the visual cortex of cats and monkeys contains small regions of neurons that respond independently to parts of the visual field, CNN models were first applied by LeCun to handwritten digit recognition. CNNs then entered an era of rapid development, with image recognition performance on the ImageNet competition rising substantially in 2014. Convolutional neural networks have achieved major breakthroughs in many research fields, such as image processing, natural language processing, face recognition, and drug discovery, and have become one of the representative technologies in these application areas; accordingly, research on convolutional neural networks has become a hot topic in these fields.
In particular, ground-based cloud images, as a class of natural texture images, have attracted great attention in recent years, and deep learning techniques are increasingly applied to their analysis and recognition. Applying convolutional neural networks to cloud recognition in ground-based meteorological cloud images avoids the complex preprocessing of the early stages of image processing. The local receptive field of a convolutional neural network means that each neuron need not perceive the whole image but only a local region, with the perceived information integrated in the deeper layers of the network to obtain global information about the image. Its weight-sharing strategy better matches the characteristics of biological neural networks, greatly reduces the number of weight parameters, and lowers the computational complexity of the whole image-processing pipeline. However, when cloud recognition is performed directly on the deep CNN features learned by a single convolutional neural network, the classification result is often not robust, because it depends too heavily on the quality of the CNN features extracted in a single pass. On the other hand, a natural way to overcome or alleviate this drawback is to train multiple deep neural networks and produce the final cloud recognition result by integrating the classifications of the multiple networks; this, however, significantly increases the computational overhead of the whole classification process, since training even a single deep network is already costly, and the computational cost of training several may be unacceptable.
Disclosure of Invention
Aiming at the heavy computation of the cloud recognition process and the non-robustness of cloud classification based on a single set of CNN features, the invention provides a classification method for ground-based meteorological cloud images based on cross-validated deep CNN feature integration.
The improvement specifically comprises the following two aspects:
1) How can techniques such as cross validation be applied to the resampling of deep CNN features to achieve efficient automatic classification of ground-based meteorological cloud images?
2) How can the accuracy and stability of cloud recognition for ground-based meteorological cloud images be improved without increasing the computational overhead?
In order to achieve the purpose, the invention adopts the following technical scheme:
The ground-based meteorological cloud image classification method based on cross-validated deep CNN feature integration first extracts deep CNN features of the ground-based meteorological cloud image with a single convolutional neural network model, then repeatedly resamples these CNN features via cross validation, and finally identifies the cloud type of the ground-based cloud image with a voting strategy over the repeated cross-validation resampling results.
With respect to the Rademacher complexity measure of deep neural network models, an upper bound on the Rademacher complexity of the proposed deep CNN feature integration method is derived theoretically, proving that the proposed method has smaller Rademacher complexity. The generalization error bound of the proposed method under Gaussian noise is also considered, proving that it has smaller generalization error and better noise stability.
Further, the method first extracts deep CNN features of the ground-based meteorological cloud image with a single convolutional neural network model, then repeatedly resamples the CNN features via cross validation, and finally identifies the cloud type of the ground-based cloud image with a voting strategy over the repeated cross-validation resampling results; it specifically comprises the following steps:
step 1, learning the convolution layer characteristics in a CNN model to obtain a training characteristic vector and a test characteristic vector;
step 2, equally dividing the training feature vectors into K groups and correspondingly dividing the test feature vectors into K groups; each subset in turn serves once as the validation set, with the remaining K-1 groups used as the training set;
step 3, training a multinomial logistic regression model on each of the K training sets to obtain K trained models;
step 4, validating each model on its corresponding validation set to obtain the classification results of the K models;
step 5, obtaining the final classification result by applying relative majority voting to the classification results obtained on all the training and validation sets:

H(x) = c_{\arg\max_{j} \sum_{i=1}^{K} h_i^{j}(x)}

where H(x) denotes the label predicted from the set of class labels {c_1, c_2, ..., c_m}, and h_i^{j}(x) denotes the output of learner h_i on class label c_j.
The label with the most votes is taken as the final classification result of image classification; if several labels tie for the highest vote count, one of them is chosen at random. From a statistical perspective, learning tasks often have a large hypothesis space, so a single learner may generalize poorly if a bad hypothesis is selected; combining multiple learners can, in theory, achieve generalization performance clearly superior to that of any single learner, giving the method high accuracy and strong generalization.
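The relative majority (plurality) voting rule above can be sketched as follows; the 0/1 vote encoding and the function name are illustrative assumptions, not from the patent:

```python
# Minimal sketch of the plurality voting rule H(x). Each learner casts a
# 0/1 vote h_i^j(x) for each class label; ties are broken uniformly at
# random, as the text specifies.
import numpy as np

def plurality_vote(votes):
    """votes: (K, m) array with votes[i, j] = h_i^j(x) for learner i, class j.
    Returns the index of the class with the most votes."""
    totals = votes.sum(axis=0)                   # total votes per class
    winners = np.flatnonzero(totals == totals.max())
    return int(np.random.choice(winners))        # random tie-breaking

# Example: 5 learners, 3 classes; class 1 collects 3 votes and wins.
votes = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 1, 0],
                  [0, 1, 0],
                  [0, 0, 1]])
print(plurality_vote(votes))  # -> 1
```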
Still further, in step 1 the convolutional-layer features of the CNN model are learned to obtain training feature vectors and test feature vectors; the specific steps are:
step 1.1, dividing a foundation meteorological cloud image data set into a training sample set and a testing sample set according to a ratio of 7: 3;
step 1.2, selecting a VGG16 model in the CNN model to start training, reserving parameters of the trained convolutional layer, and performing parameter fine tuning;
step 1.3, inputting a training sample, carrying out normalization processing on the training sample, and carrying out CNN model training;
step 1.4, inputting a test sample, and testing based on the trained CNN model;
step 1.5, using the trained CNN model to extract features from the training samples and the test samples respectively, i.e., retaining the flattened cloud image feature vector produced after the multilayer convolution operations.
Performing feature extraction with a convolutional neural network avoids the complex cloud-image preprocessing of the early stages of image processing; its local receptive fields and weight sharing greatly reduce the number of weight parameters and lower the computational complexity of the whole image-processing pipeline.
Further, the ground-based meteorological cloud image data set is defined as follows: a data set containing n ground-based meteorological cloud images is denoted D_n = {z_i, i = 1, ..., n}, where z_i is the i-th cloud image in D_n.
The classification method based on cross-validated deep CNN feature integration is in essence ensemble learning, and is therefore more robust than a single network. Second, because the deep network provides rich feature information, each sub-classification task can be solved with a subset of the features. In this way the method uses roughly the number of parameters of a single network and is thus more efficient than most existing ensemble learning approaches.
Therefore, the method overcomes both the non-robustness of cloud classification based on a single set of CNN features and the high computational overhead of integrating multiple deep convolutional neural networks, while ensuring high classification accuracy and noise stability.
Compared with the prior art, the invention has the following advantages:
1) the deep CNN features are extracted only based on a single convolutional neural network, a plurality of convolutional neural network models do not need to be trained, and the computational complexity of the whole process is greatly reduced;
2) the cloud type is identified by voting over multiple cross-validation resampling results of the CNN features, giving high recognition accuracy and stability; the image preprocessing, manual feature design, classifier selection, and execution of many separate steps of the traditional classification pipeline are avoided, realizing efficient, robust, and adaptive automatic classification of ground-based cloud images.
Drawings
FIG. 1 is a flow chart of classification of a ground-based weather cloud based on cross validation depth CNN feature integration;
FIG. 2 is a diagram of a VGG16 model in a CNN model;
FIG. 3 is a cross-validation graph of training and testing feature vectors.
Detailed Description
Example 1
In this embodiment, the ground-based meteorological cloud image classification method based on cross-validated deep CNN feature integration first extracts deep CNN features of the ground-based meteorological cloud image with a single convolutional neural network model, then repeatedly resamples the CNN features via cross validation, and finally identifies the cloud type of the ground-based cloud image with a voting strategy over the repeated cross-validation resampling results. Before introducing the specific scheme, some basic concepts and operations are introduced.
Data set: a data set containing n ground-based meteorological cloud images is denoted D_n = {z_i, i = 1, ..., n}, where z_i is the i-th cloud image in D_n;
Index set: the set of subscripts of the images z_i in D_n, denoted I = {1, 2, ..., n};
Convolution: a convolution kernel moves from the upper-left corner to the lower-right corner of the image; at each position, the products of the kernel entries and the corresponding image pixels are summed, so that one output value is computed per move;
Pooling: pooling simplifies the output of a convolutional layer by compressing the image, removing redundant information, and extracting important features, which also helps prevent overfitting; the most common choice in practice is max pooling;
Fully connected layer: usually placed at the end of the convolutional neural network; every neuron in one layer has a weighted connection to every neuron in the adjacent layer.
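A toy numeric illustration (not from the patent) of the convolution and max-pooling operations just defined, in plain NumPy:

```python
# Convolution: slide a kernel from top-left to bottom-right, summing the
# elementwise products at each position. Max pooling: keep the maximum of
# each 2x2 block, compressing the image.
import numpy as np

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])

h = image.shape[0] - kernel.shape[0] + 1
w = image.shape[1] - kernel.shape[1] + 1
conv = np.empty((h, w))
for i in range(h):
    for j in range(w):
        conv[i, j] = np.sum(image[i:i+2, j:j+2] * kernel)

# 2x2 max pooling with stride 2: each output entry is a block maximum.
pooled = image.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(conv.shape, pooled.tolist())  # (3, 3) [[5.0, 7.0], [13.0, 15.0]]
```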
The method comprises the following specific steps:
step 1, learning the convolution layer characteristics in the CNN model to obtain a training characteristic vector and a test characteristic vector, and the specific operation steps comprise:
step 1.1, dividing a foundation meteorological cloud image data set into a training sample set and a testing sample set according to a ratio of 7: 3;
step 1.2, selecting a VGG16 model in the CNN model to start training, reserving parameters of the trained convolutional layer, and performing parameter fine tuning;
step 1.3, inputting a training sample, carrying out normalization processing on the training sample, and carrying out CNN model training;
step 1.4, inputting a test sample, and testing based on the trained CNN model;
step 1.5, using the trained CNN model to extract features from the training samples and the test samples respectively, i.e., retaining the flattened cloud image feature vector produced after the multilayer convolution operations.
Step 2, equally dividing the training characteristic vectors into K groups, correspondingly equally dividing the testing characteristic vectors into K groups, then respectively making each subset data into a primary verification set, and taking the rest K-1 groups of subset data as training sets;
step 3, training a multinomial logistic regression model on each of the K training sets to obtain K trained models;
step 4, validating each model on its corresponding validation set to obtain the classification results of the K models;
step 5, obtaining the final classification result by applying relative majority voting to the classification results obtained on all the training and validation sets:

H(x) = c_{\arg\max_{j} \sum_{i=1}^{K} h_i^{j}(x)}

where H(x) denotes the label predicted from the set of class labels {c_1, c_2, ..., c_m}, and h_i^{j}(x) denotes the output of learner h_i on class label c_j.
The label with the most votes is taken as the final classification result of image classification; if several labels tie for the highest vote count, one of them is chosen at random.
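Steps 2 to 5 can be sketched as follows on synthetic data (the real inputs are the 4608-dimensional CNN feature vectors and 5 cloud classes; the dimensions, random data, and scikit-learn usage here are illustrative assumptions, not from the patent):

```python
# Sketch of K-fold resampling of feature vectors, per-fold multinomial
# logistic regression, and relative-majority voting over the K models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K, n_train, n_test, d, n_cls = 5, 100, 30, 20, 5
X_tr = rng.normal(size=(n_train, d))
y_tr = rng.integers(0, n_cls, n_train)
X_te = rng.normal(size=(n_test, d))

folds = np.array_split(np.arange(n_train), K)   # step 2: K equal groups
preds = []
for k in range(K):                              # steps 3-4: train on K-1 groups
    train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
    clf = LogisticRegression(max_iter=1000).fit(X_tr[train_idx], y_tr[train_idx])
    preds.append(clf.predict(X_te))

# Step 5: relative-majority vote across the K models' test predictions.
preds = np.stack(preds)                         # shape (K, n_test)
final = np.array([np.bincount(col, minlength=n_cls).argmax()
                  for col in preds.T])
print(final.shape)
```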
This example uses 550 ground-based cloud images of size 125 × 125 × 3 in 5 classes, of which 385 are training samples and 165 are test samples. Fig. 2 shows the VGG16 model in the CNN family, which is used for feature extraction.
Table 1 gives the VGG16 network structure: the convolution kernel size and stride of each layer, and the input and output sizes for the ground-based cloud image. After training the model for 300 epochs, the 4608-dimensional flattened training and test feature vectors produced after the 13th convolutional layer are extracted.
TABLE 1 VGG16 network architecture Table
[Table 1 is provided as an image in the original publication.]
The training and test feature vectors are subjected to 5-fold and 10-fold cross validation as well as blocked 3 × 2 and 5 × 2 cross validation. Taking 5-fold cross validation as an example (second block of Fig. 3), the training feature vectors are divided into 5 groups and the test feature vectors are correspondingly divided into 5 groups; each subset in turn serves once as the validation set, with the remaining 4 groups used as the training set.
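The blocked m × 2 cross validation used above repeats a 2-fold halving m times. The patent does not specify an implementation; as a rough, illustrative sketch (ignoring any additional "blocked" balancing constraints on how the m partitions overlap), the 5 × 2 train/validation splits can be generated with scikit-learn:

```python
# Generate m x 2 cross-validation splits: m independent random halvings
# of the data, each yielding two train/validation pairs.
import numpy as np
from sklearn.model_selection import KFold

n, m = 20, 5
splits = []
for r in range(m):                  # m independent 2-fold halvings
    kf = KFold(n_splits=2, shuffle=True, random_state=r)
    splits.extend(kf.split(np.arange(n)))
print(len(splits))                  # 5 x 2 = 10 train/validation pairs
```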
In the third block of Fig. 3, multinomial logistic regression yields the classification results of the 5 models on the ground-based cloud image; each blue circle represents one of the 5 classification results obtained from each training run. The validation set corresponding to each training set is then used to obtain the test classification result.
The final cloud-type recognition result for the ground-based cloud image is obtained by applying relative majority voting to these classification results.
The specific results are shown in Table 2. The first row of Table 2 gives the average accuracy on training and testing obtained by repeating the above method 100 times: column 1 is the cloud recognition accuracy of the ground-based meteorological cloud image obtained directly without cross validation, columns 2 and 3 are the classification accuracies obtained with 5-fold and 10-fold cross validation respectively, and columns 4 and 5 are those obtained with blocked 3 × 2 and 5 × 2 cross validation respectively. In addition, to strengthen the comparison of classification results before and after cross-validated CNN feature integration, the second row of Table 2 gives the corresponding results using a classification-tree weak classifier.
TABLE 2 accuracy of classification results
                     full    5CV     10CV    3×2CV   5×2CV
Softmax              0.933   0.949   0.936   0.961   0.963
Classification tree  0.819   0.874   0.819   0.952   0.952
The results show that the classification accuracy of ground-based cloud recognition obtained by voting over the repeated CNN-feature resampling results of cross validation is clearly higher than that obtained without cross validation. The test accuracy with blocked 5 × 2 cross validation is 96.3%, about 3 percentage points higher than the classification accuracy of the conventional CNN model without the proposed method; with the classification-tree classifier, the accuracy improves by about 14 percentage points. Compared with the traditional convolutional neural network, the method achieves higher classification accuracy and noise stability, lower computational complexity, and adaptive end-to-end automatic cloud recognition on the ground-based meteorological cloud image task.
Those skilled in the art will appreciate that the invention may be practiced without these specific details. Although illustrative embodiments of the invention have been described above to aid understanding, the invention is not limited to the scope of these embodiments; various changes apparent to those skilled in the art that fall within the spirit and scope of the invention as defined by the appended claims are protected.

Claims (4)

1. A ground-based meteorological cloud image classification method based on cross-validated deep CNN feature integration, characterized in that: the method first extracts deep CNN features of the ground-based meteorological cloud image with a single convolutional neural network model, then repeatedly resamples the CNN features via cross validation, and finally identifies the cloud type of the ground-based cloud image with a voting strategy over the repeated cross-validation resampling results.
2. The method for classifying ground-based meteorological cloud images based on cross-validated deep CNN feature integration according to claim 1, characterized in that extracting deep CNN features of the ground-based meteorological cloud image with a single convolutional neural network model, repeatedly resampling the CNN features via cross validation, and identifying the cloud type of the ground-based cloud image with a voting strategy over the repeated cross-validation resampling results specifically comprises the following steps:
step 1, learning the convolution layer characteristics in a CNN model to obtain a training characteristic vector and a test characteristic vector;
step 2, equally dividing the training feature vectors into K groups and correspondingly dividing the test feature vectors into K groups; each subset in turn serves once as the validation set, with the remaining K-1 groups used as the training set;
step 3, training a multinomial logistic regression model on each of the K training sets to obtain K trained models;
step 4, validating each model on its corresponding validation set to obtain the classification results of the K models;
step 5, obtaining the final classification result by applying relative majority voting to the classification results obtained on all the training and validation sets:

H(x) = c_{\arg\max_{j} \sum_{i=1}^{K} h_i^{j}(x)}

where H(x) denotes the label predicted from the set of class labels {c_1, c_2, ..., c_m}, and h_i^{j}(x) denotes the output of learner h_i on class label c_j.
The label with the most votes is taken as the final classification result of image classification; if several labels tie for the highest vote count, one of them is chosen at random.
3. The method for classifying ground-based meteorological cloud images based on cross-validated deep CNN feature integration according to claim 2, characterized in that step 1, learning the convolutional-layer features in the CNN model to obtain training feature vectors and test feature vectors, comprises the following specific steps:
step 1.1, dividing a foundation meteorological cloud image data set into a training sample set and a testing sample set according to a ratio of 7: 3;
step 1.2, selecting a VGG16 model in the CNN model to start training, reserving parameters of the trained convolutional layer, and performing parameter fine tuning;
step 1.3, inputting a training sample, carrying out normalization processing on the training sample, and carrying out CNN model training;
step 1.4, inputting a test sample, and testing based on the trained CNN model;
step 1.5, using the trained CNN model to extract features from the training samples and the test samples respectively, i.e., retaining the flattened cloud image feature vector produced after the multilayer convolution operations.
4. The method for classifying ground-based meteorological cloud images based on cross-validated deep CNN feature integration according to claim 3, characterized in that the ground-based meteorological cloud image data set is: a data set containing n ground-based meteorological cloud images, denoted D_n = {z_i, i = 1, ..., n}, where z_i is the i-th cloud image in D_n.
CN202010087138.5A 2020-02-11 2020-02-11 Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration Pending CN111310820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010087138.5A CN111310820A (en) 2020-02-11 2020-02-11 Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010087138.5A CN111310820A (en) 2020-02-11 2020-02-11 Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration

Publications (1)

Publication Number Publication Date
CN111310820A true CN111310820A (en) 2020-06-19

Family

ID=71146879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010087138.5A Pending CN111310820A (en) 2020-02-11 2020-02-11 Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration

Country Status (1)

Country Link
CN (1) CN111310820A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434554A (en) * 2020-10-16 2021-03-02 中科院成都信息技术股份有限公司 Heterogeneous reduction-based cloud image identification method and system
CN112508255A (en) * 2020-12-01 2021-03-16 北京科技大学 Photovoltaic output ultra-short-term prediction method and system based on multi-source heterogeneous data
CN113378739A (en) * 2021-06-19 2021-09-10 湖南省气象台 Foundation cloud target detection method based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392241A (en) * 2017-07-17 2017-11-24 北京邮电大学 A kind of image object sorting technique that sampling XGBoost is arranged based on weighting
CN108805029A (en) * 2018-05-08 2018-11-13 天津师范大学 A kind of ground cloud atlas recognition methods based on notable antithesis activated code
CN110073301A (en) * 2017-08-02 2019-07-30 强力物联网投资组合2016有限公司 The detection method and system under data collection environment in industrial Internet of Things with large data sets

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392241A (en) * 2017-07-17 2017-11-24 北京邮电大学 A kind of image object sorting technique that sampling XGBoost is arranged based on weighting
CN110073301A (en) * 2017-08-02 2019-07-30 强力物联网投资组合2016有限公司 The detection method and system under data collection environment in industrial Internet of Things with large data sets
CN108805029A (en) * 2018-05-08 2018-11-13 天津师范大学 A kind of ground cloud atlas recognition methods based on notable antithesis activated code

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MB5FF58FC86BDA8: "The essentials of ensemble learning: bagging, boosting, and the three combination strategies (averaging, voting, and learning/stacking)", online: HTTPS://BLOG.51CTO.COM/U_15077533/3919193 *
YU WANG等: "A Selection Criterion for the Optimal Resolution of Ground-Based Remote Sensing Cloud Images for Cloud Classification", 《 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 》 *
YU WANG等: "Credible Intervals for Precision and Recall Based on a K-Fold Cross-Validated Beta Distribution", 《NEURAL COMPUTATION》 *
CUI Shun et al.: "Classification of all-sky ground-based cloud images based on convolutional neural networks", Astronomical Research & Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434554A (en) * 2020-10-16 2021-03-02 中科院成都信息技术股份有限公司 Heterogeneous reduction-based cloud image identification method and system
CN112434554B (en) * 2020-10-16 2023-08-04 中科院成都信息技术股份有限公司 Cloud image recognition method and system based on heterogeneous reduction
CN112508255A (en) * 2020-12-01 2021-03-16 北京科技大学 Photovoltaic output ultra-short-term prediction method and system based on multi-source heterogeneous data
CN112508255B (en) * 2020-12-01 2021-09-07 北京科技大学 Photovoltaic output ultra-short-term prediction method and system based on multi-source heterogeneous data
CN113378739A (en) * 2021-06-19 2021-09-10 湖南省气象台 Foundation cloud target detection method based on deep learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200619