CN105550712B - Aurora image classification method based on optimization convolution autocoding network - Google Patents
- Publication number: CN105550712B (application CN201510976336.6A; also published as CN105550712A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
Abstract
The invention discloses an aurora image classification method based on an optimized convolutional auto-encoding network, mainly addressing the low classification accuracy of prior-art methods on aurora images. The implementation steps are: 1. compute the saliency map of each aurora image and extract training samples based on it; 2. apply whitening pre-processing to the training samples; 3. train the auto-encoding network AE; 4. use the trained network to compute the convolutional auto-encoding features of the aurora image; 5. apply average pooling to the convolutional auto-encoding features; 6. feed the pooled features into a softmax classifier to classify the aurora image. The invention enables automatic computer classification of four classes of aurora images with high classification accuracy, and can be used for image scene classification and target recognition.
Description
Technical field
The invention belongs to the technical field of image processing and in particular relates to a method for classifying aurora images, which can be used for image scene classification and target recognition.
Background art
Aurorae are the most intuitive ionospheric traces of the various magnetospheric storm processes. The all-sky imaging system (All-sky Camera) at the Chinese Arctic Yellow River Station continuously observes aurorae at three typical wavelengths, 427.8 nm, 557.7 nm and 630.0 nm, generating tens of thousands of aurora images; the data volume is enormous. Reasonable and effective classification of aurora images is therefore crucial for studying the various auroral phenomena and their relationship with magnetospheric storm processes.
Early aurora classification relied on manual labelling based on visual inspection. However, with millions of aurora images produced every year, manual labelling no longer meets the requirements of objective classification of large-scale data. Image processing techniques were first introduced into aurora image classification in 2004 (Syrjäsuo M.T. and Donovan E.F., "Diurnal auroral occurrence statistics obtained via machine vision", Annales Geophysicae, 22(4): 1103-1113, 2004). In 2007, Wang et al. (Wang Qian, Liang Jimin, Hu ZeJun, Hu HaiHong, Zhao Heng, Hu HongQiao, Gao Xinbo, Yang Huigen, "Spatial texture based automatic classification of dayside aurora in all-sky images", Journal of Atmospheric and Solar-Terrestrial Physics, 2010, 72(5): 498-508) extracted gray-level features of aurora images with principal component analysis (PCA) and proposed a representation-based aurora classification method, making progress on corona aurora classification. In 2008, Gao et al. (L. Gao, X.B. Gao and J.M. Liang, "Dayside corona aurora detection based on sample selection and AdaBoost algorithm", J. Image Graph., 2010, 15(1): 116-121) proposed a Gabor-transform-based aurora image classification method that extracts image features with local Gabor filters, reducing feature redundancy while preserving computational accuracy and achieving good classification results. In 2009, Fu et al. (Fu Ru, Jie Li and X.B. Gao, "Automatic aurora images classification algorithm based on separated texture", Proc. Int. Conf. Robotics and Biomimetics, 2009: 1331-1335) combined morphological component analysis (MCA) with aurora image processing, extracting features from the aurora texture sub-images obtained after MCA separation; applied to the two-class arc/corona problem, this improved the accuracy of arc-corona aurora classification. Subsequent related work includes: Han et al. (Bing Han, Xiaojing Zhao, Dacheng Tao, et al., "Dayside aurora classification via BIFs-based sparse representation using manifold learning", International Journal of Computer Mathematics, published online 12 Nov 2013), which classifies aurorae using BIFs features and C-means clustering; Yang et al. (Yang Xi, Li Jie, Han Bing, Gao Xinbo, "Wavelet hierarchical model for aurora images classification", Journal of Xidian University, 2013, 40(2): 18-24), which represents aurora image features with a hierarchical wavelet transform and achieves higher classification accuracy; and, in 2013, Han et al. (Han B, Yang C, Gao XB, "Aurora image classification based on LDA combining with saliency information", Ruan Jian Xue Bao/Journal of Software, 2013, 24(11): 2758-2766), which introduces the latent Dirichlet allocation (LDA) model combined with image saliency information, further improving aurora classification accuracy.
However, existing aurora image processing algorithms are all based on shallow features, which greatly limits both their representational power and their classification accuracy. The article by A. Krizhevsky, I. Sutskever and G. Hinton, "ImageNet classification with deep convolutional neural networks" (NIPS, 2012), introduced deep convolutional neural networks, whose outstanding image feature extraction ability has attracted sustained attention in academia; their great potential for aurora image feature extraction deserves further study.
Yet directly applying deep convolutional networks to aurora image feature extraction still faces the following problems. First, aurora images contain large completely black regions carrying no information at all, and existing deep learning algorithms have no way of handling this redundancy. Second, because the number of training samples is limited, the classification accuracy of existing deep convolutional network techniques on aurora images is not high. Third, training a deep convolutional network is very time-consuming.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art by proposing an aurora image classification method based on an optimized convolutional auto-encoding network, so that network training completes quickly and classification accuracy improves.
The technical solution that realizes the above object of the invention is: perform saliency analysis on the aurora image; extract, based on the aurora saliency map, the training samples used to train the auto-encoding network AE; then extract the convolutional auto-encoding features of the aurora image with the trained network; reduce these features by average pooling; and finally classify the aurora image with a softmax classifier. The implementation steps are as follows:
(1) Input aurora images and extract a total of 100,000 training pixel blocks according to the aurora image saliency maps, forming the training pixel-block set P_{8×8×100000};
(2) Apply whitening pre-processing to the training pixel-block set P_{8×8×100000} to obtain the whitened training sample set x_PCAwhite;
(3) Train the auto-encoding network AE with the whitened training sample set x_PCAwhite:
3a) Write the training sample set as x_PCAwhite = {xp^(1), xp^(2), ..., xp^(i), ..., xp^(m)}, where xp^(i) ∈ R^64 is the i-th training sample, i = 1, 2, ..., m, and m is the number of training samples; from the training samples compute the average activation of the j-th hidden neuron of the auto-encoding network AE:
ρ̂_j = (1/m) Σ_{i=1}^{m} a_{W,b}^(j)(xp^(i)),
where j = 1, 2, ..., n, n is the number of hidden nodes, a_{W,b}^(j)(xp^(i)) is the activation of the j-th hidden neuron of AE when the input is xp^(i), and (W, b) = (W^(1), b^(1), W^(2), b^(2)) are the AE parameters: W^(1) are the weights from the input layer to the hidden nodes, W^(2) the weights from the hidden nodes to the output nodes, b^(1) the biases of the hidden nodes and b^(2) the biases of the output nodes;
3b) Following the back-propagation (BP) training principle, construct a sparse cost function J_sparse(W, b) from the AE parameters (W, b) and the hidden-layer average activations ρ̂_j:
J_sparse(W, b) = (1/m) Σ_{i=1}^{m} ½‖h_{W,b}(xp^(i)) − xp^(i)‖² + (λ₁/2)‖W^(1)‖² + (λ₂/2)‖W^(2)‖² + Σ_{j=1}^{n} KL(ρ‖ρ̂_j),
where h_{W,b}(·) is the nonlinear AE function, KL(ρ‖ρ̂_j) is the relative entropy between two Bernoulli random variables with means ρ and ρ̂_j, λ₁ and λ₂ are the weight-decay parameters of the hidden and output layers, and ρ is the sparsity coefficient, a constant smaller than 0.1;
3c) Minimize the cost function J_sparse(W, b) to obtain the optimized AE parameters (W_opt, b_opt):
(W_opt, b_opt) = argmin_{(W,b)} J_sparse(W, b),
where W_opt^(1) contains the optimized weights from the q-th input node to the j-th hidden node, W_opt^(2) the optimized weights from the j-th hidden node to the k-th output node, b_opt^(1) the optimized biases of the hidden nodes and b_opt^(2) the optimized biases of the output nodes; q = k = 1, 2, ..., 64, where 64 is the number of input nodes and the number of output nodes equals the number of input nodes; j = 1, 2, ..., n, where n is the number of hidden nodes;
(4) Using the optimized weights W_opt^(1) from the input nodes to the hidden nodes, compute the convolutional auto-encoding features Fr of the aurora image I;
(5) Apply average pooling to the convolutional auto-encoding features Fr of aurora image I, i.e. divide each feature map of Fr evenly into an 11 × 11 grid of blocks, merge each block into its mean value, and recombine these means into the pooled feature F ∈ R^{11×11×n};
(6) Feed the pooled feature F of the aurora image into a softmax classifier; the resulting class label is the class of the aurora image.
Compared with the prior art, the invention has the following advantages:
First, it selects training samples using the image saliency map, effectively removing invalid training samples; this raises network training efficiency and at the same time improves the model's classification accuracy on aurora images.
Second, it pre-trains the convolution filters with the auto-encoding network AE and builds the convolutional network from them, effectively overcoming the low classification accuracy caused by the shortage of aurora training samples.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the invention;
Fig. 2 shows an aurora image saliency map, its binarization, and the extracted training patches;
Fig. 3 shows part of the convolution filters obtained by training the auto-encoding network AE;
Fig. 4 compares classification accuracy and classification time for different numbers of AE hidden nodes;
Fig. 5 shows the influence of the AE hidden-layer sparsity on classification accuracy.
Specific embodiment
The implementation steps and technical effects of the invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the steps of the invention are as follows:
Step 1: input an aurora image and extract the training pixel-block set P_{8×8×100000}.
1.1) Input an aurora image as shown in Fig. 2(a). For each pixel I(x, y) of the image, compute its brightness feature L(x, y), gradient feature H(x, y) and binarized edge feature B(x, y), and fuse the three features to obtain the saliency value of pixel I(x, y):
S(x, y) = L(x, y) + H(x, y) + B(x, y).
The saliency values S(x, y) of all pixels form the aurora image saliency map S shown in Fig. 2(b).
1.2) Binarize the saliency map S to obtain the binary saliency map S₁ shown in Fig. 2(c).
1.3) Randomly select an 8 × 8 training pixel block ps_{8×8} on the binary saliency map S₁ and examine its values: if the proportion of 1-valued pixels in ps_{8×8} is greater than 0.8, extract the pixel block p_{8×8} of the original image I at that position; if the proportion is less than or equal to 0.8, discard the block.
1.4) Repeat 1.3) until a total of 100,000 training pixel blocks have been extracted, forming the training pixel-block set P_{8×8×100000}.
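The patch-extraction procedure of step 1 can be sketched in NumPy. The patent does not fully specify the brightness, gradient and edge features or the binarization threshold, so the feature normalizations and the mean-based thresholds below are illustrative assumptions; only the fusion rule S = L + H + B and the 0.8 acceptance criterion come from the text.

```python
import numpy as np

def saliency_map(img):
    # Saliency S = L + H + B as in step 1.1. Exact feature definitions and
    # normalizations are not given in the patent; scaling each to [0, 1]
    # and thresholding at the mean are assumptions of this sketch.
    img = img.astype(float)
    L = img / (img.max() + 1e-8)                # brightness feature
    gy, gx = np.gradient(img)
    H = np.hypot(gx, gy)
    H = H / (H.max() + 1e-8)                    # gradient feature
    B = (H > H.mean()).astype(float)            # binarized edge feature (assumed threshold)
    return L + H + B

def extract_patches(img, n_patches=1000, size=8, thresh=0.8, seed=0):
    # Steps 1.2-1.4: binarize the saliency map, then keep a random 8x8 block
    # only if more than 80% of its binary-saliency pixels equal 1.
    rng = np.random.default_rng(seed)
    S = saliency_map(img)
    S1 = (S > S.mean()).astype(np.uint8)        # binarization threshold assumed to be the mean
    h, w = img.shape
    patches = []
    for _ in range(1000 * n_patches):           # trial cap to avoid an endless loop
        y = int(rng.integers(0, h - size + 1))
        x = int(rng.integers(0, w - size + 1))
        if S1[y:y+size, x:x+size].mean() > thresh:
            patches.append(img[y:y+size, x:x+size].astype(float))
            if len(patches) == n_patches:
                break
    return np.stack(patches)                    # (n_patches, 8, 8)
```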
Step 2: apply whitening pre-processing to the training pixel-block set P_{8×8×100000} to obtain the whitened training sample set x_Pw.
Whitening pre-processing techniques include PCA whitening, ZCA whitening and spectral whitening. This example uses the ZCA whitening method, with the following concrete steps:
2.1) Reshape the training pixel-block set P_{8×8×100000} into the matrix x ∈ R^{64×100000};
2.2) Compute the covariance matrix of x:
Σ = (1/m) Σ_{i=1}^{m} x^(i) (x^(i))ᵀ,
where m = 100000 is the number of training samples and x^(i) is the i-th column of matrix x;
2.3) Apply SVD to the matrix: x = UφV, obtaining the left basis matrix U and right basis matrix V, and express x in the directions of U as x_rot = Uᵀx;
2.4) From 2.2) and 2.3), compute the whitened training sample set
x_Pw = U diag(1/√(λ_j + ε)) Uᵀ x,
where λ_j are the eigenvalues of the covariance matrix and ε is a small non-zero number, taken as 10⁻⁵.
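The ZCA whitening of step 2 can be sketched as follows; the mean-centering before computing the covariance is a standard step not stated explicitly in the patent.

```python
import numpy as np

def zca_whiten(P, eps=1e-5):
    # ZCA whitening of 8x8 patches, following step 2. P has shape (m, 8, 8).
    m = P.shape[0]
    x = P.reshape(m, -1).T.astype(float)        # 64 x m matrix, step 2.1
    x = x - x.mean(axis=1, keepdims=True)       # mean-centering (standard, assumed here)
    sigma = (x @ x.T) / m                       # covariance matrix, step 2.2
    U, s, _ = np.linalg.svd(sigma)              # eigen-decomposition via SVD, step 2.3
    # ZCA transform, step 2.4: rotate, rescale by 1/sqrt(eigenvalue + eps), rotate back
    x_pw = U @ np.diag(1.0 / np.sqrt(s + eps)) @ U.T @ x
    return x_pw                                 # 64 x m whitened samples
```

After whitening, the empirical covariance of the returned samples is close to the identity matrix.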
Step 3: train the auto-encoder with the training sample set x_Pw.
3.1) Write the training sample set as x_Pw = {xp^(1), xp^(2), ..., xp^(i), ..., xp^(m)}, where xp^(i) ∈ R^64 is the i-th training sample and m is the number of training samples. From the training samples, compute the average activation of the j-th hidden neuron of the auto-encoding network AE:
ρ̂_j = (1/m) Σ_{i=1}^{m} a_{W,b}^(j)(xp^(i)),
where j = 1, 2, ..., n, n is the number of hidden nodes, a_{W,b}^(j)(xp^(i)) is the activation of the j-th hidden neuron of AE when the input is xp^(i), and (W, b) = (W^(1), b^(1), W^(2), b^(2)) are the AE parameters: W^(1) are the weights from the input layer to the hidden nodes, W^(2) the weights from the hidden nodes to the output nodes, b^(1) the biases of the hidden nodes and b^(2) the biases of the output nodes.
3.2) Following the back-propagation (BP) training principle, construct the sparse cost function J_sparse(W, b) from the AE parameters (W, b) and the hidden-layer average activations ρ̂_j:
J_sparse(W, b) = (1/m) Σ_{i=1}^{m} ½‖h_{W,b}(xp^(i)) − xp^(i)‖² + (λ₁/2)‖W^(1)‖² + (λ₂/2)‖W^(2)‖² + Σ_{j=1}^{n} KL(ρ‖ρ̂_j),
where h_{W,b}(·) is the nonlinear AE function, KL(ρ‖ρ̂_j) is the relative entropy between two Bernoulli random variables with means ρ and ρ̂_j, λ₁ and λ₂ are the weight-decay parameters of the hidden and output layers, and ρ is the sparsity coefficient, a constant smaller than 0.1.
3.3) Minimize the cost function J_sparse(W, b) to obtain the optimized network parameters (W_opt, b_opt), where W_opt^(1) contains the optimized weights from the q-th input node to the j-th hidden node, W_opt^(2) the optimized weights from the j-th hidden node to the k-th output node, b_opt^(1) the optimized biases of the hidden nodes and b_opt^(2) the optimized biases of the output nodes; q = k = 1, 2, ..., 64, where 64 is the number of input nodes and the number of output nodes equals the number of input nodes; j = 1, 2, ..., n, where n is the number of hidden nodes.
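A sketch of the sparse cost function J_sparse of step 3.2, for a sigmoid auto-encoder. The weight-decay values λ₁ and λ₂ below are illustrative placeholders (the patent fixes only ρ < 0.1); in practice the parameters (W, b) would then be optimized by back-propagation, e.g. with gradient descent or L-BFGS.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_cost(W1, b1, W2, b2, X, rho=0.03, lam1=1e-4, lam2=1e-4):
    # Sparse auto-encoder cost of step 3.2 for inputs X (64 x m).
    # lam1, lam2 and rho values here are illustrative, not from the patent.
    m = X.shape[1]
    A1 = sigmoid(W1 @ X + b1)                    # hidden activations, n x m
    H = sigmoid(W2 @ A1 + b2)                    # reconstruction h_{W,b}(x), 64 x m
    rho_hat = A1.mean(axis=1)                    # average activation of each hidden unit
    recon = 0.5 * np.sum((H - X) ** 2) / m       # reconstruction term
    decay = 0.5 * lam1 * np.sum(W1 ** 2) + 0.5 * lam2 * np.sum(W2 ** 2)
    kl = np.sum(rho * np.log(rho / rho_hat)      # KL sparsity penalty between
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))  # Bernoulli(rho), Bernoulli(rho_hat)
    return recon + decay + kl
```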
Step 4: compute the convolutional auto-encoding features Fr of aurora image I.
4.1) From the optimized weights W_opt^(1) of the q-th input node to the j-th hidden node, take the training pixel-block feature pf_j corresponding to the j-th hidden node.
4.2) Reshape pf_j into the training pixel-block feature matrix f_j ∈ R^{8×8}.
4.3) Convolve the aurora image I with the training pixel-block feature matrix f_j to obtain the j-th convolutional auto-encoding feature of I:
Fr_j = I * f_j,
where I ∈ R^{128×128} and Fr_j ∈ R^{121×121}.
4.4) Concatenate the n convolutional auto-encoding features Fr_j to obtain the convolutional auto-encoding features of aurora image I:
Fr ∈ R^{121×121×n}.
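Step 4 can be sketched directly: each row of the optimized input-to-hidden weight matrix is reshaped into an 8 × 8 filter and slid over the 128 × 128 image in "valid" mode, giving 121 × 121 feature maps. Whether the patent's convolution flips the kernel is not specified; plain correlation is used here.

```python
import numpy as np

def valid_conv2d(img, kernel):
    # "Valid"-mode 2-D correlation: a 128x128 image with an 8x8 filter
    # gives a 121x121 map, matching step 4.3.
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(img[y:y+kh, x:x+kw] * kernel)
    return out

def conv_features(img, W1_opt):
    # Step 4: each row of the optimized input-to-hidden weights W1_opt (n x 64)
    # is reshaped into an 8x8 filter f_j and convolved with the image.
    feats = [valid_conv2d(img, w.reshape(8, 8)) for w in W1_opt]
    return np.stack(feats, axis=-1)              # 121 x 121 x n
```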
Step 5: apply average pooling to the convolutional auto-encoding features Fr of aurora image I, i.e. divide each 121 × 121 feature map of Fr evenly into an 11 × 11 grid of 11 × 11 blocks, merge each block into its mean value, and recombine these means into the pooled feature F ∈ R^{11×11×n}.
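The average pooling of step 5 is a straightforward block-mean reduction:

```python
import numpy as np

def average_pool(Fr, block=11):
    # Step 5: each 121x121 map is split into an 11x11 grid of 11x11 blocks,
    # and every block is replaced by its mean value.
    h, w, n = Fr.shape
    gh, gw = h // block, w // block
    F = Fr[:gh*block, :gw*block, :].reshape(gh, block, gw, block, n).mean(axis=(1, 3))
    return F                                     # 11 x 11 x n
```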
Step 6: feed the pooled feature F of aurora image I into a softmax classifier; the resulting class label is the class of aurora image I.
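A minimal sketch of the softmax classification of step 6. The classifier weights W and biases b are assumed to have been learned beforehand on labelled aurora images (the softmax training procedure is not detailed in the patent); 4 is the number of aurora classes.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)         # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_predict(F_batch, W, b):
    # Step 6: pooled features (batch x 11 x 11 x n) are flattened and scored
    # by a linear softmax layer with weights W (d x 4) and biases b (4,).
    X = F_batch.reshape(F_batch.shape[0], -1)    # flatten 11x11xn -> d
    probs = softmax(X @ W + b)
    return probs.argmax(axis=1)                  # predicted class label per image
```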
The effect of the invention is further illustrated by the following simulation experiments.
Experiment 1: setting the SCAE model parameters
Experimental conditions: the 3200 aurora images used in the experiments come from the Chinese Arctic Yellow River Station; the database contains 800 images of each of four classes: multiple-arc, radial corona, hot-spot corona and drapery corona.
Experiment content 1: set different numbers n of AE hidden nodes and classify the aurora images with the present method; the results are shown in Fig. 3, where Fig. 3(a) gives the classification accuracy and Fig. 3(b) the training time.
Experiment content 2: set different AE hidden-layer sparsity values ρ and classify the aurora images with the present method; the classification accuracy is shown in Fig. 4.
As Fig. 3 shows, the classification accuracy on aurora images is highest when the number of hidden nodes is n = 400; the larger n is, the longer the training time.
As Fig. 4 shows, the classification accuracy on aurora images is highest when the hidden-layer sparsity of the AE is 0.03.
Experiment 2: comparison of the classification accuracy of different models on aurora images.
Experimental conditions: this experiment uses the 3200 aurora images of Experiment 1.
Experiment content: classify the aurora images with the existing Le-net5 method, the CAE method and the proposed S-CAE method; the classification accuracies are shown in Fig. 5.
As Fig. 5 shows, the proposed S-CAE method effectively improves the classification accuracy on aurora images.
Claims (4)
1. An aurora image classification method based on an optimized convolutional auto-encoding network, comprising the following steps:
(1) input an aurora image I, compute its saliency map, and extract training pixel blocks ps_{8×8} based on the saliency map, forming the training pixel-block set P_{8×8×100000};
(2) apply whitening pre-processing to the training pixel-block set P_{8×8×100000} to obtain the whitened training sample set x_Pw;
(3) train the auto-encoding network AE with the whitened training sample set x_Pw:
3a) write the training sample set as x_Pw = {xp^(1), ..., xp^(i), ..., xp^(m)}, where xp^(i) ∈ R^64 is the i-th training sample, i = 1, 2, ..., m, and m is the number of training samples; from the training samples compute the average activation of the j-th hidden neuron of the AE:
ρ̂_j = (1/m) Σ_{i=1}^{m} a_{W,b}^(j)(xp^(i)),
where j = 1, 2, ..., n, n is the number of hidden nodes, a_{W,b}^(j)(xp^(i)) is the activation of the j-th hidden neuron of AE when the input is xp^(i), and (W, b) = (W^(1), b^(1), W^(2), b^(2)) are the AE parameters: W^(1) are the weights from the input layer to the hidden nodes, W^(2) the weights from the hidden nodes to the output nodes, b^(1) the biases of the hidden nodes and b^(2) the biases of the output nodes;
3b) following the back-propagation (BP) training principle, construct a sparse cost function J_sparse(W, b) from the AE parameters (W, b) and the hidden-layer average activations ρ̂_j:
J_sparse(W, b) = (1/m) Σ_{i=1}^{m} ½‖h_{W,b}(xp^(i)) − xp^(i)‖² + (λ₁/2)‖W^(1)‖² + (λ₂/2)‖W^(2)‖² + Σ_{j=1}^{n} KL(ρ‖ρ̂_j),
where h_{W,b}(·) is the nonlinear AE function, KL(ρ‖ρ̂_j) is the relative entropy between two Bernoulli random variables with means ρ and ρ̂_j, λ₁ and λ₂ are the weight-decay parameters of the hidden and output layers, and ρ is the sparsity coefficient, a constant smaller than 0.1;
3c) minimize the cost function J_sparse(W, b) to obtain the optimized AE parameters (W_opt, b_opt):
(W_opt, b_opt) = argmin_{(W,b)} J_sparse(W, b),
where W_opt^(1) contains the optimized weights from the q-th input node to the j-th hidden node, W_opt^(2) the optimized weights from the j-th hidden node to the k-th output node, b_opt^(1) the optimized biases of the hidden nodes and b_opt^(2) the optimized biases of the output nodes; q = k = 1, 2, ..., 64, q and k respectively indexing the input and output nodes, the number of output nodes being equal to the number of input nodes, 64; j = 1, 2, ..., n, n being the number of hidden nodes;
(4) using the optimized weights W_opt^(1) from the input nodes to the hidden nodes, compute the convolutional auto-encoding features Fr of aurora image I;
(5) apply average pooling to the convolutional auto-encoding features Fr of aurora image I, i.e. divide each feature map of Fr evenly into an 11 × 11 grid of blocks, merge each block into its mean value, and recombine these means into the pooled feature F ∈ R^{11×11×n};
(6) feed the pooled feature F of aurora image I into a softmax classifier; the resulting class label is the class of the aurora image.
2. The method according to claim 1, wherein computing the aurora image saliency map and extracting the training pixel blocks ps_{8×8} in step (1) is carried out as follows:
1a) for each pixel I(x, y) of an input aurora image I, compute its brightness feature L(x, y), gradient feature H(x, y) and binarized edge feature B(x, y), and fuse the three features to obtain the saliency value of pixel I(x, y): S(x, y) = L(x, y) + H(x, y) + B(x, y); the saliency values S(x, y) of all pixels form the aurora image saliency map S;
1b) binarize the saliency map S to obtain the binary saliency map S₁;
1c) randomly select an 8 × 8 training pixel block ps_{8×8} on the binary saliency map S₁ and examine its values: if the proportion of 1-valued pixels in ps_{8×8} is greater than 0.8, extract the pixel block p_{8×8} of aurora image I at that position; if the proportion is less than or equal to 0.8, discard the block.
3. The method according to claim 1, wherein the whitening pre-processing of the training pixel-block set P_{8×8×100000} in step (2) is carried out as follows:
2a) reshape the training pixel-block set P_{8×8×100000} into the matrix x ∈ R^{64×100000};
2b) compute the covariance matrix of x:
Σ = (1/m) Σ_{i=1}^{m} x^(i) (x^(i))ᵀ,
where i = 1, 2, ..., m, m = 100000 is the number of training samples and x^(i) is the i-th column of matrix x;
2c) apply SVD to the matrix: x = UφV, obtaining the left basis matrix U and right basis matrix V, and express x in the directions of U as x_rot = Uᵀx;
2d) from 2b) and 2c), obtain the whitened training sample set x_Pw, where ε is a small non-zero number, taken as 10⁻⁵.
4. The method according to claim 1, wherein computing the convolutional auto-encoding features Fr of aurora image I in step (4) is carried out as follows:
4a) from the optimized weights W_opt^(1) of the q-th input node to the j-th hidden node, take the training pixel-block feature pf_j corresponding to the j-th hidden node;
4b) reshape pf_j into the training pixel-block feature matrix f_j ∈ R^{8×8};
4c) convolve aurora image I with the training pixel-block feature matrix f_j to obtain the j-th convolutional auto-encoding feature of I:
Fr_j = I * f_j,
where I ∈ R^{128×128} and Fr_j ∈ R^{121×121};
4d) concatenate the n convolutional auto-encoding features Fr_j to obtain the convolutional auto-encoding features of aurora image I:
Fr ∈ R^{121×121×n}.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510976336.6A CN105550712B (en) | 2015-12-23 | 2015-12-23 | Aurora image classification method based on optimization convolution autocoding network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105550712A CN105550712A (en) | 2016-05-04 |
CN105550712B true CN105550712B (en) | 2019-01-08 |
Family
ID=55829895
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510976336.6A Active CN105550712B (en) | 2015-12-23 | 2015-12-23 | Aurora image classification method based on optimization convolution autocoding network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105550712B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106228130B (en) * | 2016-07-19 | 2019-09-10 | 武汉大学 | Remote sensing image cloud detection method of optic based on fuzzy autoencoder network |
CN107045722B (en) * | 2017-03-27 | 2019-07-30 | 西安电子科技大学 | Merge the video signal process method of static information and multidate information |
CN107832718B (en) * | 2017-11-13 | 2020-06-05 | 重庆工商大学 | Finger vein anti-counterfeiting identification method and system based on self-encoder |
CN113111688B (en) * | 2020-01-13 | 2024-03-08 | 中国科学院国家空间科学中心 | All-sky throat area aurora identification method and system |
CN113128542B (en) * | 2020-01-15 | 2024-04-30 | 中国科学院国家空间科学中心 | All-sky aurora image classification method and system |
CN113642676B (en) * | 2021-10-12 | 2022-02-22 | 华北电力大学 | Regional power grid load prediction method and device based on heterogeneous meteorological data fusion |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632166A (en) * | 2013-12-04 | 2014-03-12 | 西安电子科技大学 | Aurora image classification method based on latent theme combining with saliency information |
CN104156736A (en) * | 2014-09-05 | 2014-11-19 | 西安电子科技大学 | Polarized SAR image classification method on basis of SAE and IDL |
CN104462494A (en) * | 2014-12-22 | 2015-03-25 | 武汉大学 | Remote sensing image retrieval method and system based on non-supervision characteristic learning |
Also Published As
Publication number | Publication date |
---|---|
CN105550712A (en) | 2016-05-04 |
Legal Events

Code | Title
---|---
C06, PB01 | Publication
C10, SE01 | Entry into substantive examination
GR01 | Patent grant