CN105550712A - Optimized convolution automatic encoding network-based auroral image sorting method - Google Patents


Info

Publication number
CN105550712A
CN105550712A (application CN201510976336.6A; granted as CN105550712B)
Authority
CN
China
Prior art keywords
represent
node
hidden layer
pixels
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510976336.6A
Other languages
Chinese (zh)
Other versions
CN105550712B (en)
Inventor
韩冰
胡泽骏
宋亚婷
高新波
胡红桥
贾中华
褚福跃
李洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
POLAR RESEARCH INSTITUTE OF CHINA
Xidian University
Original Assignee
POLAR RESEARCH INSTITUTE OF CHINA
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by POLAR RESEARCH INSTITUTE OF CHINA, Xidian University filed Critical POLAR RESEARCH INSTITUTE OF CHINA
Priority to CN201510976336.6A priority Critical patent/CN105550712B/en
Publication of CN105550712A publication Critical patent/CN105550712A/en
Application granted granted Critical
Publication of CN105550712B publication Critical patent/CN105550712B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an aurora image classification method based on an optimized convolutional auto-encoding network, mainly aiming to solve the problem that existing techniques achieve relatively low aurora image classification accuracy. The method is realized through the following steps: 1) compute a saliency map of the aurora images and extract training samples based on the saliency map; 2) apply whitening pretreatment to the training samples; 3) train an auto-encoding network AE; 4) compute the convolutional auto-encoding features of the aurora images with the trained auto-encoding network; 5) apply mean pooling to the convolutional auto-encoding features; and 6) input the pooled convolutional auto-encoding features into a softmax classifier to classify the aurora images.

Description

Aurora image classification method based on an optimized convolutional auto-encoding network
Technical field
The invention belongs to the technical field of image processing and further relates to a classification technique for aurora images, which can be used for image scene classification and target recognition.
Background technology
Aurora are the most intuitive ionospheric traces of various magnetospheric processes. The all-sky imaging system (All-sky Camera) of the Chinese Arctic Yellow River Station continuously observes aurora at three typical wavelengths, 427.8 nm, 557.7 nm and 630.0 nm, producing tens of thousands of aurora images; the data volume is huge. Reasonable and effective aurora image classification is particularly important for studying various aurora phenomena and their relation to magnetospheric processes.
Early aurora classification research was based on visual inspection, with labeling and classification performed manually. However, since millions of aurora images are produced every year, manual labeling no longer meets the requirement of classifying large-scale data objectively. Not until 2004 did the paper "Syrjäsuo M. T. and Donovan E. F. Diurnal auroral occurrence statistics obtained via machine vision. Annales Geophysicae, 22(4): 1103-1113, 2004" introduce image processing techniques into aurora image classification. In 2007, Wang et al., in "Wang Qian, Liang Jimin, Hu ZeJun, Hu HaiHong, Zhao Heng, Hu HongQiao, Gao Xinbo, Yang Huigen. Spatial texture based automatic classification of dayside aurora in all-sky images. Journal of Atmospheric and Solar-Terrestrial Physics, 2010, 72(5): 498-508", used principal component analysis (PCA) to extract gray-level features of aurora images and proposed a representation-based aurora classification method, achieving some progress in corona aurora classification research. In 2008, Gao et al. published "L. Gao, X. B. Gao, and J. M. Liang. Dayside corona aurora detection based on sample selection and AdaBoost algorithm. J. Image Graph, 2010, 15(1): 116-121", proposing an aurora image classification method based on the Gabor transform; local Gabor filters are employed to extract image features, reducing feature redundancy while guaranteeing computational accuracy, and achieving good classification results. In 2009, Fu et al. combined morphological component analysis (MCA) with aurora image processing in "Fu Ru, Jie Li and X. B. Gao. Automatic aurora images classification algorithm based on separated texture. Proc. Int. Conf. Robotics and Biomimetics, 2009: 1331-1335", extracting features from the aurora texture subimages obtained after MCA separation for the classification of arc and corona aurora images, improving the accuracy of arc/corona aurora classification.
Follow-up related studies include the following. Han et al. proposed aurora classification based on BIFs features and C-means clustering in "Bing Han, Xiaojing Zhao, Dacheng Tao, et al. Dayside aurora classification via BIFs-based sparse representation using manifold learning. International Journal of Computer Mathematics, published online 12 Nov 2013". Yang et al. proposed a multi-level wavelet transform representation of aurora image features in "Yang Xi, Li Jie, Han Bing, Gao Xinbo. Wavelet hierarchical model for aurora images classification. Journal of Xidian University, 2013, 40(2): 18-24", achieving higher classification accuracy. In 2013, Han et al. introduced the latent Dirichlet allocation model LDA in "Han B, Yang C, Gao X B. Aurora image classification based on LDA combining with saliency information. Ruan Jian Xue Bao/Journal of Software, 2013, 24(11): 2758-2766", combined with image saliency information, further improving the classification accuracy of aurora images.
However, existing aurora image processing algorithms are all based on shallow features, so both feature representation ability and classification accuracy are greatly limited. The article "A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012" proposed convolutional neural networks, whose outstanding image feature extraction ability has attracted sustained attention in academia; their great potential for aurora image feature extraction deserves further study.
However, directly applying deep convolutional networks to aurora image feature extraction still faces the following problems. First, aurora images contain many completely black regions without any information, and existing deep learning algorithms have no mechanism for handling this redundant portion. Second, owing to the limited number of training samples, existing deep convolutional network techniques do not achieve high classification accuracy on aurora images. Third, training a deep convolutional network is very time-consuming.
Summary of the invention
The object of the invention is to address the above deficiencies of the prior art by proposing an aurora image classification method based on an optimized convolutional auto-encoding network, so as to complete network training quickly and improve classification accuracy.
The technical scheme achieving the above object is as follows: perform saliency analysis on the aurora images and extract, based on the aurora saliency map, the training samples used to train an auto-encoding network AE; then extract the convolutional auto-encoding features of the aurora images with the trained auto-encoding network, and reduce these features by mean pooling; finally, classify the aurora images with a softmax classifier. The implementation steps are as follows:
(1) Input the aurora images and, according to the aurora image saliency map, extract 100000 training pixel blocks in total, forming the training pixel-block set P_{8×8×100000};
(2) Apply whitening pretreatment to the training pixel-block set P_{8×8×100000} to obtain the whitened training sample set x_{PCAwhite};
(3) Use the whitened training sample set x_{PCAwhite} to train the auto-encoding network AE:
3a) Express the training sample set as x_{PCAwhite} = {xp^{(1)}, xp^{(2)}, xp^{(3)}, ..., xp^{(i)}, ..., xp^{(m)}}, where xp^{(i)} ∈ R^{64} denotes the i-th training sample, i = 1, 2, ..., m, and m is the number of training samples. From the training samples xp^{(i)}, compute the average activation of the j-th hidden-layer neuron of the auto-encoding network AE:

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m}\left[a^{(j)}_{W,b}\left(xp^{(i)}\right)\right],$$

where j = 1, 2, ..., n, with n the number of hidden-layer nodes; a^{(j)}_{W,b}(xp^{(i)}) denotes the activation of the j-th hidden-layer neuron of AE when the input is xp^{(i)}; and (W, b) = (W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}) denotes the parameters of AE, where W^{(1)} is the weight from the input-layer nodes to the hidden-layer nodes, W^{(2)} the weight from the hidden-layer nodes to the output-layer nodes, b^{(1)} the bias from the input layer to the hidden-layer nodes, and b^{(2)} the bias from the hidden layer to the output-layer nodes;
3b) Following the back-propagation (BP) training principle, construct a sparse cost function J_{sparse}(W, b) from the parameters (W, b) and the hidden-layer average activations ρ̂_j of AE:

$$J_{sparse}(W,b)=\left[\frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|h_{W,b}\left(xp^{(i)}\right)-xp^{(i)}\right\|^{2}\right]+\frac{\lambda_{1}}{2}\sum\left(W^{(1)}\right)^{2}+\frac{\lambda_{2}}{2}\sum\left(W^{(2)}\right)^{2}+\beta\sum_{j=1}^{n}KL\left(\rho\,\|\,\hat{\rho}_{j}\right),$$

where h_{W,b}(·) denotes the nonlinear AE mapping; KL(ρ‖ρ̂_j) denotes the relative entropy between two Bernoulli random variables with means ρ and ρ̂_j respectively; λ_1 and λ_2 denote the weight-decay parameters of the hidden layer and output layer respectively; β denotes the weight of the sparsity penalty term; and ρ denotes the sparsity coefficient, a constant smaller than 0.1;
3c) Minimize the cost function J_{sparse}(W, b) to obtain the optimized parameters (W_{opt}, b_{opt}) of AE:

$$(W_{opt},b_{opt})=\arg\min_{W,b}J_{sparse}(W,b)=\left(W^{(1)}_{opt(jq)},\,b^{(1)}_{opt(j)},\,W^{(2)}_{opt(kj)},\,b^{(2)}_{opt(k)}\right),$$

where W^{(1)}_{opt(jq)} denotes the optimized weight from the q-th input-layer node to the j-th hidden-layer node; W^{(2)}_{opt(kj)} denotes the optimized weight from the j-th hidden-layer node to the k-th output-layer node; b^{(1)}_{opt(j)} denotes the optimized bias from the input layer to the j-th hidden-layer node; b^{(2)}_{opt(k)} denotes the optimized bias from the hidden layer to the k-th output-layer node; q, k = 1, 2, ..., 64, where 64 is the number of input-layer nodes and the number of output-layer nodes equals the number of input-layer nodes; and j = 1, 2, ..., n, with n the number of hidden-layer nodes;
(4) Use the optimized input-to-hidden weights W^{(1)}_{opt(jq)} to compute the convolutional auto-encoding features Fr of the aurora image I;
(5) Apply a mean-pooling operation to the convolutional auto-encoding features Fr of the aurora image I: divide Fr evenly into 11×11 blocks, merge each block into one mean value, and recombine these mean values to obtain the pooled feature F ∈ R^{11×11×n};
(6) Input the pooled feature F of an aurora image into the softmax classifier to obtain a class label, which is the class of that aurora image.
Compared with the prior art, the invention has the following advantages:
First, the invention uses the image saliency map to select training samples, effectively eliminating invalid training samples, which improves network training efficiency and at the same time improves the model's classification accuracy on aurora images;
Second, the invention chooses the auto-encoding network AE to pre-train the convolution filters used to build the convolutional network, effectively overcoming the low classification accuracy caused by the shortage of aurora training samples.
Accompanying drawing explanation
Fig. 1 is the implementation flowchart of the invention;
Fig. 2 shows an aurora image saliency map, the binarized saliency map, and the extracted training fragments;
Fig. 3 shows part of the convolution filters obtained by training the auto-encoding network AE;
Fig. 4 compares classification accuracy and classification time for different numbers of hidden-layer nodes of the auto-encoding network AE;
Fig. 5 illustrates the effect of the hidden-layer sparsity of the auto-encoding network AE on classification accuracy.
Embodiment
The implementation steps and technical effects of the invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the implementation steps of the invention are as follows:
Step 1: input the aurora images and extract the training pixel-block set P_{8×8×100000}.
1.1) Input an aurora image as shown in Fig. 2(a). For each pixel I(x, y) of this image, obtain its brightness L(x, y), gradient feature H(x, y) and edge-binarization feature B(x, y), and fuse these three features to obtain the saliency value S(x, y) of the aurora image pixel I(x, y):
S(x, y) = L(x, y) + H(x, y) + B(x, y);
The saliency values S(x, y) of all pixels of the aurora image form the aurora image saliency map S shown in Fig. 2(b);
1.2) Binarize the saliency map S to obtain the binarized saliency map S_1 shown in Fig. 2(c);
1.3) Randomly extract an 8×8 training pixel block ps_{8×8} from the binarized saliency map S_1 and judge its values: if the proportion of 1-valued pixels in ps_{8×8} is greater than 0.8, extract the pixel block p_{8×8} of the original image I at this position; if the proportion is less than or equal to 0.8, discard it;
1.4) Following the method of 1.3), extract 100000 training pixel blocks in total from the aurora images, forming the training pixel-block set P_{8×8×100000}.
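Steps 1.1) to 1.4) above can be sketched in numpy as follows. The patent does not define the brightness, gradient and edge-binarization features concretely, so the three saliency terms below are illustrative stand-ins; only the fusion S = L + H + B, the 8×8 patch size and the 0.8 acceptance threshold follow the text.

```python
import numpy as np

def extract_training_patches(image, n_patches=1000, patch=8, thresh=0.8, seed=0):
    """Saliency-guided patch extraction (Step 1). L, H and B are
    simple stand-ins for the features named in the patent."""
    rng = np.random.default_rng(seed)
    img = image.astype(float)
    L = img / (img.max() + 1e-8)            # brightness feature (stand-in)
    gy, gx = np.gradient(img)
    H = np.hypot(gx, gy)
    H = H / (H.max() + 1e-8)                # gradient feature (stand-in)
    B = (H > H.mean()).astype(float)        # edge binarization (stand-in)
    S = L + H + B                           # fused saliency map
    S1 = (S > S.mean()).astype(float)       # binarized saliency map S_1

    h, w = img.shape
    patches = []
    for _ in range(200 * n_patches):        # cap the number of attempts
        if len(patches) == n_patches:
            break
        y = int(rng.integers(0, h - patch + 1))
        x = int(rng.integers(0, w - patch + 1))
        # keep the patch only if more than 80% of its saliency pixels are 1
        if S1[y:y + patch, x:x + patch].mean() > thresh:
            patches.append(img[y:y + patch, x:x + patch])
    return np.stack(patches)                # shape (n_patches, 8, 8)
```

The rejection rule discards windows dominated by the dark, information-free sky regions, which is the redundancy problem the method is designed to avoid.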
Step 2: apply whitening pretreatment to the training pixel-block set P_{8×8×100000} to obtain the whitened training sample set x_{Pw}.
Whitening pretreatment techniques include PCA whitening, ZCA whitening, spectral whitening, etc. This example adopts ZCA whitening, whose concrete steps are as follows:
2.1) Reshape the training pixel-block set P_{8×8×100000} into the matrix x ∈ R^{64×100000};
2.2) Compute the covariance matrix of x:

$$\varphi = \frac{1}{m}\sum_{i=1}^{m}\left(x^{(i)}\right)\left(x^{(i)}\right)^{T},$$

where m = 100000 is the number of training samples and x^{(i)} is the i-th column of the matrix x;
2.3) Perform a singular value decomposition of the covariance matrix, φ = UΣV^T, to obtain the left basis matrix U and the right basis matrix V, and express x in the left basis U as x_{rot} = U^T x;
2.4) From 2.2) and 2.3), obtain the whitened training sample set x_{Pw}:

$$x_{Pw} = U\,\frac{x_{rot}}{\sqrt{\Sigma + \epsilon}},$$

where the division rescales each row of x_{rot} by the square root of the corresponding singular value of φ, and ε is a small nonzero number whose value is 10^{-5}.
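A minimal numpy sketch of the ZCA whitening of Step 2, assuming the standard formulation in which the SVD is taken of the covariance matrix φ and each sample is centered first; ε = 10^-5 matches the regularizer given above.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening of X (features x samples), as in Step 2.
    For 8x8 pixel blocks, X has 64 rows."""
    Xc = X - X.mean(axis=1, keepdims=True)      # center each feature
    phi = Xc @ Xc.T / X.shape[1]                # covariance matrix, 64x64
    U, s, _ = np.linalg.svd(phi)                # left basis U, singular values s
    X_rot = U.T @ Xc                            # rotate into the PCA basis
    X_pca = X_rot / np.sqrt(s[:, None] + eps)   # rescale: PCA whitening
    return U @ X_pca                            # rotate back: ZCA whitening
```

After whitening, the empirical covariance of the output is (up to the ε regularization) the identity, which is the property the pretraining of Step 3 relies on.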
Step 3: use the training sample set x_{Pw} to train the autoencoder.
3.1) Express the training sample set as x_{Pw} = {xp^{(1)}, xp^{(2)}, xp^{(3)}, ..., xp^{(i)}, ..., xp^{(m)}}, where xp^{(i)} ∈ R^{64} denotes the i-th training sample and m is the number of training samples. From the training samples xp^{(i)}, compute the average activation of the j-th hidden-layer neuron of the auto-encoding network AE:

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m}\left[a^{(j)}_{W,b}\left(xp^{(i)}\right)\right],$$

where j = 1, 2, ..., n, with n the number of hidden-layer nodes; a^{(j)}_{W,b}(xp^{(i)}) denotes the activation of the j-th hidden-layer neuron of AE when the input is xp^{(i)}; and (W, b) = (W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}) denotes the parameters of AE, where W^{(1)} is the weight from the input-layer nodes to the hidden-layer nodes, W^{(2)} the weight from the hidden-layer nodes to the output-layer nodes, b^{(1)} the bias from the input layer to the hidden-layer nodes, and b^{(2)} the bias from the hidden layer to the output-layer nodes;
3.2) Following the back-propagation (BP) training principle, construct a sparse cost function J_{sparse}(W, b) from the parameters (W, b) and the hidden-layer average activations ρ̂_j of AE:

$$J_{sparse}(W,b)=\left[\frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|h_{W,b}\left(xp^{(i)}\right)-xp^{(i)}\right\|^{2}\right]+\frac{\lambda_{1}}{2}\sum\left(W^{(1)}\right)^{2}+\frac{\lambda_{2}}{2}\sum\left(W^{(2)}\right)^{2}+\beta\sum_{j=1}^{n}KL\left(\rho\,\|\,\hat{\rho}_{j}\right),$$

where h_{W,b}(·) denotes the nonlinear AE mapping; KL(ρ‖ρ̂_j) denotes the relative entropy between two Bernoulli random variables with means ρ and ρ̂_j respectively; λ_1 and λ_2 denote the weight-decay parameters of the hidden layer and output layer respectively; β denotes the weight of the sparsity penalty term; and ρ denotes the sparsity coefficient, a constant smaller than 0.1;
3.3) Minimize the cost function J_{sparse}(W, b) to obtain the optimized network parameters (W_{opt}, b_{opt}) of AE:

$$(W_{opt},b_{opt})=\arg\min_{W,b}J_{sparse}(W,b)=\left(W^{(1)}_{opt(jq)},\,b^{(1)}_{opt(j)},\,W^{(2)}_{opt(kj)},\,b^{(2)}_{opt(k)}\right),$$

where W^{(1)}_{opt(jq)} denotes the optimized weight from the q-th input-layer node to the j-th hidden-layer node; W^{(2)}_{opt(kj)} denotes the optimized weight from the j-th hidden-layer node to the k-th output-layer node; b^{(1)}_{opt(j)} denotes the optimized bias from the input layer to the j-th hidden-layer node; b^{(2)}_{opt(k)} denotes the optimized bias from the hidden layer to the k-th output-layer node; q, k = 1, 2, ..., 64, where 64 is the number of input-layer nodes and the number of output-layer nodes equals the number of input-layer nodes; and j = 1, 2, ..., n, with n the number of hidden-layer nodes.
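The sparse cost J_sparse of step 3.2) can be written directly in code. The values of λ1, λ2 and β below are illustrative assumptions (the patent does not report them); ρ = 0.03 is the sparsity level found best in Experiment 1. A sigmoid output layer is used here, following the classic sparse-autoencoder formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_cost(W1, b1, W2, b2, X, lam1=1e-4, lam2=1e-4, beta=3.0, rho=0.03):
    """Sparse autoencoder cost J_sparse(W, b) of step 3.2):
    reconstruction error + weight decay + KL sparsity penalty."""
    m = X.shape[1]
    A = sigmoid(W1 @ X + b1)              # hidden activations a_{W,b}(xp)
    Xhat = sigmoid(W2 @ A + b2)           # reconstruction h_{W,b}(xp)
    rho_hat = A.mean(axis=1)              # average activation of each hidden node
    recon = 0.5 * np.sum((Xhat - X) ** 2) / m
    decay = 0.5 * lam1 * np.sum(W1 ** 2) + 0.5 * lam2 * np.sum(W2 ** 2)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + decay + beta * kl
```

Minimizing this cost (e.g. with L-BFGS or gradient descent via back-propagation) yields the optimized parameters (W_opt, b_opt) of step 3.3).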
Step 4: compute the convolutional auto-encoding features Fr of the aurora image I.
4.1) From the optimized input-to-hidden weights W^{(1)}_{opt(jq)}, compute the training pixel-block feature pf_j corresponding to the j-th hidden-layer node:

$$pf_{j} = \frac{W^{(1)}_{opt(jq)}}{\sqrt{\sum_{q=1}^{64}\left(W^{(1)}_{opt(jq)}\right)^{2}}};$$

4.2) Reshape the training pixel-block feature pf_j into the training pixel-block feature matrix f_j ∈ R^{8×8};
4.3) Convolve the aurora image I with the training pixel-block feature matrix f_j to obtain the j-th convolutional auto-encoding feature Fr_j of the aurora image I:

$$Fr_{j} = I * f_{j},$$

where I ∈ R^{128×128} and Fr_j ∈ R^{121×121};
4.4) Concatenate the n convolutional auto-encoding features Fr_j to obtain the convolutional auto-encoding features of the aurora image I: Fr ∈ R^{121×121×n}.
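Step 4 amounts to L2-normalizing each learned input-to-hidden weight vector, reshaping it into an 8×8 filter, and sliding it over the 128×128 image in "valid" mode (128 - 8 + 1 = 121). A sketch, implemented as cross-correlation (flip the filters if a strict convolution is required):

```python
import numpy as np

def conv_autoencode_features(image, W1_opt):
    """Step 4: convolutional auto-encoding features Fr (121 x 121 x n).
    W1_opt has shape (n, 64): one 8x8 filter per hidden node."""
    n = W1_opt.shape[0]
    norms = np.sqrt((W1_opt ** 2).sum(axis=1, keepdims=True))  # per-node L2 norm
    filters = (W1_opt / norms).reshape(n, 8, 8)                # pf_j reshaped to f_j
    # all 8x8 windows of the image: shape (121, 121, 8, 8) for a 128x128 input
    win = np.lib.stride_tricks.sliding_window_view(image, (8, 8))
    Fr = np.einsum('yxuv,juv->yxj', win, filters)              # Fr_j = I * f_j
    return Fr
```

Each of the n hidden nodes thus contributes one 121×121 feature map, concatenated along the third axis as in step 4.4).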
Step 5: apply a mean-pooling operation to the convolutional auto-encoding features Fr of the aurora image I: divide Fr evenly into 11×11 blocks, merge each block into one mean value, and recombine these mean values to obtain the pooled feature F ∈ R^{11×11×n}.
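The mean pooling of Step 5 partitions each 121×121 feature map into an 11×11 grid of 11×11 blocks and averages each block:

```python
import numpy as np

def mean_pool(Fr, grid=11):
    """Step 5: mean-pool each 121x121 feature map down to 11x11
    (121 = 11 blocks of 11 pixels per side)."""
    h, w, n = Fr.shape
    bh, bw = h // grid, w // grid
    blocks = Fr[:grid * bh, :grid * bw].reshape(grid, bh, grid, bw, n)
    return blocks.mean(axis=(1, 3))          # pooled feature F, shape 11x11xn
```
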
Step 6: input the pooled feature F of the aurora image I into the softmax classifier to obtain a class label, which is the class of this aurora image I.
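Step 6 applies an ordinary softmax classifier to the flattened pooled feature. The weights W and bias b below are assumed to have been trained separately (e.g. by minimizing cross-entropy on labeled pooled features); this sketch only shows the prediction step.

```python
import numpy as np

def softmax_predict(F, W, b):
    """Step 6: softmax classification of a pooled feature F (11x11xn).
    W has shape (num_classes, 11*11*n); returns (label, probabilities)."""
    z = W @ F.reshape(-1) + b
    z = z - z.max()                  # subtract the max for numerical stability
    p = np.exp(z)
    p = p / p.sum()
    return int(np.argmax(p)), p
```

With the four aurora classes used in the experiments (multi-arc, radial corona, hot-spot corona, drapery corona), num_classes would be 4.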
The effect of the invention is further described by the following simulation experiments.
Experiment 1: SCAE model parameter-setting experiment.
Experiment conditions: the 3200 aurora images used for testing come from the Chinese Arctic Yellow River Station; this database contains 800 images each of multi-arc, radial corona, hot-spot corona and drapery corona aurora.
Experiment content 1: set different numbers n of AE hidden-layer nodes and classify the aurora images with the invention; the results are shown in Fig. 3, where Fig. 3(a) shows classification accuracy and Fig. 3(b) shows training time;
Experiment content 2: set different AE hidden-layer sparsity levels ρ and classify the aurora images with the invention; the classification accuracy results are shown in Fig. 4.
As seen from Fig. 3, the classification accuracy on aurora images is highest when the number of hidden-layer nodes is n = 400; the larger n is, the longer the training time.
As seen from Fig. 4, the classification accuracy on aurora images is highest when the hidden-layer sparsity of AE is 0.03.
Experiment 2: comparison of aurora image classification accuracy across different models.
Experiment conditions: this experiment uses the 3200 aurora images of Experiment 1.
Experiment content: classify the aurora images with the existing Le-net5 method, the CAE method, and the proposed S-CAE method respectively; the classification accuracy results are shown in Fig. 5.
As seen from Fig. 5, the proposed S-CAE method effectively improves the classification accuracy of aurora images.

Claims (4)

1. An aurora image classification method based on an optimized convolutional auto-encoding network, comprising the steps of:
(1) inputting aurora images, computing the aurora image saliency map and, based on it, extracting training pixel blocks ps_{8×8}, forming the training pixel-block set P_{8×8×100000};
(2) applying whitening pretreatment to the training pixel-block set P_{8×8×100000} to obtain the whitened training sample set x_{Pw};
(3) using the whitened training sample set x_{Pw} to train the auto-encoding network AE:
3a) expressing the training sample set as v = {xp^{(1)}, xp^{(2)}, xp^{(3)}, ..., xp^{(i)}, ..., xp^{(m)}}, where xp^{(i)} ∈ R^{64} denotes the i-th training sample, i = 1, 2, ..., m, and m is the number of training samples; and computing, from the training samples xp^{(i)}, the average activation of the j-th hidden-layer neuron of the auto-encoding network AE:

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m}\left[a^{(j)}_{W,b}\left(xp^{(i)}\right)\right],$$

where j = 1, 2, ..., n, with n the number of hidden-layer nodes; a^{(j)}_{W,b}(xp^{(i)}) denotes the activation of the j-th hidden-layer neuron of AE when the input is xp^{(i)}; and (W, b) = (W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}) denotes the parameters of AE, where W^{(1)} is the weight from the input-layer nodes to the hidden-layer nodes, W^{(2)} the weight from the hidden-layer nodes to the output-layer nodes, b^{(1)} the bias from the input layer to the hidden-layer nodes, and b^{(2)} the bias from the hidden layer to the output-layer nodes;
3b) following the back-propagation (BP) training principle, constructing a sparse cost function J_{sparse}(W, b) from the parameters (W, b) and the hidden-layer average activations ρ̂_j of AE:

$$J_{sparse}(W,b)=\left[\frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|h_{W,b}\left(xp^{(i)}\right)-xp^{(i)}\right\|^{2}\right]+\frac{\lambda_{1}}{2}\sum\left(W^{(1)}\right)^{2}+\frac{\lambda_{2}}{2}\sum\left(W^{(2)}\right)^{2}+\beta\sum_{j=1}^{n}KL\left(\rho\,\|\,\hat{\rho}_{j}\right),$$

where h_{W,b}(·) denotes the nonlinear AE mapping; KL(ρ‖ρ̂_j) denotes the relative entropy between two Bernoulli random variables with means ρ and ρ̂_j respectively; λ_1 and λ_2 denote the weight-decay parameters of the hidden layer and output layer respectively; β denotes the weight of the sparsity penalty term; and ρ denotes the sparsity coefficient, a constant smaller than 0.1;
3c) minimizing the cost function J_{sparse}(W, b) to obtain the optimized parameters (W_{opt}, b_{opt}) of AE:

$$(W_{opt},b_{opt})=\arg\min_{W,b}J_{sparse}(W,b)=\left(W^{(1)}_{opt(jq)},\,b^{(1)}_{opt(j)},\,W^{(2)}_{opt(kj)},\,b^{(2)}_{opt(k)}\right),$$

where W^{(1)}_{opt(jq)} denotes the optimized weight from the q-th input-layer node to the j-th hidden-layer node; W^{(2)}_{opt(kj)} denotes the optimized weight from the j-th hidden-layer node to the k-th output-layer node; b^{(1)}_{opt(j)} denotes the optimized bias from the input layer to the j-th hidden-layer node; b^{(2)}_{opt(k)} denotes the optimized bias from the hidden layer to the k-th output-layer node; q, k = 1, 2, ..., 64, where 64 is the number of input-layer nodes and the number of output-layer nodes equals the number of input-layer nodes; and j = 1, 2, ..., n, with n the number of hidden-layer nodes;
(4) using the optimized input-to-hidden weights W^{(1)}_{opt(jq)} to compute the convolutional auto-encoding features Fr of the aurora image I;
(5) applying a mean-pooling operation to the convolutional auto-encoding features Fr of the aurora image I: dividing Fr evenly into 11×11 blocks, merging each block into one mean value, and recombining these mean values to obtain the pooled feature F ∈ R^{11×11×n};
(6) inputting the pooled feature F of an aurora image into the softmax classifier to obtain a class label, which is the class of that aurora image.
2. The method according to claim 1, wherein computing the aurora image saliency map and extracting training pixel blocks ps_{8×8} based on it in step (1) is carried out as follows:
1a) for each pixel I(x, y) of any input aurora image I, obtaining its brightness L(x, y), gradient feature H(x, y) and edge-binarization feature B(x, y), and fusing these three features to obtain the saliency value of the aurora image pixel I(x, y): S(x, y) = L(x, y) + H(x, y) + B(x, y); the saliency values S(x, y) of all pixels of the aurora image then form the aurora image saliency map S;
1b) binarizing the saliency map S to obtain the binarized saliency map S_1;
1c) randomly extracting an 8×8 training pixel block ps_{8×8} from the binarized saliency map S_1 and judging its values: if the proportion of 1-valued pixels in ps_{8×8} is greater than 0.8, extracting the pixel block p_{8×8} of the original image I at this position; if the proportion is less than or equal to 0.8, discarding it.
3. The method according to claim 1, wherein the whitening pretreatment of the training pixel-block set P_{8×8×100000} in step (2) is carried out as follows:
2a) reshaping the training pixel-block set P_{8×8×100000} into the matrix x ∈ R^{64×100000};
2b) computing the covariance matrix of x:

$$\varphi = \frac{1}{m}\sum_{i=1}^{m}\left(x^{(i)}\right)\left(x^{(i)}\right)^{T},$$

where i = 1, 2, ..., m, m = 100000 is the number of training samples and x^{(i)} is the i-th column of the matrix x;
2c) performing a singular value decomposition of the covariance matrix, φ = UΣV^T, to obtain the left basis matrix U and the right basis matrix V, and expressing x in the left basis U as x_{rot} = U^T x;
2d) obtaining, from 2b) and 2c), the whitened training sample set x_{PCAwhite}:

$$x_{PCAwhite} = U\,\frac{x_{rot}}{\sqrt{\Sigma + \epsilon}},$$

where the division rescales each row of x_{rot} by the square root of the corresponding singular value of φ, and ε is a small nonzero number whose value is 10^{-5}.
4. The method according to claim 1, wherein step (4) computes the convolutional auto-encoding features Fr of the aurora image I as follows:
4a) from the optimized input-to-hidden weights W^{(1)}_{opt(jq)}, computing the training pixel-block feature pf_j corresponding to the j-th hidden-layer node:

$$pf_{j} = \frac{W^{(1)}_{opt(jq)}}{\sqrt{\sum_{q=1}^{64}\left(W^{(1)}_{opt(jq)}\right)^{2}}};$$

4b) reshaping the training pixel-block feature pf_j into the training pixel-block feature matrix f_j ∈ R^{8×8};
4c) convolving the aurora image I with the training pixel-block feature matrix f_j to obtain the j-th convolutional auto-encoding feature Fr_j of the aurora image I:

$$Fr_{j} = I * f_{j},$$

where I ∈ R^{128×128} and Fr_j ∈ R^{121×121};
4d) concatenating the n convolutional auto-encoding features Fr_j to obtain the convolutional auto-encoding features of the aurora image I: Fr ∈ R^{121×121×n}.
CN201510976336.6A 2015-12-23 2015-12-23 Aurora image classification method based on optimization convolution autocoding network Active CN105550712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510976336.6A CN105550712B (en) 2015-12-23 2015-12-23 Aurora image classification method based on optimization convolution autocoding network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510976336.6A CN105550712B (en) 2015-12-23 2015-12-23 Aurora image classification method based on optimization convolution autocoding network

Publications (2)

Publication Number Publication Date
CN105550712A true CN105550712A (en) 2016-05-04
CN105550712B CN105550712B (en) 2019-01-08

Family

ID=55829895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510976336.6A Active CN105550712B (en) 2015-12-23 2015-12-23 Aurora image classification method based on optimization convolution autocoding network

Country Status (1)

Country Link
CN (1) CN105550712B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632166A (en) * 2013-12-04 2014-03-12 Xidian University Aurora image classification method based on latent topics combined with saliency information
CN104156736A (en) * 2014-09-05 2014-11-19 Xidian University Polarimetric SAR image classification method based on SAE and IDL
CN104462494A (en) * 2014-12-22 2015-03-25 Wuhan University Remote sensing image retrieval method and system based on unsupervised feature learning

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228130A (en) * 2016-07-19 2016-12-14 Wuhan University Remote sensing image cloud detection method based on fuzzy autoencoder network
CN106228130B (en) * 2016-07-19 2019-09-10 Wuhan University Remote sensing image cloud detection method based on fuzzy autoencoder network
CN107045722A (en) * 2017-03-27 2017-08-15 Xidian University Video signal processing method fusing static information and dynamic information
CN107832718A (en) * 2017-11-13 2018-03-23 Chongqing Technology and Business University Finger vein anti-counterfeiting authentication method and system based on autoencoder
CN107832718B (en) * 2017-11-13 2020-06-05 Chongqing Technology and Business University Finger vein anti-counterfeiting authentication method and system based on autoencoder
CN113111688A (en) * 2020-01-13 2021-07-13 National Space Science Center, Chinese Academy of Sciences All-sky throat region aurora identification method and system
CN113111688B (en) * 2020-01-13 2024-03-08 National Space Science Center, Chinese Academy of Sciences All-sky throat region aurora identification method and system
CN113128542A (en) * 2020-01-15 2021-07-16 National Space Science Center, Chinese Academy of Sciences All-sky aurora image classification method and system
CN113128542B (en) * 2020-01-15 2024-04-30 National Space Science Center, Chinese Academy of Sciences All-sky aurora image classification method and system
CN113642676A (en) * 2021-10-12 2021-11-12 North China Electric Power University Regional power grid load prediction method and device based on heterogeneous meteorological data fusion

Also Published As

Publication number Publication date
CN105550712B (en) 2019-01-08

Similar Documents

Publication Publication Date Title
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN109993220B (en) Multi-source remote sensing image classification method based on double-path attention fusion neural network
CN113159051B (en) Remote sensing image lightweight semantic segmentation method based on edge decoupling
CN105550712A (en) Optimized convolution automatic encoding network-based auroral image sorting method
CN105975931B Convolutional neural network face recognition method based on multi-scale pooling
Thai et al. Image classification using support vector machine and artificial neural network
CN111612807A (en) Small target image segmentation method based on scale and edge information
CN107766794A Image semantic segmentation method with learnable feature fusion coefficients
CN105825511A (en) Image background definition detection method based on deep learning
CN110197205A Image recognition method based on multi-feature-source residual network
CN101826161B Target identification method based on local neighbor sparse representation
CN114092832A (en) High-resolution remote sensing image classification method based on parallel hybrid convolutional network
CN106910202B (en) Image segmentation method and system for ground object of remote sensing image
CN110210027B (en) Fine-grained emotion analysis method, device, equipment and medium based on ensemble learning
CN112200090A (en) Hyperspectral image classification method based on cross-grouping space-spectral feature enhancement network
CN104239902A (en) Hyper-spectral image classification method based on non-local similarity and sparse coding
CN112766283B (en) Two-phase flow pattern identification method based on multi-scale convolution network
CN107767416A Pedestrian orientation recognition method in low-resolution images
CN112381144B Heterogeneous deep network method for spatial-spectral feature learning in non-Euclidean and Euclidean domains
CN105631477A Traffic sign recognition method based on extreme learning machine and adaptive boosting
CN104182771A Graphical analysis method for time-series data with packet loss based on auto-encoding technology
CN106157254A Sparse representation remote sensing image denoising method based on non-local self-similarity
CN109558880B (en) Contour detection method based on visual integral and local feature fusion
CN103226825A (en) Low-rank sparse model-based remote sensing image change detection method
CN104933410A Joint spectral-spatial classification method for hyperspectral images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant