CN104992186A - Aurora video classification method based on dynamic texture model representation - Google Patents

Aurora video classification method based on dynamic texture model representation

Info

Publication number
CN104992186A
CN104992186A · CN201510412383.8A
Authority
CN
China
Prior art keywords
aurora
video
test
matrix
dynamic texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510412383.8A
Other languages
Chinese (zh)
Other versions
CN104992186B (en)
Inventor
Han Bing (韩冰)
Song Yating (宋亚婷)
Gao Xinbo (高新波)
Li Jie (李洁)
Jia Zhonghua (贾中华)
Wang Ping (王平)
Wang Ying (王颖)
Wang Xiumei (王秀美)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201510412383.8A priority Critical patent/CN104992186B/en
Publication of CN104992186A publication Critical patent/CN104992186A/en
Application granted granted Critical
Publication of CN104992186B publication Critical patent/CN104992186B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an aurora video classification method based on a dynamic texture model. The scheme exploits the repetitive correlation between aurora video frames and starts from their dynamic variation characteristics to build a universal model for the four morphological classes of aurora video, extract features, and classify the videos. It is realized through the following steps: 1) obtain training aurora videos and a test aurora video; 2) extract the dynamic texture features of the test aurora video; 3) extract the state-transition matrix of the test aurora video; 4) compute the Martin distance between the test aurora video and each training-set sample; and 5) classify the test aurora video by the nearest-distance rule according to the Martin distances between aurora videos. The method achieves automatic computer classification of the four classes of aurora video with high classification accuracy, and can be applied to aurora video feature extraction and computer image recognition.

Description

Aurora video classification method based on dynamic texture model representation
Technical field
The invention belongs to the technical field of image processing and relates to a computer classification method for the four morphological classes of aurora video. It can be used for aurora video feature extraction and computer image recognition.
Background technology
Aurorae are the brilliant luminous displays produced when solar-wind particles, injected into the Earth's magnetosphere through the dayside cusp/cleft, precipitate along magnetic field lines and interact with the upper atmosphere. Aurorae are an observation window on the physical processes of polar-region space weather: they directly reflect the coupling between the solar wind and the magnetosphere and carry a large amount of electromagnetic-activity information of solar-terrestrial physics, so their study is of profound significance.
The all-sky imaging system (All-sky Camera) of the Chinese Arctic Yellow River Station continuously observes three typical auroral emission bands at 427.8 nm, 557.7 nm and 630.0 nm simultaneously, producing tens of thousands of aurora images with a huge data volume. Wang Q. et al., in "Spatial texture based automatic classification of dayside aurora in all-sky images. Journal of Atmospheric and Solar-Terrestrial Physics, 2010, 72(5): 498-508", divided aurorae by morphology into four classes — arc, radial, drapery and hot-spot — and derived the statistical distribution of the four aurora types. The articles "Pedersen T R, Gerken E A. Creation of visible artificial optical emission in the aurora by high-power radio waves. Nature, 2005, 433(7025): 498-500" by Pedersen et al., "Hu Z J, Yang H, Huang D, et al. Synoptic distribution of dayside aurora: Multiple-wavelength all-sky observation at Yellow River Station in Ny-Alesund, Svalbard. J. Atmos. Sol.-Terr. Phys., 2009, 71(8-9): 794-804" by Hu Zejun et al., and "Lorentzen D A, Moen J, Oksavik K, et al. In situ measurement of a newly created polar cap patch. J. Geophys. Res., 2010, 115(A12)" provide a large body of research material demonstrating that aurorae of different morphologies correspond to different magnetospheric boundary-layer dynamic processes. Accurately and efficiently classifying aurora videos is therefore the key to revealing the dynamic processes of their magnetospheric source regions and an important step in studying their mechanism; however, the complexity of auroral morphology and its dynamic variation undoubtedly poses great difficulty for polar researchers.
The development of computer image recognition and analysis technology makes classification research on massive aurora data possible. In 2004, Syrjäsuo et al. introduced computer-vision methods into aurora image classification (see "Syrjasuo M, Partamies N. Numeric image features for detection of aurora. IEEE Geoscience and Remote Sensing Letters, 2012, 9(2): 176-179"); their method extracts Fourier descriptors as features from segmented auroral regions and classifies aurora images automatically with a nearest-neighbor rule, but because it depends on the segmentation algorithm, it works well only for arc aurorae, whose shape features are obvious. In 2007, Wang et al. ("Wang Qian, Liang Jimin, Hu ZeJun, Hu HaiHong, Zhao Heng, Hu HongQiao, Gao Xinbo, Yang Huigen. Spatial texture based automatic classification of dayside aurora in all-sky images. Journal of Atmospheric and Solar-Terrestrial Physics, 2010, 72(5): 498-508") used principal component analysis (PCA) to extract gray-level features of aurora images and proposed an appearance-based aurora classification method, achieving some progress in coronal aurora classification. In 2008, Gao et al. ("L. Gao, X. B. Gao, and J. M. Liang. Dayside corona aurora detection based on sample selection and AdaBoost algorithm. J. Image Graph, 2010, 15(1): 116-121") proposed an aurora image classification method based on the Gabor transform, using local Gabor filters to extract image features, which reduces feature redundancy while preserving computational accuracy and achieves a good classification effect. In 2009, Fu et al. ("Fu Ru, Jie Li and X. B. Gao. Automatic aurora images classification algorithm based on separated texture. Proc. Int. Conf. Robotics and Biomimetics, 2009: 1331-1335") combined morphological component analysis (MCA) with aurora image processing, extracting features from the texture sub-images obtained after MCA separation for the classification of arc and corona aurora images, improving the accuracy of arc/corona classification. Follow-up studies include: Han et al. proposed an aurora classification based on BIFs features and C-means clustering ("Bing Han, Xiaojing Zhao, Dacheng Tao, et al. Dayside aurora classification via BIFs-based sparse representation using manifold learning. International Journal of Computer Mathematics. Published online: 12 Nov 2013"); Yang et al. proposed a multi-level wavelet-transform representation of aurora image features ("Yang Xi, Li Jie, Han Bing, Gao Xinbo. Wavelet hierarchical model for aurora images classification. Journal of Xidian University, 2013, 40(2): 18-24"), achieving a higher classification accuracy; and in 2013, Han et al. introduced the latent Dirichlet allocation model LDA combined with image saliency information ("Han B, Yang C, Gao X B. Aurora image classification based on LDA combining with saliency information. Ruan Jian Xue Bao/Journal of Software, 2013, 24(11): 2758-2766"), further improving the classification accuracy of aurora images.
However, most of the existing aurora image analyses above are based on single images and static features; work on automatic classification of aurora sequences is still scarce. Related progress mainly includes: in 2013, Yang proposed an aurora sequence classification scheme based on hidden Markov models in "Yang Qiuju. Auroral Events Detection and Analysis Based on ASI and UVI Images [D]. Xi'an: Xidian University, 2013", but the method is in essence still based on single-image features. In addition, Han et al. constructed the spatio-temporal poleward LBP operator to extract STP-LBP features of aurora sequences in "Han B, Liao Q, Gao X B. Spatial-temporal poleward volume local binary patterns for aurora sequences event detection. Ruan Jian Xue Bao/Journal of Software, 2014, 25(9): 2172-2179", for detecting poleward motion artifacts in aurora video; but that algorithm targets only the features of poleward-moving arc aurora video and is not universal. Because there is at present no method that models the four morphological classes of aurora video universally and extracts their dynamic video features, a computer cannot classify the four classes of aurora video automatically; it can only operate frame by frame, or analyze a single class of aurora video, so both classification accuracy and efficiency are low.
Summary of the invention
The object of the invention is to address the deficiencies of the prior art above by proposing an aurora video classification method based on a dynamic texture model, which exploits the repetitive correlation between aurora video frames, starts from the dynamic variation characteristics between frames, models the four morphological classes of aurora video universally, extracts dynamic texture features, and realizes automatic computer classification of the four classes of aurora video.
To achieve the above object, the technical scheme of the invention comprises the following steps:
(1) Take N videos at random from a class-labeled aurora video database to form the training set {y₁, y₂, …, y_k, …, y_N}, where y_k is the k-th training sample, k = 1, 2, …, N; form the test set from the remaining aurora videos and draw one sample from it as the test aurora video y_test.
(2) Extract the dynamic texture features of the test aurora video:
(2a) Express the test aurora video y_test as frames y(t), y(t) ∈ R^m, m = I₁ × I₂, where I₁ is the number of rows and I₂ the number of columns of the current video-frame pixel matrix, t = 1, …, τ, and τ is the total number of frames;
(2b) Subtract the mean ȳ of all frames from each observed aurora video frame y(t) to obtain the video matrix Y:
Y = [y(1) − ȳ, y(2) − ȳ, …, y(t) − ȳ, …, y(τ) − ȳ];
(2c) Perform an SVD decomposition of the video matrix Y, i.e. Y = U S Vᵀ, where U ∈ R^{m×n} is the left basis matrix, S ∈ R^{n×n} is the singular-value matrix, and V ∈ R^{τ×n} is the right basis matrix;
(2d) Represent the observation matrix C of the aurora video by the left basis matrix U, i.e. C = U, and obtain the dynamic texture feature matrix X = S Vᵀ, X ∈ R^{n×τ}; take each column of X as a dynamic texture feature frame x(t), i.e. X = [x(1), x(2), …, x(t), …, x(τ)].
(3) From the dynamic texture feature matrix X, compute the state-transition matrix A that maps the dynamic texture feature frame x(t) at time t to the frame x(t+1) at time t+1:
A = argmin_A ‖X_{2,…,τ} − A X_{1,…,τ−1}‖²_F = (X_{2,…,τ} X_{1,…,τ−1}ᵀ)(X_{1,…,τ−1} X_{1,…,τ−1}ᵀ)⁻¹,
where X_{1,…,τ−1} = [x(1), x(2), …, x(t), …, x(τ−1)], X_{2,…,τ} = [x(2), x(3), …, x(t), …, x(τ)], and ‖·‖_F denotes the Frobenius norm of a matrix.
(4) Compute the Martin distance from the test aurora video y_test to each training-set sample y_k:
(4a) By the same method used to compute the observation matrix C and state-transition matrix A of the test aurora video y_test, compute the observation matrix C_k and state-transition matrix A_k of each training sample y_k in the training set {y₁, y₂, …, y_k, …, y_N}, obtaining the observation-matrix set {C₁, C₂, …, C_k, …, C_N} and the state-transition-matrix set {A₁, A₂, …, A_k, …, A_N}, k = 1, 2, …, N;
(4b) From the observation and state-transition matrices, compute the Martin distance d²(y_test, y_k) from the test aurora video y_test to each training sample y_k, k = 1, 2, …, N.
(5) Classify the test aurora video y_test by the nearest-distance rule according to the Martin distances between aurora videos:
Sort the N Martin distances d²(y_test, y_k) obtained in step (4) in ascending order, take the aurora sequence y_min corresponding to the smallest Martin distance, and assign the test aurora video y_test to the same class as y_min, completing the automatic classification of the test aurora video y_test.
The invention has the following advantages:
The invention uses SVD decomposition to extract the dynamic texture features of aurora video, making full use of the repetitive correlation between aurora video frames and capturing the dynamic characteristics of the four classes of aurora video. It overcomes the shortcomings of the prior art, which starts from single-frame features and therefore suffers from insufficient model generalization capability and low computational efficiency; the method can model the four morphological classes of aurora video universally and realize automatic computer classification of the four classes of aurora video.
Brief description of the drawings
Fig. 1 is the flowchart of an embodiment of the invention;
Fig. 2 is the visualization of the observation matrix of an arc aurora video obtained with the invention;
Fig. 3 is the visualization of the observation matrix of a radial aurora video obtained with the invention;
Fig. 4 is the visualization of the observation matrix of a hot-spot aurora video obtained with the invention;
Fig. 5 is the visualization of the observation matrix of a drapery aurora video obtained with the invention.
Detailed description
The content and effects of the invention are further described below with reference to the drawings.
I. Technical principle
The invention exploits the repetitive correlation between aurora video frames and starts from the dynamic variation characteristics between frames to model the four morphological classes of aurora video universally, extract features, and classify aurora videos.
The modeling process is as follows. An m-dimensional aurora video frame y(t) is characterized by an n-dimensional dynamic texture feature frame x(t):
y(t) = C x(t) + ȳ + w(t),
where y(t) ∈ R^m, m = I₁ × I₂, with I₁ × I₂ the dimensions of the current video-frame pixel matrix, t = 1, …, τ, and τ the total number of frames; x(t) ∈ R^n with n ≪ m; C ∈ R^{m×n} is the observation matrix; w(t) ∈ R^m is Gaussian white noise with zero mean and covariance matrix zI_m; and ȳ is the temporal mean of the observed aurora sequence y(t). The dynamic texture feature sequence x(t) is modeled as a second-order stationary stochastic process:
x(t+1) = A x(t) + v(t),
where the parameter A ∈ R^{n×n} is the state-transition matrix and v(t) ~ N(0, Q) is Gaussian white noise with zero mean and covariance matrix Q.
Subtract the mean ȳ of all frames from each observed aurora video frame y(t) to obtain the video matrix Y:
Y = [y(1) − ȳ, y(2) − ȳ, …, y(t) − ȳ, …, y(τ) − ȳ];
Perform an SVD decomposition of the video matrix Y into the left basis matrix U ∈ R^{m×n}, the singular-value matrix S ∈ R^{n×n}, and the right basis matrix V ∈ R^{τ×n}, i.e. Y = U S Vᵀ.
Represent the observation matrix C of the aurora video by the left basis matrix U, i.e. C = U, and obtain the dynamic texture feature matrix X = S Vᵀ, X ∈ R^{n×τ}; take each column of X as a dynamic texture feature frame x(t), i.e. X = [x(1), x(2), …, x(t), …, x(τ)].
From the dynamic texture feature matrix X, compute the state-transition matrix A that maps x(t) to x(t+1):
A = argmin_A ‖X_{2,…,τ} − A X_{1,…,τ−1}‖²_F = (X_{2,…,τ} X_{1,…,τ−1}ᵀ)(X_{1,…,τ−1} X_{1,…,τ−1}ᵀ)⁻¹,
where X_{1,…,τ−1} = [x(1), x(2), …, x(t), …, x(τ−1)], X_{2,…,τ} = [x(2), x(3), …, x(t), …, x(τ)], and ‖·‖_F denotes the Frobenius norm.
For the test aurora video y_test and any training sample y_k, compute as above the observation matrices C and C_k and the state-transition matrices A and A_k, and from them build the extended observability matrices:
Φ_test = [Cᵀ, AᵀCᵀ, (Aᵀ)²Cᵀ, …, (Aᵀ)ⁿCᵀ, …]
Φ_k = [C_kᵀ, A_kᵀC_kᵀ, (A_kᵀ)²C_kᵀ, …, (A_kᵀ)ⁿC_kᵀ, …]
From the i-th principal angle θ_i between the extended observability matrices Φ_test and Φ_k, compute the Martin distance
d²(y_test, y_k) = −ln ∏_{i=1}^{n} cos²θ_i.
Classify the test aurora video y_test by the nearest-distance rule according to the Martin distances between aurora videos.
II. Implementation steps
Step 1: Obtain the training aurora videos and the test aurora video.
Take N videos at random from a class-labeled aurora video database to form the training set {y₁, y₂, …, y_k, …, y_N}, where y_k is the k-th training aurora video, k = 1, 2, …, N;
form the test set from the remaining aurora videos and draw one sample from it as the test aurora video y_test.
Step 2: Extract the dynamic texture features of the test aurora video y_test.
(2a) Express the test aurora video y_test as frames y(t), y(t) ∈ R^m, m = I₁ × I₂, where I₁ is the number of rows and I₂ the number of columns of the current video-frame pixel matrix, t = 1, …, τ, and τ is the total number of frames;
(2b) Subtract the mean ȳ of all frames from each observed aurora video frame y(t) to obtain the video matrix Y:
Y = [y(1) − ȳ, y(2) − ȳ, …, y(t) − ȳ, …, y(τ) − ȳ];
(2c) Perform an SVD decomposition of the video matrix Y, i.e. Y = U S Vᵀ, where U ∈ R^{m×n} is the left basis matrix, S ∈ R^{n×n} is the singular-value matrix, and V ∈ R^{τ×n} is the right basis matrix;
(2d) Represent the observation matrix C of the aurora video by the left basis matrix U, i.e. C = U, and obtain the dynamic texture feature matrix X = S Vᵀ, X ∈ R^{n×τ}; take each column of X as a dynamic texture feature frame x(t), i.e. X = [x(1), x(2), …, x(t), …, x(τ)].
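Steps (2a)–(2d) can be sketched in Python with NumPy (a minimal sketch, not the patent's MATLAB implementation; the function name, the frame layout, and the default state dimension are assumptions — n = 10 matches the 16384 × 10 observation matrix of Simulation 1):

```python
import numpy as np

def dynamic_texture_features(frames, n=10):
    """Steps (2a)-(2d): mean-subtract the frames and take a truncated SVD.

    frames : array of shape (tau, I1, I2), grayscale video frames.
    n      : state dimension, n << I1*I2.
    Returns the observation matrix C (m x n) and the dynamic texture
    feature matrix X (n x tau).
    """
    tau = frames.shape[0]
    Y = frames.reshape(tau, -1).T.astype(float)   # m x tau video matrix
    Y -= Y.mean(axis=1, keepdims=True)            # subtract the mean frame
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n]                                  # C = U (left basis)
    X = np.diag(s[:n]) @ Vt[:n, :]                # X = S V^T, n x tau
    return C, X
```

By construction C has orthonormal columns, so X = Cᵀ·Y, i.e. each feature frame x(t) is the projection of the centered video frame onto the n left singular vectors.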
Step 3: From the dynamic texture feature matrix X, compute the state-transition matrix A that maps the dynamic texture feature frame x(t) to x(t+1):
A = argmin_A ‖X_{2,…,τ} − A X_{1,…,τ−1}‖²_F = (X_{2,…,τ} X_{1,…,τ−1}ᵀ)(X_{1,…,τ−1} X_{1,…,τ−1}ᵀ)⁻¹,
where X_{1,…,τ−1} = [x(1), x(2), …, x(t), …, x(τ−1)], X_{2,…,τ} = [x(2), x(3), …, x(t), …, x(τ)], and ‖·‖_F denotes the Frobenius norm.
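The least-squares solution of step 3 is a one-liner; the sketch below assumes X is an n × τ NumPy array (the function name is illustrative):

```python
import numpy as np

def state_transition(X):
    """Step 3: least-squares estimate of A in x(t+1) ~ A x(t),
    A = X_{2..tau} X_{1..tau-1}^T (X_{1..tau-1} X_{1..tau-1}^T)^{-1}."""
    X1, X2 = X[:, :-1], X[:, 1:]                  # lagged column blocks
    return X2 @ X1.T @ np.linalg.inv(X1 @ X1.T)
```

If X₁ is poorly conditioned, `X2 @ np.linalg.pinv(X1)` is a numerically safer equivalent of the same least-squares problem.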
Step 4: Compute the Martin distance from the test aurora video y_test to each training-set sample y_k.
(4a) By the same method used to compute the observation matrix C and state-transition matrix A of the test aurora video y_test, compute the observation matrix C_k and state-transition matrix A_k of each training sample y_k in the training set {y₁, y₂, …, y_k, …, y_N}, obtaining the observation-matrix set {C₁, C₂, …, C_k, …, C_N} and the state-transition-matrix set {A₁, A₂, …, A_k, …, A_N}, k = 1, 2, …, N;
(4b) From the observation and state-transition matrices, compute the Martin distance d²(y_test, y_k) from y_test to y_k, k = 1, 2, …, N:
(4b1) From the observation matrices C and C_k and the state-transition matrices A and A_k, build the extended observability matrices:
Φ_test = [Cᵀ, AᵀCᵀ, (Aᵀ)²Cᵀ, …, (Aᵀ)ⁿCᵀ, …]
Φ_k = [C_kᵀ, A_kᵀC_kᵀ, (A_kᵀ)²C_kᵀ, …, (A_kᵀ)ⁿC_kᵀ, …]
(4b2) From the i-th principal angle θ_i between the extended observability matrices Φ_test and Φ_k, compute the Martin distance
d²(y_test, y_k) = −ln ∏_{i=1}^{n} cos²θ_i.
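A minimal sketch of steps (4b1)–(4b2): the infinite extended observability matrix is truncated at a finite number of block rows (the truncation order is an assumption of this sketch, not from the patent), and the principal angles are obtained from the SVD of the product of the orthonormal bases of the two subspaces:

```python
import numpy as np

def martin_distance(C1, A1, C2, A2, order=10):
    """Steps (4b1)-(4b2): Martin distance between two dynamic texture
    models (C, A), from the principal angles between their extended
    observability subspaces, d^2 = -ln prod_i cos^2(theta_i)."""
    def obs(C, A):
        blocks, M = [], C
        for _ in range(order):                    # stack C, CA, CA^2, ...
            blocks.append(M)
            M = M @ A
        return np.vstack(blocks)
    Q1, _ = np.linalg.qr(obs(C1, A1))             # orthonormal bases of the
    Q2, _ = np.linalg.qr(obs(C2, A2))             # two observability subspaces
    cos_theta = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    cos_theta = np.clip(cos_theta, 1e-12, 1.0)    # cosines of principal angles
    return -2.0 * np.sum(np.log(cos_theta))       # -ln prod cos^2(theta_i)
```

The truncation converges when A is stable (spectral radius below 1); a production implementation would check this or increase `order` until the distance stabilizes.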
Step 5: Classify the test aurora video y_test by the nearest-distance rule according to the Martin distances between aurora videos.
Sort the N Martin distances d²(y_test, y_k) obtained in step 4 in ascending order, take the aurora sequence y_min corresponding to the smallest Martin distance, and assign the test aurora video y_test to the same class as y_min, completing the classification of y_test.
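The nearest-distance rule of step 5 reduces to an argmin over the N model-to-model distances. A sketch with a pluggable distance function, so it is independent of how the Martin distance itself is computed (all names are illustrative, not from the patent):

```python
import numpy as np

def nearest_distance_label(test_model, train_models, train_labels, dist_fn):
    """Step 5: assign the test video the label of the training sample
    whose (C, A) model is closest under dist_fn."""
    d = np.array([dist_fn(test_model, m) for m in train_models])
    return train_labels[int(np.argmin(d))]
```

In the method of the invention, `dist_fn` would be the Martin distance between the (C, A) pairs and `train_labels` the four morphological classes (arc, radial, hot-spot, drapery).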
The effect of the invention is further illustrated by the following simulation experiments:
1. Simulation conditions and method:
Hardware platform: Intel Core i5, 2.93 GHz, 3.45 GB RAM;
Software platform: MATLAB R2012b under the Windows 7 operating system;
Experimental data: 115,557 G-band all-sky aurora images recorded at the Chinese Arctic Yellow River Station from December 2003 to 2004 were manually labeled; after removing data invalidated by weather and other factors, 93 radial, 102 arc, 73 hot-spot and 95 drapery aurora sequences were labeled, with sequence lengths between 15 and 35 frames, forming the typical aurora video database used for the four-class experiments.
2. Simulation content and results:
Simulation 1: one video of each of the four typical aurora classes (arc, radial, hot-spot and drapery) was selected at random from the typical aurora video database. Every video frame was down-sampled from 512 × 512 to 128 × 128. The observation matrix C was computed by the method of the invention, giving a matrix of size 16384 × 10; each column of C was reshaped into a 128 × 128 matrix and displayed as an image.
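The reshaping used for the visualization — mapping each 16384-element column of C back to a 128 × 128 image — can be sketched as follows (the simulation itself was run in MATLAB; this NumPy sketch and its names are illustrative, and NumPy's reshape is row-major where MATLAB's is column-major):

```python
import numpy as np

def columns_as_images(C, rows=128, cols=128):
    """Reshape each column of the observation matrix C (rows*cols x n)
    into a rows x cols image, as done for Figs. 2-5."""
    return [C[:, j].reshape(rows, cols) for j in range(C.shape[1])]
```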
The result for the arc aurora video is shown in Fig. 2, where Fig. 2(a) is an original video frame and Fig. 2(b) is the visualization of the observation matrix of the arc aurora video;
the result for the radial aurora video is shown in Fig. 3, where Fig. 3(a) is an original video frame and Fig. 3(b) is the visualization of the observation matrix of the radial aurora video;
the result for the hot-spot aurora video is shown in Fig. 4, where Fig. 4(a) is an original video frame and Fig. 4(b) is the visualization of the observation matrix of the hot-spot aurora video;
the result for the drapery aurora video is shown in Fig. 5, where Fig. 5(a) is an original video frame and Fig. 5(b) is the visualization of the observation matrix of the drapery aurora video.
Figs. 2-5 show that usually only the first few columns of the observation matrix C contain the texture information of the aurora image sequence, which demonstrates the repetitive correlation of the aurora image sequence along the time direction and that the dynamic texture model can effectively extract its spatial texture features.
Simulation 2: with the number of training videos N = 30, 50, 80, 100 and 120, classification experiments were run on the typical aurora video database with the invention; the resulting classification accuracies are listed in the table below.
Table 1. Aurora video classification accuracy
As Table 1 shows, when the number of training samples exceeds 100 the classification accuracy reaches 77.09%; the model's classification accuracy is high, and automatic computer classification of aurora video can be realized.

Claims (2)

1. An aurora video classification method based on dynamic texture model representation, comprising the steps of:
(1) taking N videos at random from a class-labeled aurora video database to form the training set {y₁, y₂, …, y_k, …, y_N}, where y_k is the k-th training sample, k = 1, 2, …, N; forming the test set from the remaining aurora videos and drawing one sample from it as the test aurora video y_test;
(2) extracting the dynamic texture features of the test aurora video:
(2a) expressing the test aurora video y_test as frames y(t), y(t) ∈ R^m, m = I₁ × I₂, where I₁ is the number of rows and I₂ the number of columns of the current video-frame pixel matrix, t = 1, …, τ, and τ is the total number of frames;
(2b) subtracting the mean ȳ of all frames from each observed aurora video frame y(t) to obtain the video matrix Y:
Y = [y(1) − ȳ, y(2) − ȳ, …, y(t) − ȳ, …, y(τ) − ȳ];
(2c) performing an SVD decomposition of the video matrix Y, i.e. Y = U S Vᵀ, where U ∈ R^{m×n} is the left basis matrix, S ∈ R^{n×n} is the singular-value matrix, and V ∈ R^{τ×n} is the right basis matrix;
(2d) representing the observation matrix C of the aurora video by the left basis matrix U, i.e. C = U, and obtaining the dynamic texture feature matrix X = S Vᵀ, X ∈ R^{n×τ}, each column of X being a dynamic texture feature frame x(t), i.e. X = [x(1), x(2), …, x(t), …, x(τ)];
(3) computing, from the dynamic texture feature matrix X, the state-transition matrix A that maps the dynamic texture feature frame x(t) to x(t+1):
A = argmin_A ‖X_{2,…,τ} − A X_{1,…,τ−1}‖²_F = (X_{2,…,τ} X_{1,…,τ−1}ᵀ)(X_{1,…,τ−1} X_{1,…,τ−1}ᵀ)⁻¹,
where X_{1,…,τ−1} = [x(1), x(2), …, x(t), …, x(τ−1)], X_{2,…,τ} = [x(2), x(3), …, x(t), …, x(τ)], and ‖·‖_F denotes the Frobenius norm;
(4) computing the Martin distance from the test aurora video y_test to each training-set sample y_k:
(4a) computing, by the same method used for the observation matrix C and state-transition matrix A of the test aurora video y_test, the observation matrix C_k and state-transition matrix A_k of each training sample y_k in the training set {y₁, y₂, …, y_k, …, y_N}, obtaining the observation-matrix set {C₁, C₂, …, C_k, …, C_N} and the state-transition-matrix set {A₁, A₂, …, A_k, …, A_N}, k = 1, 2, …, N;
(4b) computing, from the observation and state-transition matrices, the Martin distance d²(y_test, y_k) from y_test to y_k, k = 1, 2, …, N;
(5) classifying the test aurora video y_test by the nearest-distance rule according to the Martin distances between aurora videos:
sorting the N Martin distances d²(y_test, y_k) obtained in step (4) in ascending order, taking the aurora sequence y_min corresponding to the smallest Martin distance, and assigning the test aurora video y_test to the same class as y_min, completing the classification of y_test.
2. The method according to claim 1, wherein the Martin distance d²(y_test, y_k) from the test aurora video y_test to the training sample y_k in step (4b) is computed as follows:
(4b1) from the observation matrices C and C_k and the state-transition matrices A and A_k, building the extended observability matrices:
Φ_test = [Cᵀ, AᵀCᵀ, (Aᵀ)²Cᵀ, …, (Aᵀ)ⁿCᵀ, …]
Φ_k = [C_kᵀ, A_kᵀC_kᵀ, (A_kᵀ)²C_kᵀ, …, (A_kᵀ)ⁿC_kᵀ, …];
(4b2) from the i-th principal angle θ_i between the extended observability matrices Φ_test and Φ_k, computing the Martin distance:
d²(y_test, y_k) = −ln ∏_{i=1}^{n} cos²θ_i.
CN201510412383.8A 2015-07-14 2015-07-14 Aurora video classification method based on dynamic texture model representation Active CN104992186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510412383.8A CN104992186B (en) 2015-07-14 2015-07-14 Aurora video classification methods based on dynamic texture model characterization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510412383.8A CN104992186B (en) 2015-07-14 2015-07-14 Aurora video classification methods based on dynamic texture model characterization

Publications (2)

Publication Number Publication Date
CN104992186A true CN104992186A (en) 2015-10-21
CN104992186B CN104992186B (en) 2018-04-17

Family

ID=54303999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510412383.8A Active CN104992186B (en) 2015-07-14 2015-07-14 Aurora video classification methods based on dynamic texture model characterization

Country Status (1)

Country Link
CN (1) CN104992186B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631471A (en) * 2015-12-23 2016-06-01 西安电子科技大学 Aurora sequence classification method with fusion of single frame feature and dynamic texture model
CN108810620A (en) * 2018-07-18 2018-11-13 腾讯科技(深圳)有限公司 Identify method, computer equipment and the storage medium of the material time point in video

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130268476A1 (en) * 2010-12-24 2013-10-10 Hewlett-Packard Development Company, L.P. Method and system for classification of moving objects and user authoring of new object classes
CN103632166A (en) * 2013-12-04 2014-03-12 西安电子科技大学 Aurora image classification method based on latent theme combining with saliency information

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130268476A1 (en) * 2010-12-24 2013-10-10 Hewlett-Packard Development Company, L.P. Method and system for classification of moving objects and user authoring of new object classes
CN103632166A (en) * 2013-12-04 2014-03-12 西安电子科技大学 Aurora image classification method based on latent theme combining with saliency information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIAN WANG et al.: "Spatial texture based automatic classification of dayside aurora in all-sky images", Elsevier *
ZHANG PENGXIANG: "Research on classification algorithms for all-sky aurora images based on texture features" (基于纹理特征的全天空极光图像分类算法研究), Wanfang Data *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631471A (en) * 2015-12-23 2016-06-01 西安电子科技大学 Aurora sequence classification method with fusion of single frame feature and dynamic texture model
CN108810620A (en) * 2018-07-18 2018-11-13 腾讯科技(深圳)有限公司 Identify method, computer equipment and the storage medium of the material time point in video
CN108810620B (en) * 2018-07-18 2021-08-17 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for identifying key time points in video

Also Published As

Publication number Publication date
CN104992186B (en) 2018-04-17

Similar Documents

Publication Publication Date Title
Benyang et al. Safety helmet detection method based on YOLO v4
Alvarez et al. Semantic road segmentation via multi-scale ensembles of learned features
CN112836646B (en) Video pedestrian re-identification method based on channel attention mechanism and application
CN105335975B (en) Polarization SAR image segmentation method based on low-rank decomposition and statistics with histogram
CN105930846A (en) Neighborhood information and SVGDL (support vector guide dictionary learning)-based polarimetric SAR image classification method
CN104992187B (en) Aurora video classification methods based on tensor dynamic texture model
CN103824062B (en) Motion identification method for human body by parts based on non-negative matrix factorization
CN103268484A (en) Design method of classifier for high-precision face recognitio
Liu et al. Semantic segmentation based on deeplabv3+ and attention mechanism
CN114283285A (en) Cross consistency self-training remote sensing image semantic segmentation network training method and device
CN104992186A (en) Aurora video classification method based on dynamic texture model representation
Yang et al. Exploring rich intermediate representations for reconstructing 3D shapes from 2D images
CN105631471A (en) Aurora sequence classification method with fusion of single frame feature and dynamic texture model
CN105718934A (en) Method for pest image feature learning and identification based on low-rank sparse coding technology
CN104463091A (en) Face image recognition method based on LGBP feature subvectors of image
CN105844662A (en) Aurora motion direction determining method based on hydrodynamics
Xu et al. Research on recognition of landslides with remote sensing images based on extreme learning machine
Shi et al. Building footprint extraction with graph convolutional network
CN105427351A (en) High spectral image compression sensing method based on manifold structuring sparse prior
Jiang et al. Semantic segmentation of remote sensing images based on dual‐channel attention mechanism
Islam et al. TL-GAN: Transfer Learning with Generative Adversarial Network Model for Satellite Image Resolution Enhancement
CN103886293B (en) Human body behavior recognition method based on history motion graph and R transformation
CN103824297B (en) In complicated high dynamic environment, background and the method for prospect is quickly updated based on multithreading
CN105426811A (en) Crowd abnormal behavior and crowd density recognition method
CN105139014A (en) Method for describing image local characteristic descriptor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant