CN106056540A - Video time-space super-resolution reconstruction method based on robust optical flow and Zernike invariant moment - Google Patents

Video time-space super-resolution reconstruction method based on robust optical flow and Zernike invariant moment

Info

Publication number
CN106056540A
CN106056540A
Authority
CN
China
Prior art keywords
video sequence
motion
space
time
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610538641.1A
Other languages
Chinese (zh)
Inventor
杜军平
梁美玉
刘红刚
曹守鑫
李玲慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN201610538641.1A priority Critical patent/CN106056540A/en
Publication of CN106056540A publication Critical patent/CN106056540A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/004 Predictors, e.g. intraframe, interframe coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments. The method comprises the following steps: performing motion analysis on a video sequence in the spatiotemporal domain, constructing a robust optical flow motion estimation model of the video sequence, and obtaining motion vectors; performing bidirectional spatiotemporal motion compensation on the video sequence according to the motion vectors to obtain a compensated video sequence; and performing spatiotemporal super-resolution reconstruction on the compensated video sequence using a cross-scale fusion strategy built on a fast non-local fuzzy registration mechanism based on Zernike invariant moments, to obtain a video sequence with high spatiotemporal resolution. The method does not depend on accurate sub-pixel motion estimation, can be applied to various complex motion patterns such as angular rotation and local motion, can provide clear and smooth video information for the accurate identification and tracking of moving objects, and has high practical value.

Description

Video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments
Technical field
The present invention relates to the field of image/video enhancement, and in particular to a video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments.
Background technology
Resolution is an important indicator of the quality of space motion imagery: the higher the resolution, the richer the detailed information about space moving targets that can be obtained from a motion image sequence, and the easier those targets are to identify and track accurately. Factors such as motion or optical blur, undersampling, and noise interference degrade the visual quality of motion image sequences.
Traditional super-resolution reconstruction methods depend on accurate sub-pixel motion estimation and are therefore limited to simple motion patterns such as global translation. Video sequences, however, also contain complex motion patterns: when a sequence contains rotations at different angles, the spatiotemporal similarity between frames becomes very weak, inter-frame information can hardly be exploited effectively, and the quality of super-resolution reconstruction suffers. Traditional motion-vector-based frame interpolation inevitably suffers from motion estimation errors, which produce visible blocking or hole artifacts in the interpolated frames.
Iterative back-projection, maximum a posteriori (MAP) estimation, projection onto convex sets (POCS), and similar methods tend to rely on accurate sub-pixel motion estimation. They obtain good results only for simple motion patterns such as global translation and for single-moving-target scenes; they cannot effectively handle complex motion scenes and struggle to achieve accurate motion estimation, which seriously degrades the quality of super-resolution reconstruction.
Summary of the invention
In view of this, it is an object of the present invention to propose a video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments that no longer depends on accurate sub-pixel motion estimation and adapts to various complex motion patterns.
Based on the above object, the present invention provides a video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments, comprising:
performing motion analysis on a video sequence in the spatiotemporal domain, constructing a robust optical flow motion estimation model of the video sequence, and obtaining motion vectors;
performing bidirectional spatiotemporal motion compensation on the video sequence according to the motion vectors, to obtain a compensated video sequence;
performing spatiotemporal super-resolution reconstruction on the compensated video sequence using a cross-scale fusion strategy with a fast non-local fuzzy registration mechanism based on Zernike invariant moments, to obtain a video sequence with high spatiotemporal resolution.
In an optional embodiment, performing motion analysis on the video sequence in the spatiotemporal domain, constructing the robust optical flow motion estimation model of the video sequence, and obtaining the motion vectors comprises:
computing a data term jointly driven by a brightness conservation constraint and a gradient conservation constraint;
introducing a motion-structure-adaptive strategy into the motion smoothness constraint of the optical flow objective energy function to compute a regularization term;
computing a non-local term according to adaptive weighted median filtering;
establishing an optical flow estimation objective energy function comprising the data term, the regularization term, and the non-local term, and computing the motion vectors by minimizing this objective energy function.
In an optional embodiment, performing bidirectional spatiotemporal motion compensation on the video sequence according to the motion vectors comprises:
computing the energy value of the pixel at coordinates $(x,y)$ in the $n$-th frame as

$$I_n(x,y)=\lambda_1\times I_{n-1}(x+0.5u,\;y+0.5v)+\lambda_2\times I_{n+1}(x-0.5u,\;y-0.5v),$$

where $I_n(x,y)$ denotes the energy value of the pixel at coordinates $(x,y)$ in the $n$-th frame and $(u,v)$ is the motion vector.
In an optional embodiment, performing spatiotemporal super-resolution reconstruction on the compensated video sequence using the cross-scale fusion strategy with the fast non-local fuzzy registration mechanism based on Zernike invariant moments, to obtain the video sequence with high spatiotemporal resolution, comprises:
processing the compensated video sequence with an iterative-curvature-based interpolation mechanism to obtain an initial estimated sequence;
processing the initial estimated sequence with the fast non-local fuzzy registration mechanism based on Zernike invariant moments, and performing cross-scale fusion on consecutive video frames at different spatiotemporal scales to obtain a fused estimated sequence;
processing the fused estimated sequence with deblurring and an iterative update mechanism to obtain the video sequence with high spatiotemporal resolution.
As can be seen from the above, the video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments provided by the present invention does not rely on accurate sub-pixel motion estimation, is applicable to various complex motion patterns such as angular rotation and local motion, and has good noise robustness and rotational invariance. The invention improves the overall performance of spatiotemporal super-resolution reconstruction, can provide clearer and smoother video information for the accurate identification and tracking of moving targets, and has high practical value.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments provided by the present invention;
Fig. 2 is a schematic flowchart of an optional embodiment of the method provided by the present invention;
Fig. 3 is a schematic implementation flowchart of an optional embodiment of the method provided by the present invention;
Fig. 4 is a schematic flowchart of another optional embodiment of the method provided by the present invention;
Fig. 5 is a schematic implementation flowchart of another optional embodiment of the method provided by the present invention.
Detailed description of the invention
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
It should be noted that, in the embodiments of the present invention, all expressions using "first" and "second" serve only to distinguish two entities with the same name but unequal values, or two unequal parameters. "First" and "second" are used purely for convenience of expression and should not be construed as limiting the embodiments of the present invention; the subsequent embodiments do not repeat this note.
In one aspect of the present invention, an optional embodiment of the video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments is provided.
Fig. 1 is a schematic flowchart of an embodiment of the method. As shown in the figure, this embodiment comprises:
S10: perform motion analysis on the video sequence in the spatiotemporal domain, construct a robust optical flow motion estimation model of the video sequence, and obtain motion vectors.
S11: perform bidirectional spatiotemporal motion compensation on the video sequence according to the motion vectors, to obtain a compensated video sequence.
S12: perform spatiotemporal super-resolution reconstruction on the compensated video sequence using the cross-scale fusion strategy with the fast non-local fuzzy registration mechanism based on Zernike invariant moments, to obtain a video sequence with high spatiotemporal resolution.
The method provided by the present invention does not depend on accurate sub-pixel motion estimation, is applicable to various complex motion patterns such as angular rotation and local motion, and has good noise robustness and rotational invariance. It improves the overall performance of spatiotemporal super-resolution reconstruction, can provide clearer and smoother video information for the accurate identification and tracking of moving targets, and has high practical value.
Fig. 2 is a schematic flowchart of an optional embodiment of the method, and Fig. 3 is a schematic implementation flowchart of the same embodiment. As shown in the figures, in some optional embodiments of the present invention, step S10, performing motion analysis on the video sequence in the spatiotemporal domain, constructing the robust optical flow motion estimation model of the video sequence, and obtaining the motion vectors, specifically comprises:
S20: compute a data term jointly driven by a brightness conservation constraint and a gradient conservation constraint.
S21: introduce a motion-structure-adaptive strategy into the motion smoothness constraint of the optical flow objective energy function to compute a regularization term.
S22: compute a non-local term according to adaptive weighted median filtering.
S23: establish an optical flow estimation objective energy function comprising the data term, the regularization term, and the non-local term, and compute the motion vectors by minimizing this objective energy function.
This embodiment improves and optimizes the optical flow motion estimation model, further raising its robustness and motion estimation accuracy. This is embodied in the following three aspects.
To strengthen the model's robustness against factors such as illumination changes and noise, this embodiment optimizes the data term of the optical flow motion estimation model, constructing a data term jointly driven by a brightness conservation constraint and a gradient conservation constraint, computed as follows:
$$E_d(u,v)=\sum_{x,y}\bigl|I_2(x+u_x,\,y+v_y)-I_1(x,y)\bigr|+\zeta\bigl|\nabla I_2(x+u_x,\,y+v_y)-\nabla I_1(x,y)\bigr|=\sum_{x,y}\bigl|I_x u_x+I_y v_y+I_t\bigr|+\zeta\bigl|\nabla I_2(x+u_x,\,y+v_y)-\nabla I_1(x,y)\bigr|$$
where the parameter ζ is the weight regulation factor between the two constraints, $I(x,y,t)$ denotes the brightness of pixel $(x,y)$ at time $t$, $I_x, I_y, I_t$ are the partial derivatives of $I(x,y,t)$ with respect to $x$, $y$, and $t$, and $(u_x,v_y)$ is the motion vector obtained by optical flow estimation.
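As an illustrative sketch (not the patent's implementation), the linearized form of the data term above can be evaluated per pixel; the derivative values, flow components, and default ζ below are made-up examples:

```python
def data_term_pixel(Ix, Iy, It, u, v, g1, g2, zeta=0.5):
    """Per-pixel combined data term: linearized brightness-conservation residual
    |Ix*u + Iy*v + It| plus zeta times the gradient-conservation residual |g2 - g1|,
    where g1 and g2 stand in for the gradient values of the two frames."""
    brightness = abs(Ix * u + Iy * v + It)
    gradient = abs(g2 - g1)
    return brightness + zeta * gradient
```

Summing this quantity over all pixels gives $E_d(u,v)$; in practice the derivatives would come from finite differences on the frames.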
To protect motion discontinuities and edge details, this embodiment introduces a motion-structure-adaptive strategy into the motion smoothness regularization term of the optical flow objective energy function. The improved regularization term is defined as follows:
$$E_r(u,v)=\sum_{x,y}\omega(x,y)\bigl(|\nabla u_x|+|\nabla v_y|\bigr)$$
where $|\nabla u_x|+|\nabla v_y|$ is the traditional TV regularization operator and $\omega(x,y)$ is an adaptive weight that protects motion details, computed as

$$\omega(x,y)=\exp\bigl(-|\nabla I_1|^{k}\bigr).$$

According to theoretical analysis and experimental verification, motion estimation performance is best when the parameter $k$ is 0.8.
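A minimal sketch of this adaptive weight, assuming the gradient magnitude of the first frame has already been computed (the function and variable names are illustrative, not from the patent):

```python
import math

def smoothness_weight(grad_mag, k=0.8):
    """Structure-adaptive regularizer weight exp(-|grad I1|^k): close to 1 in flat
    regions (strong smoothing) and small near strong edges (smoothing relaxed)."""
    return math.exp(-abs(grad_mag) ** k)
```

The weight thus suppresses smoothing exactly where motion boundaries coincide with image edges.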
This embodiment also introduces a heuristic non-local term into the optical flow objective energy function, optimizing the optical flow estimate at each layer with adaptive weighted median filtering to further improve the method's accuracy and robustness. Modeling this process mathematically leads to the following term:
$$E_{WNL}=\sum_{x,y}\sum_{(i,j)\in N_{x,y}}\omega_{x,y,i,j}\bigl(|\hat{u}_x-\hat{u}_i|+|\hat{v}_y-\hat{v}_j|\bigr)$$
where $\omega_{x,y,i,j}$ is an adaptive weight factor. The weight is determined jointly by three factors, namely spatial distance, color difference distance, and occlusion state, and is computed as:
$$\omega_{x,y,i,j}=\frac{1}{25}\exp\left\{-\frac{|i-i'|^2+|j-j'|^2}{2\sigma_1^2}-\frac{|I(i,j)-I(i',j')|^2}{4\sigma_2^2}\right\}\frac{o(i',j')}{o(i,j)}$$
where $I(i,j)$ and $I(i',j')$ denote color vectors in the Lab color space, and $o(i,j)$, $o(i',j')$ denote the occlusion variable. Here $\sigma_1=7$ and $\sigma_2=7$.
By considering two factors, the optical flow divergence and the pixel projection difference, occluded regions are detected with the following formulas, from which the occlusion variable $o(x,y)$ is solved:
$$o(x,y)=N\bigl(d(x,y);\,\sigma_d\bigr)\times N\bigl(e(x,y);\,\sigma_e\bigr)$$

$$d(x,y)=\begin{cases}\operatorname{div}(x,y)=\dfrac{\partial u}{\partial x}+\dfrac{\partial v}{\partial y}, & \operatorname{div}(x,y)<0\\[4pt] 0, & \text{otherwise}\end{cases}$$

$$e(x,y)=I(x,y)-I(x+u,\,y+v)$$
where $N(\cdot)$ is assumed to follow a zero-mean Gaussian prior, $d(x,y)$ denotes the flow divergence factor, and $e(x,y)$ denotes the pixel projection difference factor. We take $\sigma_d=0.3$ and $\sigma_e=20$.
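A hedged sketch of the occlusion score, treating $N(\cdot)$ as an unnormalized zero-mean Gaussian (the patent does not specify the normalization, so only the exponential shape is kept here):

```python
import math

def occlusion_score(flow_div, proj_diff, sigma_d=0.3, sigma_e=20.0):
    """o(x, y) = N(d; sigma_d) * N(e; sigma_e), where d keeps only the negative
    part of the flow divergence and e is the pixel projection difference."""
    d = flow_div if flow_div < 0 else 0.0
    gauss = lambda x, s: math.exp(-(x * x) / (2.0 * s * s))
    return gauss(d, sigma_d) * gauss(proj_diff, sigma_e)
```

A pixel with non-negative divergence and a perfect projection gets score 1; strongly negative divergence (flow compressing into the pixel) pushes the score toward 0, flagging likely occlusion.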
The optical flow estimation objective energy function shown below is constructed, and the high-precision optical flow motion vector $(u,v)$ is obtained by minimizing it:
$$E(u,v)=E_d(u,v)+\alpha E_r(u,v)+\beta E_{WNL}$$
where the parameters α and β are the weight regulation factors among the three terms $E_d(u,v)$, $E_r(u,v)$, and $E_{WNL}$.
For the estimated optical flow motion vectors, those in flow boundary regions are optimized with adaptive weighted median filtering in a 15×15 non-local window, while those in non-boundary regions are optimized with uniformly weighted median filtering in a 5×5 neighborhood window. Flow boundary regions are extracted by detecting motion boundaries with the Canny edge detector and dilating the detected boundaries with a 5×5 mask.
In some optional embodiments of the present invention, to obtain a better compensation effect while further improving the method's time efficiency, a bidirectional weighted fusion strategy is introduced for spatiotemporal motion compensation. Specifically, in step S11, performing bidirectional spatiotemporal motion compensation on the video sequence according to the motion vectors to obtain the compensated video sequence, the energy value of the pixel at coordinates $(x,y)$ in the $n$-th frame is computed as

$$I_n(x,y)=\lambda_1\times I_{n-1}(x+0.5u,\;y+0.5v)+\lambda_2\times I_{n+1}(x-0.5u,\;y-0.5v)$$

where $I_n(x,y)$ denotes the energy value of the pixel at coordinates $(x,y)$ in the $n$-th frame and $(u,v)$ is the motion vector.
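The bidirectional compensation formula can be sketched as follows; nearest-neighbor rounding replaces true sub-pixel sampling here, and the frame layout (lists of rows) is an assumption for illustration:

```python
def compensate_pixel(prev_frame, next_frame, x, y, u, v, lam1=0.5, lam2=0.5):
    """I_n(x, y) = lam1 * I_{n-1}(x + 0.5u, y + 0.5v) + lam2 * I_{n+1}(x - 0.5u, y - 0.5v),
    sampling each neighbor frame half a motion step away (nearest neighbor, clamped)."""
    def sample(frame, fx, fy):
        rows, cols = len(frame), len(frame[0])
        ix = min(max(int(round(fx)), 0), rows - 1)
        iy = min(max(int(round(fy)), 0), cols - 1)
        return frame[ix][iy]
    forward = sample(prev_frame, x + 0.5 * u, y + 0.5 * v)
    backward = sample(next_frame, x - 0.5 * u, y - 0.5 * v)
    return lam1 * forward + lam2 * backward
```

Splitting the motion vector symmetrically across the two neighbor frames is what makes the compensation bidirectional: each interpolated pixel draws half its evidence from the past frame and half from the future frame.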
Fig. 4 is a schematic flowchart of another optional embodiment of the method, and Fig. 5 is a schematic implementation flowchart of the same embodiment. As shown in the figures, some optional embodiments of the present invention propose a fast non-local fuzzy registration mechanism based on Zernike invariant moments, which realizes fast and efficient spatiotemporal super-resolution reconstruction through a cross-scale fusion strategy over multi-frame information. The compensated video sequence is reconstructed and optimized to obtain high-quality compensated video frames, further raising the temporal resolution of the sequence; the mechanism also performs super-resolution reconstruction of the spatial resolution of the original low-resolution video sequence, finally yielding a video sequence with high spatiotemporal resolution. In this embodiment, step S12, performing spatiotemporal super-resolution reconstruction on the compensated video sequence using the cross-scale fusion strategy with the fast non-local fuzzy registration mechanism based on Zernike invariant moments to obtain the video sequence with high spatiotemporal resolution, specifically comprises:
S30: process the compensated video sequence with an iterative-curvature-based interpolation mechanism to obtain an initial estimated sequence. In this embodiment, a novel and efficient iterative-curvature-based interpolation mechanism is introduced to obtain a high-resolution initial estimate of the motion-compensated video sequence.
S31: process the initial estimated sequence with the fast non-local fuzzy registration mechanism based on Zernike invariant moments, performing cross-scale fusion on consecutive video frames at different spatiotemporal scales to obtain a fused estimated sequence. In this embodiment, on the basis of the initial estimated sequence, the fast non-local fuzzy registration mechanism based on Zernike invariant moments performs cross-scale fusion on consecutive video frames at different spatiotemporal scales, realizing super-resolution reconstruction.
S32: process the fused estimated sequence with deblurring and an iterative update mechanism to obtain the video sequence with high spatiotemporal resolution. In this embodiment, deblurring and an iterative update mechanism further optimize the fused reconstruction result and raise the reconstruction quality, yielding the reconstructed high-spatiotemporal-resolution video sequence.
In an optional implementation of this embodiment, step S30 introduces a fast and efficient iterative curvature-based interpolation (ICBI) mechanism, which provides an effective high-resolution initial estimate for the subsequent fusion reconstruction. The initial energy estimate of each interpolated pixel $I(2u+1,2v+1)$ is determined by the following computation:
$$I(2u+1,2v+1)=\begin{cases}\dfrac{I(2u,2v)+I(2u+2,2v+2)}{2}, & v_1(2u+1,2v+1)<v_2(2u+1,2v+1)\\[6pt]\dfrac{I(2u+2,2v)+I(2u,2v+2)}{2}, & v_1(2u+1,2v+1)\ge v_2(2u+1,2v+1)\end{cases}$$

$$v_1(2u+1,2v+1)=I(2u-2,2v+2)+I(2u,2v)+I(2u+2,2v-2)-3I(2u,2v+2)-3I(2u+2,2v)+I(2u,2v+4)+I(2u+2,2v+2)+I(2u+4,2v)$$

$$v_2(2u+1,2v+1)=I(2u,2v-2)+I(2u+2,2v)+I(2u+4,2v+2)-3I(2u,2v)-3I(2u+2,2v+2)+I(2u-2,2v)+I(2u,2v+2)+I(2u+2,2v+4)$$
where $v_1(2u+1,2v+1)$ and $v_2(2u+1,2v+1)$ denote the second derivatives of the eight-neighborhood pixel energy values along the two diagonal directions. The coarse estimate obtained above must be updated iteratively to achieve a higher-quality interpolation; the coarse estimated value $I(2u+1,2v+1)$ is refined through the following equation:
$$E(2u+1,2v+1)=\alpha E_c(2u+1,2v+1)+\beta E_e(2u+1,2v+1)+\gamma E_i(2u+1,2v+1)$$
where $E_c$, $E_e$, and $E_i$ denote the curvature continuity energy, curvature enhancement energy, and curvature smoothing energy respectively, and the parameters α, β, and γ are weight regulation factors controlling the proportions of the three energies.
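The directional step of ICBI above can be sketched on a low-resolution grid L with $I(2i,2j)=L[i][j]$; the comparison $v_1<v_2$ follows the formula as stated, the indexing convention is an assumption, and the iterative curvature refinement is omitted:

```python
def icbi_pixel(L, a, b):
    """Estimate high-res pixel I(2a+1, 2b+1) from the known even-coordinate pixels,
    averaging along the diagonal whose second derivative (v1 or v2) is smaller."""
    v1 = (L[a-1][b+1] + L[a][b] + L[a+1][b-1] - 3*L[a][b+1]
          - 3*L[a+1][b] + L[a][b+2] + L[a+1][b+1] + L[a+2][b])
    v2 = (L[a][b-1] + L[a+1][b] + L[a+2][b+1] - 3*L[a][b]
          - 3*L[a+1][b+1] + L[a-1][b] + L[a][b+1] + L[a+1][b+2])
    if v1 < v2:
        return (L[a][b] + L[a+1][b+1]) / 2.0    # average along one diagonal
    return (L[a+1][b] + L[a][b+1]) / 2.0        # average along the other diagonal
```

On a constant or linear patch both second derivatives vanish and either diagonal reproduces the local value, which is why the initial estimate is already reasonable before the energy-based refinement.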
In an optional implementation of this embodiment, after obtaining the initial estimated sequence, step S31 is executed: the initial estimated sequence is processed with the fast non-local fuzzy registration mechanism based on Zernike invariant moments, and consecutive video frames at different spatiotemporal scales are fused across scales to obtain the fused estimated sequence.
Optionally, the proposed fast non-local fuzzy registration mechanism introduces an adaptive region-coherence determination strategy based on regional average energy. For each pixel $(k,l)$ to be reconstructed, the neighborhood region of every pixel $(i,j)$ in its non-local search region is judged for correlation and classified as correlated or uncorrelated; only correlated regions participate in the weight computation, further improving the method's time efficiency. The correlation judgment introduces an adaptive threshold $\delta_{adap}$: two regions are judged correlated if
$$\bigl|\bar{E}(k,l)-\bar{E}(i,j)\bigr|<\delta_{adap}$$
The size of the threshold is determined adaptively by the average energy $\bar{E}(k,l)$ of the neighborhood region of the pixel $(k,l)$ to be reconstructed, enabling the inter-region correlation judgment. The adaptive threshold is computed as:
$$\delta_{adap}=\lambda\,\bar{E}(k,l)$$
where λ is the regulation factor controlling $\delta_{adap}$; the reconstruction effect is best when λ is 0.08.
The weight computation formula based on Zernike invariant moment feature similarity is constructed as follows:
$$\omega_{ezer}[k,l,i,j,t]=\frac{1}{C(k,l)}\times RFS\bigl(R(k,l),R(i,j)\bigr)=\begin{cases}\dfrac{1}{C(k,l)}\exp\left\{-\dfrac{\sum\bigl\|ZM(k,l)-ZM_\tau(i,j)\bigr\|_2^2}{\varepsilon^2}\right\}, & \bigl|\bar{E}(k,l)-\bar{E}(i,j)\bigr|<\delta_{adap}\\[8pt] 0, & \text{otherwise}\end{cases}$$

$$C(k,l)=\sum_{(i,j)\in N_{nonloc}(k,l)}\exp\left\{-\frac{\sum\bigl\|ZM(k,l)-ZM_\tau(i,j)\bigr\|_2^2}{\varepsilon^2}\right\}$$
where $(k,l)$ denotes the pixel to be reconstructed, $(i,j)$ denotes a pixel in its non-local region $N_{nonloc}(k,l)$, the parameter ε controls the decay rate of the characteristic function and thus of the weights, and $C(k,l)$ denotes the normalization constant.
Once the weights $\omega_{ezer}[k,l,i,j,t]$ are determined, the high-resolution estimate of each pixel of the video frame to be reconstructed is obtained by cross-scale fusion, that is, a weighted average over the pixels in the non-local regions of its adjacent consecutive frames. Assuming $Z=HX$, where $H$ is the blur factor, the high-resolution estimate of $Z$ is computed as:
$$\hat{Z}_{SR1}[x]=\frac{\displaystyle\sum_{(k,l)\in\Psi}\sum_{t\in[1,\dots,T]}\sum_{(i,j)\in N_{nonloc}(k,l)}\omega_{ezer}[k,l,i,j,t]\,y_t[i,j]}{\displaystyle\sum_{(k,l)\in\Psi}\sum_{t\in[1,\dots,T]}\sum_{(i,j)\in N_{nonloc}(k,l)}\omega_{ezer}[k,l,i,j,t]}$$
This yields the fused estimated sequence.
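A hedged sketch of this fusion step, combining the adaptive coherence gate, the Zernike-moment similarity weight, and the normalized weighted average; the values of ε and the flat list of candidate samples are illustrative assumptions:

```python
import math

def zernike_weight(zm_ref, zm_cand, mean_ref, mean_cand, eps=1.0, lam=0.08):
    """Unnormalized weight: zero if the regions fail the adaptive coherence test
    |E(k,l) - E(i,j)| < lam * E(k,l), otherwise exp(-||ZM diff||^2 / eps^2)."""
    if abs(mean_ref - mean_cand) >= lam * mean_ref:
        return 0.0
    d2 = sum((p - q) ** 2 for p, q in zip(zm_ref, zm_cand))
    return math.exp(-d2 / (eps * eps))

def fuse_pixel(weighted_samples):
    """Cross-scale fusion of one pixel: normalized weighted average of candidate
    low-resolution samples (the Z_SR1 formula collapsed to a single pixel)."""
    num = sum(w * y for w, y in weighted_samples)
    den = sum(w for w, _ in weighted_samples)
    return num / den if den > 0 else 0.0
```

Because Zernike moments are rotation-invariant, rotated but otherwise similar patches still receive high weights, which is what lets the fusion exploit inter-frame similarity under the complex motion patterns discussed earlier.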
In an optional implementation of this embodiment, after obtaining the fused estimated sequence, step S32 is executed: the fused estimated sequence is processed with deblurring and an iterative update mechanism to obtain the video sequence with high spatiotemporal resolution. In this optional implementation, an efficient adaptive regularization mechanism is introduced to deblur the fused reconstruction result; the high-spatiotemporal-resolution video sequence X is obtained by minimizing the following objective energy function:
$$\hat{Z}_{SR2}[x]=\|Z-HX\|_2^2+\lambda\,AREG(X)$$
where λ is the weight parameter of the adaptive regularization deblurring process $AREG(X)$.
In another optional implementation, to further raise reconstruction quality, the reconstructed result is iteratively updated and optimized. Each iteration provides a more accurate similarity weight computation for the next, thereby raising the resolution of the final video sequence.
Those of ordinary skill in the art will understand that the discussion of any of the above embodiments is exemplary only and is not intended to imply that the scope of the present disclosure (including the claims) is limited to these examples. Within the spirit of the present invention, technical features in the above embodiments or in different embodiments may also be combined, steps may be implemented in any order, and many other variations of the different aspects of the present invention as described above exist; for brevity they are not provided in detail. Therefore, any omissions, modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (4)

1. A video spatiotemporal super-resolution reconstruction method based on robust optical flow and Zernike invariant moments, characterized by comprising:
performing motion analysis on a video sequence in the spatiotemporal domain, constructing a robust optical flow motion estimation model of the video sequence, and obtaining motion vectors;
performing bidirectional spatiotemporal motion compensation on the video sequence according to the motion vectors, to obtain a compensated video sequence;
performing spatiotemporal super-resolution reconstruction on the compensated video sequence using a cross-scale fusion strategy with a fast non-local fuzzy registration mechanism based on Zernike invariant moments, to obtain a video sequence with high spatiotemporal resolution.
2. The method according to claim 1, characterized in that performing motion analysis on the video sequence in the spatiotemporal domain, building the robust optical flow motion estimation model of the video sequence, and obtaining the motion vectors comprises:
computing a data term jointly driven by a brightness conservation constraint and a gradient conservation constraint;
introducing a motion-structure-adaptive strategy into the motion smoothness constraint of the optical flow objective energy function, and computing a regularization term;
computing a non-local term according to an adaptive weighted averaging filter;
establishing an optical flow estimation objective energy function comprising the data term, the regularization term, and the non-local term, and computing the motion vectors by minimizing the optical flow estimation objective energy function.
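Claim 2 names the three terms of the energy function but does not write it out. A generic form consistent with those terms, assuming a robust penalty function Ψ and weights α, β, γ left to the implementer (the patent does not specify them), is:

```latex
E(u,v) = \int_{\Omega} \Psi\!\big( |I_{2}(\mathbf{x}+\mathbf{w}) - I_{1}(\mathbf{x})|^{2}
        + \gamma\, |\nabla I_{2}(\mathbf{x}+\mathbf{w}) - \nabla I_{1}(\mathbf{x})|^{2} \big)\, d\mathbf{x}
      \;+\; \alpha \int_{\Omega} \Psi\!\big( |\nabla u|^{2} + |\nabla v|^{2} \big)\, d\mathbf{x}
      \;+\; \beta\, E_{\mathrm{nonlocal}}(u, v)
```

Here w = (u, v) is the flow field, the first integral is the data term combining brightness and gradient conservation, the second is the smoothness regularizer (which the claim makes structure-adaptive), and E_nonlocal is the non-local term driven by the adaptive weighted averaging filter. The motion vectors are obtained by minimizing E over (u, v).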
3. The method according to claim 1, characterized in that performing bidirectional spatiotemporal motion compensation on the video sequence according to the motion vectors comprises:
computing the energy value of the pixel at coordinates (x, y) in the n-th frame as:
I_n(x, y) = λ1 × I_{n−1}(x + 0.5×u, y + 0.5×v) + λ2 × I_{n+1}(x − 0.5×u, y − 0.5×v),
where I_n(x, y) denotes the energy value of the pixel at coordinates (x, y) in the n-th frame, (u, v) is the motion vector, and λ1, λ2 are the weighting coefficients of the two directions.
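The claim-3 formula can be sketched directly as a per-pixel warp-and-blend. The version below uses nearest-neighbour sampling and equal weights λ1 = λ2 = 0.5 for simplicity; the patent does not specify the interpolation kernel or the values of the weights, so those are assumptions of this sketch.

```python
import numpy as np

def bidirectional_compensate(prev_frame, next_frame, u, v, lam1=0.5, lam2=0.5):
    """Bidirectional motion compensation per claim 3:
    I_n(x, y) = lam1 * I_{n-1}(x + 0.5u, y + 0.5v)
              + lam2 * I_{n+1}(x - 0.5u, y - 0.5v).
    u, v are per-pixel motion components (arrays or scalars)."""
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # sample the previous frame half a motion step forward
    xp = np.clip(np.rint(xs + 0.5 * u).astype(int), 0, w - 1)
    yp = np.clip(np.rint(ys + 0.5 * v).astype(int), 0, h - 1)
    # sample the next frame half a motion step backward
    xn = np.clip(np.rint(xs - 0.5 * u).astype(int), 0, w - 1)
    yn = np.clip(np.rint(ys - 0.5 * v).astype(int), 0, h - 1)
    return lam1 * prev_frame[yp, xp] + lam2 * next_frame[yn, xn]
```

With zero motion this reduces to a plain average of the neighbouring frames, which is a quick sanity check on the half-step warping.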
4. The method according to claim 1, characterized in that using the cross-scale fusion strategy of the fast non-local fuzzy registration mechanism based on Zernike invariant moments to perform spatiotemporal super-resolution reconstruction on the compensated video sequence and obtain the video sequence with high spatiotemporal resolution comprises:
processing the compensated video sequence with an interpolation mechanism based on iterative curvature, to obtain an initial estimated sequence;
processing the initial estimated sequence with the fast non-local fuzzy registration mechanism based on Zernike invariant moments, fusing successive video frames at different spatiotemporal scales across scales, to obtain a fused estimated sequence;
processing the fused estimated sequence with blur processing and an iterative update mechanism, to obtain the video sequence with high spatiotemporal resolution.
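Claim 4 does not spell out the blur operator or the update rule of its final stage. Classic iterative back-projection is one standard instance of such a "blur processing and iterative update mechanism": upscale an initial estimate, then repeatedly correct it so that its simulated low-resolution version matches the input. The sketch below uses block averaging as a stand-in blur/downsample operator and nearest-neighbour upscaling in place of the claim's iterative-curvature interpolation, so it is a structural illustration, not the patented method.

```python
import numpy as np

def iterative_back_projection(lr, scale=2, n_iter=10, step=1.0):
    """Iterative back-projection sketch of claim 4's update stage.
    lr: low-resolution frame; returns a high-resolution estimate whose
    block-averaged downsampling matches lr."""
    h, w = lr.shape
    # initial estimate: nearest-neighbour upscaling (stand-in for the
    # iterative-curvature interpolation of the first stage)
    hr = np.kron(lr, np.ones((scale, scale)))
    for _ in range(n_iter):
        # simulate the imaging process: blur + downsample via block averaging
        sim = hr.reshape(h, scale, w, scale).mean(axis=(1, 3))
        # back-project the residual onto the high-resolution grid
        residual = lr - sim
        hr += step * np.kron(residual, np.ones((scale, scale)))
    return hr
```

In the full pipeline of claim 4, the cross-scale fused estimate from the registration stage would replace the nearest-neighbour initialization, and the true blur model would replace the block average.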
CN201610538641.1A 2016-07-08 2016-07-08 Video time-space super-resolution reconstruction method based on robust optical flow and Zernike invariant moment Pending CN106056540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610538641.1A CN106056540A (en) 2016-07-08 2016-07-08 Video time-space super-resolution reconstruction method based on robust optical flow and Zernike invariant moment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610538641.1A CN106056540A (en) 2016-07-08 2016-07-08 Video time-space super-resolution reconstruction method based on robust optical flow and Zernike invariant moment

Publications (1)

Publication Number Publication Date
CN106056540A true CN106056540A (en) 2016-10-26

Family

ID=57185223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610538641.1A Pending CN106056540A (en) 2016-07-08 2016-07-08 Video time-space super-resolution reconstruction method based on robust optical flow and Zernike invariant moment

Country Status (1)

Country Link
CN (1) CN106056540A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242891A (en) * 2018-08-03 2019-01-18 天津大学 Image registration method based on an improved optical flow field model
CN109658361A (en) * 2018-12-27 2019-04-19 辽宁工程技术大学 Motion scene super-resolution reconstruction method considering motion estimation errors
CN109819321A (en) * 2019-03-13 2019-05-28 中国科学技术大学 Video super-resolution enhancement method
CN110163892A (en) * 2019-05-07 2019-08-23 国网江西省电力有限公司检修分公司 Learning rate progressive updating method based on motion estimation interpolation and dynamic modeling system
CN114419517A (en) * 2022-01-27 2022-04-29 腾讯科技(深圳)有限公司 Video frame processing method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060126019A1 (en) * 2004-12-10 2006-06-15 Junzhong Liang Methods and systems for wavefront analysis
CN102722863A (en) * 2012-04-16 2012-10-10 天津大学 Depth map super-resolution reconstruction method using an autoregressive model
CN103210645A (en) * 2010-09-10 2013-07-17 汤姆逊许可公司 Methods and apparatus for decoding video signals using motion compensated example-based super-resolution for video compression

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060126019A1 (en) * 2004-12-10 2006-06-15 Junzhong Liang Methods and systems for wavefront analysis
CN103210645A (en) * 2010-09-10 2013-07-17 汤姆逊许可公司 Methods and apparatus for decoding video signals using motion compensated example-based super-resolution for video compression
CN102722863A (en) * 2012-04-16 2012-10-10 天津大学 Depth map super-resolution reconstruction method using an autoregressive model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘红刚 et al.: "Spatiotemporal super-resolution reconstruction of moving image sequences based on Zernike moments", Journal of Central South University (Science and Technology) *
梁美玉: "Research on cross-scale-space moving image enhancement and reconstruction", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242891A (en) * 2018-08-03 2019-01-18 天津大学 Image registration method based on an improved optical flow field model
CN109658361A (en) * 2018-12-27 2019-04-19 辽宁工程技术大学 Motion scene super-resolution reconstruction method considering motion estimation errors
CN109658361B (en) * 2018-12-27 2022-12-06 辽宁工程技术大学 Motion scene super-resolution reconstruction method considering motion estimation errors
CN109819321A (en) * 2019-03-13 2019-05-28 中国科学技术大学 Video super-resolution enhancement method
CN109819321B (en) * 2019-03-13 2020-06-26 中国科学技术大学 Video super-resolution enhancement method
CN110163892A (en) * 2019-05-07 2019-08-23 国网江西省电力有限公司检修分公司 Learning rate progressive updating method based on motion estimation interpolation and dynamic modeling system
CN110163892B (en) * 2019-05-07 2023-06-20 国网江西省电力有限公司检修分公司 Learning rate progressive updating method based on motion estimation interpolation and dynamic modeling system
CN114419517A (en) * 2022-01-27 2022-04-29 腾讯科技(深圳)有限公司 Video frame processing method and device, computer equipment and storage medium
CN114419517B (en) * 2022-01-27 2024-09-27 腾讯科技(深圳)有限公司 Video frame processing method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN106056540A (en) Video time-space super-resolution reconstruction method based on robust optical flow and Zernike invariant moment
CN111210477B (en) Method and system for positioning moving object
CN103426184B (en) A kind of optical flow tracking method and apparatus
CN107274336B (en) A kind of Panorama Mosaic method for vehicle environment
CN102202164B (en) Motion-estimation-based road video stabilization method
CN109741356B (en) Sub-pixel edge detection method and system
CN107025632B (en) Image super-resolution reconstruction method and system
CN103606132B (en) Based on the multiframe Digital Image Noise method of spatial domain and time domain combined filtering
CN109615653B (en) Leakage water area detection and identification method based on deep learning and visual field projection model
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
CN101551901B (en) Method for compensating and enhancing dynamic shielded image in real time
CN103106667A (en) Motion target tracing method towards shielding and scene change
CN106709878B (en) A kind of rapid image fusion method
CN103903278A (en) Moving target detection and tracking system
CN104952102B (en) Towards the unified antialiasing method of delay coloring
CN111145094A (en) Depth map enhancement method based on surface normal guidance and graph Laplace prior constraint
CN112927251B (en) Morphology-based scene dense depth map acquisition method, system and device
CN111383182A (en) Image denoising method and device and computer readable storage medium
CN104574443B (en) The cooperative tracking method of moving target between a kind of panoramic camera
CN115063599A (en) Wavelet optical flow estimation and image-related deformation identification method applied to small and medium reservoir dam monitoring
CN114719873B (en) Low-cost fine map automatic generation method and device and readable medium
CN106289181A (en) A kind of real-time SLAM method that view-based access control model is measured
CN106920213B (en) Method and system for acquiring high-resolution image
CN111950599B (en) Dense visual odometer method for fusing edge information in dynamic environment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161026
