CN101511024A - Movement compensation method of real time electronic steady image based on motion state recognition - Google Patents

Movement compensation method of real time electronic steady image based on motion state recognition

Info

Publication number
CN101511024A
Authority
CN
China
Prior art keywords
motion
parameter
movement
compensation
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200910081057
Other languages
Chinese (zh)
Inventor
赵丹培
冯昊
姜志国
安萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 200910081057 priority Critical patent/CN101511024A/en
Publication of CN101511024A publication Critical patent/CN101511024A/en
Pending legal-status Critical Current

Abstract

The invention relates to a motion compensation method based on motion state recognition for real-time electronic image stabilization, comprising the steps of: (1) training the likelihood function of each motion type; (2) building a motion compensation model; (3) calculating the offset of the current frame image relative to the centre point of the output window; (4) filtering the motion components and calculating the ideal motion parameters and the linear fitting parameters; (5) recognizing the motion state of the current frame image and setting the motion compensation parameters; (6) determining the output position of the current frame image from the three-stage motion compensation quantities. The method divides motion compensation into three parts, namely jitter compensation, smooth motion compensation and deviation compensation, and divides camera motion into staring shooting and scanning shooting. By recognizing the current motion state of the scene, the proportions of the three motion compensation parameters are adaptively adjusted, which resolves the "over-smoothing" and "under-smoothing" problems in motion compensation. The method effectively solves the image stabilization problems that arise under various compound motion states and during free changes of shooting mode, and achieves the goal of outputting stabilized video in real time.

Description

Motion compensation method based on motion state recognition in real-time electronic image stabilization
(1) Technical field
The invention belongs to the field of image processing applications and relates to a motion compensation method for real-time electronic image stabilization of video images. Through inter-frame motion estimation and motion compensation, the method eliminates the influence on the video image sequence caused by irregular movement of the shooting carrier and outputs a stable video image in real time, thereby improving the image quality of video equipment and the accuracy of subsequent processing by the system.
(2) Technical background
With the wide application of optical imaging equipment in traffic, medicine, precision guidance, military reconnaissance, aerospace and other fields, the requirements on image quality are increasingly high. The video signals collected by shooting platforms such as hand-held cameras, vehicle-mounted camera systems, aircraft, missiles and satellites contain not only the smooth motion of the camera carrier but also many random motions, such as low-frequency jitter, high-frequency jitter, irregular rocking and perturbation; the motion types involved mainly include rotation, translation and scale change. Such random motion makes the captured video unstable, seriously affects the visual effect, hinders observation and monitoring, easily causes visual fatigue or even emotional changes in the operator, and is also unfavourable for subsequent processing.
To avoid the influence of the above irregular random motion on the imaging quality of optical equipment, it is necessary to distinguish the smooth motion of the shooting carrier from its random jitter, eliminate the random jitter by image processing methods and retain only the smooth motion of the camera itself, so that the output image sequence is steady and clear. This is the electronic image stabilization technique. It has the advantages of low cost, good effectiveness and flexible application, has attracted wide attention and is developing rapidly both at home and abroad.
Electronic image stabilization involves two key techniques: (1) inter-frame motion estimation, which describes the smooth motion trajectory of the camera by estimating the motion parameters between adjacent frames or multiple frames; these motion relations include translation, rotation and scale change; (2) motion compensation, which analyses the smooth motion component and the random jitter component in the motion trajectory and removes the random jitter while retaining the smooth motion component.
Motion compensation is a key technique in electronic image stabilization. It retains the smooth motion of the camera in the global motion vector, removes the unsteady motion from the trajectory, and determines the position of the output image in the field of view. The video shot by a camera can be divided into two broad classes: (1) staring shooting, in which the camera keeps shooting the same scene, so that all inter-frame motion of the video is irregular jitter that needs to be removed; (2) scanning shooting, in which the camera moves freely with its carrier, so that the inter-frame motion contains both the smooth motion of the camera with the carrier and the random jitter of the camera. A video captured in practice can be approximated as a combination of these two shooting modes and the transitions between them. The simplest method takes a certain frame as the reference frame and registers every subsequent frame directly to it; its advantage is that a very stable video is obtained, so it suits staring shooting, but it cannot handle scanning shooting and is easily affected by accumulated error and noise. To handle scanning-shot video, filtering is usually adopted to smooth the motion trajectory, but this cannot properly handle videos that mix staring and scanning shooting. On the one hand, although a small filter bandwidth adapts to changes of motion form, sharp noise seriously damages the stabilization effect and causes violent shaking or jumping over several consecutive frames, known as "under-smoothing"; on the other hand, although heavy smoothing avoids the influence of sharp noise to some extent, it also suppresses the smooth motion as if it were noise and makes the video lag, known as "over-smoothing". Therefore a single motion compensation method rarely achieves a satisfactory result. In addition, existing methods usually do not consider the transition between the scanning and staring shooting states, so motion compensation lags behind the change of the actual motion and degrades the stabilization effect. The Chinese invention patent application with publication number CN101281650A adopts a reference-frame method, which is only suitable for stabilizing video sequences of staring shooting. The Chinese invention patent application with publication number CN101316368A adopts a filtering method for motion compensation and does not consider the shooting state. The paper "Full-frame Video Stabilization with Motion Inpainting" by Yasuyuki Matsushita, Eyal Ofek, Weina Ge, Xiaoou Tang and Heung-Yeung Shum (IEEE PAMI, Vol. 28, No. 7, July 2006) adopts motion compensation based on optical flow and filtering, but its stabilization effect on video sequences with compound motion is also unsatisfactory.
In summary, existing electronic image stabilization techniques still have many defects and shortcomings in their ability to stabilize arbitrary random motion of the shooting carrier; in particular, the motion compensation methods cannot adapt to multiple motion states, so the optical imaging equipment cannot effectively eliminate random jitter or output a stable video that truly reflects the motion of the shooting carrier, the images may be blurred or jumpy, and subsequent image processing becomes difficult.
(3) Summary of the invention
The object of the invention is to overcome the defects and shortcomings of the motion compensation methods in conventional electronic image stabilization described above and to perform proper motion compensation during shooting under multiple compound motion states. A motion compensation method based on motion state recognition is proposed. The method divides motion compensation into three parts, namely jitter compensation, smooth motion compensation and deviation compensation, and divides camera motion into staring shooting and scanning shooting. By recognizing the motion state trajectory of the scene and adaptively adjusting the proportions of the three compensation parts according to the motion classification, the "over-smoothing" and "under-smoothing" problems in motion compensation are solved. The invention effectively solves the image stabilization problems under multiple compound motion states and during free switching of shooting modes, satisfies the stabilization requirements of video image sequences shot arbitrarily in complex environments, and achieves the goal of outputting a stable video in real time, which not only improves the visual observation effect but also guarantees the precision of subsequent processing.
The technical solution adopted by the invention is as follows:
The motion compensation method based on motion state recognition in real-time electronic image stabilization of the invention comprises the following concrete steps:
Step 1. Training of the likelihood function of each motion type
To recognize the motion state and type of the shot scene, the likelihood function of each motion type needs to be trained before stabilization. In step 5 below, the maximum a posteriori probability method is adopted to estimate the motion state, so the likelihood function of each state must be obtained by sample training. The concrete procedure is as follows:
(1) Collect training video samples
Training samples are collected according to the application background (e.g. vehicle-mounted, airborne or hand-held equipment): video samples of both scanning shooting and staring shooting are collected for this application background, and each video sample is a video clip containing only one motion type.
(2) Inter-frame motion estimation and shooting-state labelling
A real-time motion estimation method based on polar spatial-layout feature descriptors is applied to the training samples for inter-frame motion estimation, and the inter-frame motion estimation parameters of each frame are recorded. The inter-frame motion parameters may include vertical motion, horizontal motion and rotation. The concrete steps are as follows (an illustrative feature-matching sketch follows the list):
1) construct a Gaussian scale space and extract feature points;
2) build the polar spatial-layout feature descriptor;
3) match the local feature points;
4) compute the inter-frame motion model and obtain the inter-frame motion estimation parameters.
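Purely for illustration, the minimal Python sketch below estimates an inter-frame translation and rotation from matched feature points. The patent's polar spatial-layout feature descriptor is not reproduced here; OpenCV ORB features and a RANSAC-fitted partial affine model are substituted as stand-ins, and the function name estimate_interframe_motion is an assumption rather than something specified by the patent.

```python
# Minimal sketch (assumption: ORB features stand in for the patent's
# polar spatial-layout descriptor, which is not reproduced here).
import cv2
import numpy as np

def estimate_interframe_motion(prev_gray, curr_gray):
    """Return (dx, dy, rotation) between two consecutive grey frames."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Partial affine (translation + rotation + scale), robust to outliers.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    dx, dy = M[0, 2], M[1, 2]
    theta = np.arctan2(M[1, 0], M[0, 0])   # rotation angle in radians
    return dx, dy, theta
```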
(3) Calculate the linear fitting parameters
Suppose the i-th group of training samples consists of m frames and assume the motion trajectory is linear in the frame number, i.e. it can be expressed as
y_t = a_i · t + b_i,  t = 1, ..., m
The fitting parameters a_i and b_i are obtained by solving the above equation with least squares, where y_t is the inter-frame motion parameter of frame t obtained by motion estimation (horizontal, vertical or rotational).
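As a minimal illustration of this fitting step, the sketch below fits a straight line to one inter-frame motion parameter over an m-frame clip with NumPy least squares; the function name fit_motion_trend is an assumption.

```python
import numpy as np

def fit_motion_trend(y):
    """Least-squares fit y_t ~ a*t + b over a clip of m frames.

    y : 1-D sequence of per-frame inter-frame motion parameters
        (e.g. horizontal shift in pixels).
    Returns the slope a (motion trend) and intercept b (motion amplitude).
    """
    y = np.asarray(y, dtype=float)
    t = np.arange(1, len(y) + 1, dtype=float)
    A = np.vstack([t, np.ones_like(t)]).T          # design matrix [t, 1]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

# Example: a slowly drifting horizontal motion with jitter.
rng = np.random.default_rng(0)
a, b = fit_motion_trend(0.3 * np.arange(30) + rng.normal(0, 0.5, 30))
```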
(4) Calculate the likelihood function parameters
According to the definitions in step 5 below, the parameters of the two likelihood functions p(a|ω_c) and p(b|ω_c) of each class ω_c need to be calculated before stabilization, i.e.:
p(a | ω_c = 'stare') = N(0, σ²_stare,a)
p(b | ω_c = 'stare') = N(0, σ²_stare,b)
p(a | ω_c = 'scan') = N(0, σ²_scan,a)
p(b | ω_c = 'scan') = lnN(μ, σ²_scan,b)
In the invention the likelihood functions adopt the Gaussian function and the log-normal (logarithmic Gaussian) function, whose parameters are the mean μ and the variance σ².
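A sketch of how the likelihood parameters might be estimated from the fitted (a_i, b_i) pairs of the training clips, assuming zero-mean Gaussians for the staring class and for the scanning slope, and a log-normal for the scanning amplitude as written above; the dictionary layout and all names are assumptions.

```python
import numpy as np

def train_likelihood_params(stare_fits, scan_fits):
    """Estimate likelihood parameters from per-clip linear fits.

    stare_fits, scan_fits : lists of (a_i, b_i) slope/intercept pairs
    obtained from staring and scanning training clips respectively.
    """
    stare_a = np.array([a for a, _ in stare_fits])
    stare_b = np.array([b for _, b in stare_fits])
    scan_a = np.array([a for a, _ in scan_fits])
    scan_b = np.abs([b for _, b in scan_fits]) + 1e-6   # amplitudes, kept positive

    return {
        # zero-mean Gaussians: only the variance is learned
        ("stare", "a"): {"sigma2": stare_a.var()},
        ("stare", "b"): {"sigma2": stare_b.var()},
        ("scan", "a"): {"sigma2": scan_a.var()},
        # log-normal for the scanning amplitude: mean/variance of log(b)
        ("scan", "b"): {"mu": np.log(scan_b).mean(),
                        "sigma2": np.log(scan_b).var()},
    }
```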
Step 2. Build the motion compensation model
Define the motion model of the video scene, where x_n is the absolute position parameter of the current frame image with respect to the output window (including translation, rotation and other motions); x_{n-1} is the absolute position parameter of the previous frame with respect to the output window (including translation, rotation and other motions); err(n) is the motion part that needs to be compensated. The position relation between the two frames is:
x_n = x_{n-1} + err(n)
The motion compensation model is defined as:
err(n) = α·wrap(n) + β·motion(n) + γ·departure(n)
where wrap(n) is the jitter component caused by the inter-frame motion, corresponding to the degradation caused by random jitter; its concrete value is computed by the image registration method in motion estimation. wrap(n) is the motion component that must be removed in a stabilization system.
motion(n) is the ideal motion caused by the camera carrier; the direction and speed of the camera motion can be obtained by analysing the inter-frame registration parameters, so the motion of the camera can be compensated.
departure(n) is the deviation compensation part, which keeps the output image at the window centre and corrects slow image drift in time. In the staring state, because of the unsteadiness of shooting and the accumulated error of the algorithm, the viewpoint slowly drifts away from the output window, so a large amount of video data is lost and cannot be corrected in time; this compensation part solves that problem well.
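As a minimal sketch of the model just defined, the function below evaluates err(n) = α·wrap(n) + β·motion(n) + γ·departure(n); the name compensation and the treatment of each term as a scalar (or small NumPy vector) are illustrative assumptions.

```python
def compensation(alpha, beta, gamma, wrap_n, motion_n, departure_n):
    """err(n) = alpha*wrap(n) + beta*motion(n) + gamma*departure(n).

    Each component may be a scalar (e.g. horizontal shift) or a small
    vector of motion parameters (dx, dy, rotation); the weights are scalars.
    """
    return alpha * wrap_n + beta * motion_n + gamma * departure_n
```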
Step 3. Calculate the offset of the output position of the current frame image relative to the window centre point
The distance between the output window and the output image position represents the degree of offset. To keep the motion curve of the image parameters smooth, every image frame must undergo deviation correction. departure(n) is defined as the absolute value of the deviation between the output video and the output window, and the weight of the deviation correction is determined by the parameter γ:
γ = 1 / (threshold − abs(departure)),  0 ≤ γ ≤ 1
where threshold defines the boundary: the deviation is limited within the threshold range and threshold > abs(departure) × 2. The value of threshold is related to the size of the image.
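Assuming the reconstruction of the deviation-weight formula given above (γ = 1 / (threshold − |departure|), clipped to the interval [0, 1]), a small illustrative sketch:

```python
def deviation_weight(departure, threshold):
    """Weight gamma of the deviation-compensation term.

    Assumes gamma = 1 / (threshold - |departure|), clipped to [0, 1];
    threshold depends on the image size and is chosen so that
    threshold > 2 * |departure| in normal operation.
    """
    gamma = 1.0 / max(threshold - abs(departure), 1e-6)   # avoid division by zero
    return min(max(gamma, 0.0), 1.0)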
Step 4. Filter the motion components and calculate the ideal motion parameters and linear fitting parameters
Define T_t^i as the inter-frame transformation matrix. Let t be the frame number of the current frame to be smoothed, define its neighbouring frame set as N = {i : t−k ≤ i ≤ t+k}, and let T_t^i be the transformation matrix between a neighbouring frame and the current frame image. Then the smoothed transformation matrix motion(n) of the current frame is
motion(n) = Σ_{i∈N} T_t^i · G(i − t)
where G(k) = (1 / (√(2π)·δ)) · e^{−k²/(2δ²)} is the Gaussian kernel function. motion(n) is the ideal motion trajectory with the high-frequency jitter removed and reflects the ideal motion of the camera carrier.
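For illustration, the sketch below applies this Gaussian-weighted smoothing to a one-dimensional motion-parameter trajectory (e.g. the per-frame horizontal shift) instead of full transformation matrices; renormalising the truncated kernel is a common practical choice, not something stated in the patent.

```python
import numpy as np

def smooth_motion(trajectory, t, k=5, delta=2.0):
    """Gaussian-weighted smoothing of a motion-parameter trajectory.

    trajectory : 1-D sequence of per-frame motion parameters
    t          : index of the current frame
    k          : half width of the neighbour set N = {i : t-k <= i <= t+k}
    delta      : standard deviation of the Gaussian kernel G
    """
    traj = np.asarray(trajectory, dtype=float)
    lo, hi = max(0, t - k), min(len(traj) - 1, t + k)
    idx = np.arange(lo, hi + 1)
    weights = np.exp(-(idx - t) ** 2 / (2.0 * delta ** 2))
    weights /= weights.sum()                      # normalise the truncated kernel
    return float(np.dot(weights, traj[idx]))
```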
Step 5. Recognize the motion state of the current frame image and set the motion compensation parameters
The jitter compensation coefficient α and the ideal motion compensation coefficient β are determined by recognition: different motion forms, such as remaining static, uniform motion and variable motion, determine different compensation coefficients. The training samples provided by the user are video image sequences shot under the application background that contain jitter but no scene motion; they are used to compute the statistical characteristics of the jitter. The invention fits a linear function to the motion estimation parameters of adjacent frames and takes the parameters of this linear function as the sample: the function parameters embody not only the motion state but also the trend of the motion change.
For the i-th group of samples, a sample pair is obtained by fitting a linear function to the inter-frame motion estimation parameters of m frames; their linear relation with time is
y_t = a_i · t + b_i,  t = 1, ..., m
Solving this equation by least squares yields the slope a_i and the intercept b_i: a_i embodies the motion trend and b_i the motion amplitude. The class is derived by maximum a posteriori probability (MAP):
C = argmax_{c=1,2,3} p(ω_c | a, b) = argmax_{c=1,2,3} p(a, b | ω_c) p(ω_c)
where C is the class. Since the motion trend a and the amplitude b are mutually independent,
C = argmax_{c=1,2,3} p(ω_c | a, b)
  = argmax_{c=1,2,3} p(a, b | ω_c) p(ω_c)
  = argmax_{c=1,2,3} p(a | ω_c) p(b | ω_c) p(ω_c)
Jitter can be regarded as zero-mean Gaussian noise, so the likelihood functions of the staring shooting state are defined as
p(a | ω_c = 'stare') = N(0, σ²_stare,a)
p(b | ω_c = 'stare') = N(0, σ²_stare,b)
The stabilization application background served by the system is related to the camera carrier and to the camera configuration. For uniform motion shooting, the carrier speed is bounded: the motion speed of the carrier cannot be infinitely large, and the larger the speed, the smaller its probability of occurrence; an excessively large motion speed also causes motion blur and cannot be used for stabilization. Therefore the motion speeds of different carriers are usually concentrated within a certain range, so a log-normal distribution can be adopted as the likelihood function of the motion amplitude, with μ being the average motion speed:
p(a | ω_c = 'scan') = N(0, σ²_scan,a)
p(b | ω_c = 'scan') = lnN(μ, σ²_scan,b)
The motion compensation coefficient and the jitter compensation coefficient are set according to the motion state. When ω_c = 'stare', only jitter needs to be compensated, i.e. α = 1, β = 0. In the scanning shooting state, besides compensating jitter, the motion also needs to be compensated, so α = 1, β = 1.
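A compact, illustrative sketch of this recognition step: the fitted (a, b) of the recent frames is classified by maximum a posteriori probability under the likelihoods trained in step 1, and (α, β) are set accordingly. It assumes the parameter dictionary produced by the step-1 sketch, uses SciPy's Gaussian and log-normal densities, and adopts a uniform prior; all names and the prior choice are assumptions rather than the patent's specification.

```python
import numpy as np
from scipy.stats import norm, lognorm

def classify_state(a, b, params, priors=None):
    """MAP classification C = argmax_c p(a|w_c) p(b|w_c) p(w_c).

    params : dict produced by train_likelihood_params() above
    priors : optional dict of class priors p(w_c); uniform if omitted
    """
    priors = priors or {"stare": 0.5, "scan": 0.5}
    scores = {}
    for c in ("stare", "scan"):
        pa = norm.pdf(a, 0.0, np.sqrt(params[(c, "a")]["sigma2"]))
        if c == "stare":
            pb = norm.pdf(b, 0.0, np.sqrt(params[("stare", "b")]["sigma2"]))
        else:  # log-normal likelihood for the scanning amplitude
            s = np.sqrt(params[("scan", "b")]["sigma2"])
            pb = lognorm.pdf(abs(b) + 1e-6, s,
                             scale=np.exp(params[("scan", "b")]["mu"]))
        scores[c] = pa * pb * priors[c]
    return max(scores, key=scores.get)

def compensation_coefficients(state):
    """alpha=1, beta=0 for staring; alpha=1, beta=1 for scanning (step 5)."""
    return (1.0, 0.0) if state == "stare" else (1.0, 1.0)
```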
Step 6. Calculate the output position of the current frame image from the three-stage motion compensation quantities
After the parameters of the motion compensation function have been calculated, the output position of the current frame is computed by
x_n = x_{n-1} + err(n)
where x_n is the absolute position parameter of the current frame with respect to the output window, x_{n-1} is the absolute position parameter of the previous frame with respect to the output window, and err(n) is the motion compensation function between the two frame images. The output position of the current frame image is the value of the motion parameter x_n.
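Tying steps 2 to 5 together, a minimal sketch of this final accumulation: given per-frame components and coefficients, x_n is obtained by accumulating err(n) frame by frame. It reuses the illustrative compensation helper from the step-2 sketch; the array names are assumptions.

```python
def output_positions(wrap, motion, departure, alpha, beta, gamma):
    """Accumulate x_n = x_{n-1} + err(n) over all frames (x_0 = 0).

    wrap, motion, departure : per-frame compensation components
    alpha, beta, gamma      : per-frame coefficient sequences from steps 3 and 5
    Returns the absolute output position x_n of every frame.
    """
    x, positions = 0.0, []
    for n in range(len(wrap)):
        err_n = compensation(alpha[n], beta[n], gamma[n],
                             wrap[n], motion[n], departure[n])   # err(n), step 2
        x = x + err_n
        positions.append(x)
    return positions
```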
Compared with existing methods, the motion compensation method based on motion state recognition in real-time electronic image stabilization of the invention has the following advantages and effects:
(1) The invention proposes a three-stage motion compensation method which, on the one hand, keeps the stabilized viewpoint fixed with respect to the output window and avoids the window drift phenomenon (i.e. the output video drifting away from the output window because of accumulated error), and, on the other hand, avoids the "over-smoothing" and "under-smoothing" problems of conventional motion compensation methods.
The invention divides motion compensation into jitter compensation, smooth motion compensation and deviation compensation and adjusts the proportion of each part through its coefficient. Jitter compensation removes the jitter component in the video, smooth motion compensation compensates the scene motion produced by the carrier motion, and deviation compensation corrects the viewpoint centre of the output window. The stabilization system designed by the invention thus guarantees both the stability and the information content of the output video.
(2) The invention designs a recognition-based motion type classification method and sets different motion compensation parameters by recognizing different types of motion, which avoids both the over-smoothing of scanning shooting and the under-smoothing of staring shooting caused by a single traditional filtering method.
The invention fits the inter-frame motion parameters linearly, projects the fitting results into parameter space and models them probabilistically. During use, the motion state is recognized by the maximum a posteriori probability method and the coefficients of the motion compensation parts are adjusted on this basis. In staring shooting, all inter-frame motion is harmful jitter, so the jitter compensation coefficient should be raised and the smooth motion compensation coefficient reduced to output a stable staring video; in scanning shooting, the motion compensation coefficient and the jitter compensation coefficient should both be raised to adapt to compensation under smooth carrier motion. The stabilization system designed by the invention can quickly adapt to changes of the smooth motion in real-time electronic image stabilization and can also stably smooth out the jitter in the trajectory.
(4) Description of the drawings
Fig. 1 is a flow diagram of the motion compensation method of the invention in the stabilization process
Fig. 2 is a schematic diagram of the training process of the motion type likelihood functions of the invention
Fig. 3a is the scene motion trajectory without smoothing
Fig. 3b is the scene motion trajectory obtained by the traditional filtering method
Fig. 3c is the scene motion trajectory obtained by the motion compensation method of the invention
Fig. 4 shows the stabilization effect of the invention on an arbitrarily shot video image sequence containing random motion
(5) Embodiments
Referring to Fig. 1, the concrete implementation steps of the motion compensation method based on motion state recognition in real-time electronic image stabilization of the invention are as follows:
Step 1. Training of the likelihood function of each motion type
To recognize the motion state and type of the shot scene, the likelihood function of each motion type needs to be trained before stabilization; in step 5 below, the maximum a posteriori probability method is adopted to estimate the motion state, so the likelihood function of each state must be obtained by sample training. The concrete procedure is as follows:
(1) Collect training video samples
Training samples are collected according to the application background (e.g. vehicle-mounted, airborne or hand-held equipment): video samples of both scanning shooting and staring shooting are collected for this application background, and each video sample is a video clip containing only one motion type.
(2) Inter-frame motion estimation and shooting-state labelling
A real-time motion estimation method based on polar spatial-layout feature descriptors is applied to the training samples for inter-frame motion estimation, and the inter-frame motion estimation parameters of each frame are recorded. The inter-frame motion parameters may include vertical motion, horizontal motion and rotation. The concrete steps are:
1) construct a Gaussian scale space and extract feature points;
2) build the polar spatial-layout feature descriptor;
3) match the local feature points;
4) compute the inter-frame motion model and obtain the inter-frame motion estimation parameters.
(3) Calculate the linear fitting parameters
Suppose the i-th group of training samples consists of m frames and assume the motion trajectory is linear in the frame number, i.e. it can be expressed as
y_t = a_i · t + b_i,  t = 1, ..., m
The fitting parameters a_i and b_i are obtained by solving the above equation with least squares, where y_t is the inter-frame motion parameter of frame t obtained by motion estimation (horizontal, vertical or rotational).
(4) Calculate the likelihood function parameters
According to the definitions in step 5, the parameters of the two likelihood functions p(a|ω_c) and p(b|ω_c) of each class ω_c need to be calculated before stabilization, i.e.:
p(a | ω_c = 'stare') = N(0, σ²_stare,a)
p(b | ω_c = 'stare') = N(0, σ²_stare,b)
p(a | ω_c = 'scan') = N(0, σ²_scan,a)
p(b | ω_c = 'scan') = lnN(μ, σ²_scan,b)
In the invention the likelihood functions adopt the Gaussian function and the log-normal function, whose parameters are the mean μ and the variance σ².
Step 2. Build the motion compensation model
Define the motion model of the video scene, where x_n is the absolute position parameter of the current frame image with respect to the output window (including translation, rotation and other motions); x_{n-1} is the absolute position parameter of the previous frame with respect to the output window (including translation, rotation and other motions); err(n) is the motion part that needs to be compensated. The position relation between the two frames is:
x_n = x_{n-1} + err(n)
The motion compensation model is defined as:
err(n) = α·wrap(n) + β·motion(n) + γ·departure(n)
where wrap(n) is the jitter component caused by the inter-frame motion, corresponding to the degradation caused by random jitter; its concrete value is computed by the image registration method in motion estimation; wrap(n) is the motion component that must be removed in a stabilization system.
motion(n) is the ideal motion caused by the camera carrier; the direction and speed of the camera motion can be obtained by analysing the inter-frame registration parameters, so the motion of the camera can be compensated.
departure(n) is the deviation compensation part, which keeps the output image at the window centre and corrects slow image drift in time. In the staring state, because of the unsteadiness of shooting and the accumulated error of the algorithm, the viewpoint slowly drifts away from the output window, so a large amount of video data is lost and cannot be corrected in time; this compensation part solves that problem well.
α, β and γ are the weights of the respective compensations, used to weigh the influence of each part on the overall motion compensation err(n).
Step 3. Calculate the offset of the output position of the current frame image relative to the window centre point
The distance between the output window and the output image position represents the degree of offset. To keep the motion curve of the image parameters smooth, every image frame must undergo deviation correction. departure(n) is defined as the absolute value of the deviation between the output video and the output window, and the weight of the deviation correction is determined by the parameter γ:
γ = 1 / (threshold − abs(departure)),  0 ≤ γ ≤ 1
where threshold defines the boundary: the deviation is limited within the threshold range and threshold > abs(departure) × 2. The value of threshold is related to the size of the image.
Step 4. Filter the motion components and calculate the ideal motion parameters and linear fitting parameters
Define T_t^i as the inter-frame transformation matrix. Let t be the frame number of the current frame to be smoothed, define its neighbouring frame set as N = {i : t−k ≤ i ≤ t+k}, and let T_t^i be the transformation matrix between a neighbouring frame and the current frame image. Then the smoothed transformation matrix motion(n) of the current frame is
motion(n) = Σ_{i∈N} T_t^i · G(i − t)
where G(k) = (1 / (√(2π)·δ)) · e^{−k²/(2δ²)} is the Gaussian kernel function. motion(n) is the ideal motion trajectory with the high-frequency jitter removed and reflects the ideal motion of the camera carrier.
Step 5. Recognize the motion state of the current frame image and set the motion compensation parameters
The jitter compensation coefficient α and the ideal motion compensation coefficient β are determined by recognition: different motion forms, such as remaining static, uniform motion and variable motion, determine different compensation coefficients. The training samples provided by the user are video image sequences shot under the application background that contain jitter but no scene motion; they are used to compute the statistical characteristics of the jitter. The invention fits a linear function to the motion estimation parameters of adjacent frames and takes the parameters of this linear function as the sample: the function parameters embody not only the motion state but also the trend of the motion change.
For the i-th group of samples, a sample pair is obtained by fitting a linear function to the inter-frame motion estimation parameters of m frames; their linear relation with time is
y_t = a_i · t + b_i,  t = 1, ..., m
Solving this equation by least squares yields the slope a_i and the intercept b_i: a_i embodies the motion trend and b_i the motion amplitude. The class is derived by maximum a posteriori probability (MAP):
C = argmax_{c=1,2,3} p(ω_c | a, b) = argmax_{c=1,2,3} p(a, b | ω_c) p(ω_c)
where C is the class. Since the motion trend a and the amplitude b are mutually independent,
C = argmax_{c=1,2,3} p(ω_c | a, b)
  = argmax_{c=1,2,3} p(a, b | ω_c) p(ω_c)
  = argmax_{c=1,2,3} p(a | ω_c) p(b | ω_c) p(ω_c)
Jitter can be regarded as zero-mean Gaussian noise, so the likelihood functions of the staring shooting state are defined as
p(a | ω_c = 'stare') = N(0, σ²_stare,a)
p(b | ω_c = 'stare') = N(0, σ²_stare,b)
The stabilization application background served by the system is related to the camera carrier and to the camera configuration. For uniform motion shooting, the carrier speed is bounded: the motion speed of the carrier cannot be infinitely large, and the larger the speed, the smaller its probability of occurrence; an excessively large motion speed also causes motion blur and cannot be used for stabilization. Therefore the motion speeds of different carriers are usually concentrated within a certain range, so a log-normal distribution can be adopted as the likelihood function of the motion amplitude, with μ being the average motion speed:
p(a | ω_c = 'scan') = N(0, σ²_scan,a)
p(b | ω_c = 'scan') = lnN(μ, σ²_scan,b)
The motion compensation coefficient and the jitter compensation coefficient are set according to the motion state. When ω_c = 'stare', only jitter needs to be compensated, i.e. α = 1, β = 0. In the scanning shooting state, besides compensating jitter, the motion also needs to be compensated, so α = 1, β = 1.
Step 6. Calculate the output position of the current frame image from the three-stage motion compensation quantities
After the parameters of the motion compensation function have been calculated, the output position of the current frame is computed by
x_n = x_{n-1} + err(n)
where x_n is the absolute position parameter of the current frame with respect to the output window, x_{n-1} is the absolute position parameter of the previous frame with respect to the output window, and err(n) is the motion compensation function between the two frame images. The output position of the current frame image is the value of the motion parameter x_n.
The motion compensation effect and the stabilization effect of the invention on video sequences with compound motion relations are further illustrated by experiments. Fig. 3 compares the scene motion trajectory of the video after stabilization by the invention with the trajectories obtained without stabilization and with the traditional filtering method; the figure shows the "time-distance" curve of the horizontal motion, where the abscissa is time and the ordinate is the distance along the x-axis with respect to the origin. Fig. 3(a) is the motion trajectory without stabilization, which contains static, accelerating, decelerating and uniform motion segments. Fig. 3(b) is the scene motion trajectory after stabilization by the traditional filtering method. Fig. 3(c) is the motion trajectory output after stabilization by the method of the invention. As can be seen from Fig. 3(c), because the invention recognizes the class of the trajectory, it can recover the video of the original shooting state more accurately: the staring state appears in the motion trajectory as a horizontal straight line (no displacement), whereas the filtering method introduces discrete points because the motion states of preceding and succeeding frames change, so it cannot achieve real video stabilization during staring shooting. The experimental results show that the method of the invention adapts well to changes of motion state.
Fig. 4 shows the stabilization effect of the motion compensation method of the invention on video images with compound motion relations: Figs. 4(a) and 4(c) are the original jittery video, and Figs. 4(b) and 4(d) are the results of stabilizing the video image sequences with the motion compensation method based on motion state recognition proposed by the invention. The original video image sequence contains not only translation, rotation, scale and view transformations but also multiple motion states and shooting modes; for such complex video motion image sequences the method of the invention obtains a good stabilization effect and meets the real-time rate requirement.
Under VC2005, the invention achieves a stabilization speed of 30 frames per second for 176 × 144 video image sequences.

Claims (1)

  1. A motion compensation method based on motion state recognition in real-time electronic image stabilization, characterized in that its concrete steps are as follows:
    Step 1. Training of the likelihood function of each motion type
    To recognize the motion state and type of the shot scene, the likelihood function of each motion type needs to be trained before stabilization; in step 5 below, the maximum a posteriori probability method is adopted to estimate the motion state, so the likelihood function of each state must be obtained by sample training; the concrete procedure is as follows:
    (1) Collect training video samples
    Training samples are collected according to the application background, e.g. vehicle-mounted, airborne or hand-held equipment: video samples of both scanning shooting and staring shooting are collected for this application background, and each video sample is a video clip containing only one motion type;
    (2) Inter-frame motion estimation and shooting-state labelling
    A real-time motion estimation method based on polar spatial-layout feature descriptors is applied to the training samples for inter-frame motion estimation, and the inter-frame motion estimation parameters of each frame are recorded; the inter-frame motion parameters comprise vertical motion, horizontal motion and rotation; the concrete steps are:
    1) construct a Gaussian scale space and extract feature points;
    2) build the polar spatial-layout feature descriptor;
    3) match the local feature points;
    4) compute the inter-frame motion model and obtain the inter-frame motion estimation parameters;
    (3) Calculate the linear fitting parameters
    Suppose the i-th group of training samples consists of m frames and assume the motion trajectory is linear in the frame number, i.e. expressed as
    y_t = a_i · t + b_i,  t = 1, ..., m
    The fitting parameters a_i and b_i are obtained by solving the above equation with least squares, where y_t is the inter-frame motion parameter obtained by motion estimation;
    (4) Calculate the likelihood function parameters
    According to the definitions in step 5 below, the parameters of the two likelihood functions p(a|ω_c) and p(b|ω_c) of each class ω_c need to be calculated before stabilization, i.e.:
    p(a | ω_c = 'stare') = N(0, σ²_stare,a)
    p(b | ω_c = 'stare') = N(0, σ²_stare,b)
    p(a | ω_c = 'scan') = N(0, σ²_scan,a)
    p(b | ω_c = 'scan') = lnN(μ, σ²_scan,b)
    where the likelihood functions adopt the Gaussian function and the log-normal function, whose parameters are the mean μ and the variance σ²;
    Step 2. Build the motion compensation model
    Define the motion model of the video scene, where x_n is the absolute position parameter of the current frame image with respect to the output window; x_{n-1} is the absolute position parameter of the previous frame with respect to the output window; err(n) is the motion part that needs to be compensated; the position relation between the two frames is:
    x_n = x_{n-1} + err(n)
    The motion compensation model is defined as:
    err(n) = α·wrap(n) + β·motion(n) + γ·departure(n)
    where wrap(n) is the jitter component caused by the inter-frame motion, corresponding to the degradation caused by random jitter, and its concrete value is computed by the image registration method in motion estimation; wrap(n) is the motion component that must be removed in a stabilization system;
    motion(n) is the ideal motion caused by the camera carrier; the direction and speed of the camera motion are obtained by analysing the inter-frame registration parameters, so the motion of the camera can be compensated;
    departure(n) is the deviation compensation part, which keeps the output image at the window centre and corrects slow image drift in time; in the staring state, because of the unsteadiness of shooting and the accumulated error of the algorithm, the viewpoint slowly drifts away from the output window, so a large amount of video data is lost and cannot be corrected in time; this compensation part solves that problem well;
    α, β and γ are the weights of the respective compensations, used to weigh the proportion of each part's influence on the overall motion compensation err(n);
    Step 3. Calculate the offset of the output position of the current frame image relative to the window centre point
    The distance between the output window and the output image position represents the degree of offset; to keep the motion curve of the image parameters smooth, every image frame must undergo deviation correction; departure(n) is defined as the absolute value of the deviation between the output video and the output window, and the weight of the deviation correction is determined by the parameter γ:
    γ = 1 / (threshold − abs(departure)),  0 ≤ γ ≤ 1
    where threshold defines the boundary: the deviation is limited within the threshold range and threshold > abs(departure) × 2; the value of threshold is related to the size of the image;
    Step 4. Filter the motion components and calculate the ideal motion parameters and linear fitting parameters
    Define T_t^i as the inter-frame transformation matrix; let t be the frame number of the current frame to be smoothed, define its neighbouring frame set as N = {i : t−k ≤ i ≤ t+k}, and let T_t^i be the transformation matrix between a neighbouring frame and the current frame image; then the smoothed transformation matrix motion(n) of the current frame is expressed as:
    motion(n) = Σ_{i∈N} T_t^i · G(i − t)
    where G(k) = (1 / (√(2π)·δ)) · e^{−k²/(2δ²)} is the Gaussian kernel function; motion(n) is the ideal motion trajectory with the high-frequency jitter removed and reflects the ideal motion of the camera carrier;
    Step 5. Recognize the motion state of the current frame image and set the motion compensation parameters
    The jitter compensation coefficient α and the ideal motion compensation coefficient β are determined by recognition; different motion forms, such as remaining static, uniform motion and variable motion, determine different compensation coefficients; the training samples provided by the user are video image sequences shot under the application background that contain jitter but no scene motion, and are used to compute the statistical characteristics of the jitter; the invention fits a linear function to the motion estimation parameters of adjacent frames and takes the parameters of this linear function as the sample; the function parameters embody not only the motion state but also the trend of the motion change;
    For the i-th group of samples, a sample pair is obtained by fitting a linear function to the inter-frame motion estimation parameters of m frames; their linear relation with time is
    y_t = a_i · t + b_i,  t = 1, ..., m
    Solving this equation by least squares yields the slope a_i and the intercept b_i, where a_i embodies the motion trend and b_i the motion amplitude; the class is derived by maximum a posteriori probability (MAP):
    C = argmax_{c=1,2,3} p(ω_c | a, b) = argmax_{c=1,2,3} p(a, b | ω_c) p(ω_c)
    where C is the class; since the motion trend a and the amplitude b are mutually independent,
    C = argmax_{c=1,2,3} p(ω_c | a, b)
      = argmax_{c=1,2,3} p(a, b | ω_c) p(ω_c)
      = argmax_{c=1,2,3} p(a | ω_c) p(b | ω_c) p(ω_c)
    If the jitter is regarded as zero-mean Gaussian noise, the likelihood functions of the staring shooting state are defined as:
    p(a | ω_c = 'stare') = N(0, σ²_stare,a)
    p(b | ω_c = 'stare') = N(0, σ²_stare,b)
    The stabilization application background served by the system is related to the camera carrier and to the camera configuration; for uniform motion shooting, the carrier speed is bounded: the motion speed of the carrier cannot be infinitely large, and the larger the speed, the smaller its probability of occurrence; an excessively large motion speed causes motion blur and cannot be used for stabilization; therefore the motion speeds of different carriers are usually concentrated within a certain range, so a log-normal distribution is adopted as the likelihood function of the motion amplitude, with μ being the average motion speed:
    p(a | ω_c = 'scan') = N(0, σ²_scan,a)
    p(b | ω_c = 'scan') = lnN(μ, σ²_scan,b)
    The motion compensation coefficient and the jitter compensation coefficient are set according to the motion state: when ω_c = 'stare', only jitter needs to be compensated, i.e. α = 1, β = 0; in the scanning shooting state, besides compensating jitter, the motion also needs to be compensated, so α = 1, β = 1;
    Step 6. Calculate the output position of the current frame image from the three-stage motion compensation quantities
    After the parameters of the motion compensation function have been calculated, the output position of the current frame is computed by:
    x_n = x_{n-1} + err(n)
    where x_n is the absolute position parameter of the current frame with respect to the output window; x_{n-1} is the absolute position parameter of the previous frame with respect to the output window; err(n) is the motion compensation function between the two frame images; the output position of the current frame image is the value of the motion parameter x_n.
CN 200910081057 2009-04-01 2009-04-01 Movement compensation method of real time electronic steady image based on motion state recognition Pending CN101511024A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910081057 CN101511024A (en) 2009-04-01 2009-04-01 Movement compensation method of real time electronic steady image based on motion state recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910081057 CN101511024A (en) 2009-04-01 2009-04-01 Movement compensation method of real time electronic steady image based on motion state recognition

Publications (1)

Publication Number Publication Date
CN101511024A true CN101511024A (en) 2009-08-19

Family

ID=41003253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910081057 Pending CN101511024A (en) 2009-04-01 2009-04-01 Movement compensation method of real time electronic steady image based on motion state recognition

Country Status (1)

Country Link
CN (1) CN101511024A (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096912A (en) * 2009-12-14 2011-06-15 北京中星微电子有限公司 Method and device for processing image
CN102148934A (en) * 2011-04-02 2011-08-10 北京理工大学 Multi-mode real-time electronic image stabilizing system
CN102202164A (en) * 2011-05-20 2011-09-28 长安大学 Motion-estimation-based road video stabilization method
CN102377730A (en) * 2010-08-11 2012-03-14 中国电信股份有限公司 Audio/video signal processing method and mobile terminal
CN101742122B (en) * 2009-12-21 2012-06-06 汉王科技股份有限公司 Method and system for removing video jitter
CN102509285A (en) * 2011-09-28 2012-06-20 宇龙计算机通信科技(深圳)有限公司 Processing method and system for shot fuzzy picture and shooting equipment
CN102572278A (en) * 2010-12-23 2012-07-11 三星电子株式会社 Digital image stabilization device and method using adaptive filter
CN101771811B (en) * 2010-01-14 2013-04-17 北京大学 Avionic image stabilizer
CN103079037A (en) * 2013-02-05 2013-05-01 哈尔滨工业大学 Self-adaptive electronic image stabilization method based on long-range view and close-range view switching
CN103813056A (en) * 2012-11-15 2014-05-21 浙江大华技术股份有限公司 Image stabilization method and device
CN103841297A (en) * 2012-11-23 2014-06-04 中国航天科工集团第三研究院第八三五七研究所 Electronic image-stabilizing method suitable for resultant-motion camera shooting carrier
CN104349039A (en) * 2013-07-31 2015-02-11 展讯通信(上海)有限公司 Video anti-jittering method and apparatus
CN104361600A (en) * 2014-11-25 2015-02-18 苏州大学 Motion recognition method and system
CN105446351A (en) * 2015-11-16 2016-03-30 杭州码全信息科技有限公司 Robotic airship system capable of locking target area for observation based on autonomous navigation
CN105812788A (en) * 2016-03-24 2016-07-27 北京理工大学 Video stability quality assessment method based on interframe motion amplitude statistics
WO2016197898A1 (en) * 2015-06-09 2016-12-15 同济大学 Image encoding and decoding method, image processing device, and computer storage medium
CN106655939A (en) * 2016-08-31 2017-05-10 上海交通大学 Permanent magnet synchronous motor control method based on motion trend multi-model adaptive mixed control
CN107197270A (en) * 2011-11-07 2017-09-22 佳能株式会社 Make the method and apparatus that the coding/decoding for the compensation skew of one group of reconstruction sample of image is optimized
CN107564084A (en) * 2017-08-24 2018-01-09 腾讯科技(深圳)有限公司 A kind of cardon synthetic method, device and storage device
CN109492637A (en) * 2018-11-06 2019-03-19 西安艾润物联网技术服务有限责任公司 Method of adjustment, identifier terminal and the readable storage medium storing program for executing of cog region
CN110062165A (en) * 2019-04-22 2019-07-26 联想(北京)有限公司 Method for processing video frequency, device and the electronic equipment of electronic equipment
CN110248048A (en) * 2019-06-21 2019-09-17 苏宁云计算有限公司 A kind of detection method and device of video jitter
CN110602393A (en) * 2019-09-04 2019-12-20 南京博润智能科技有限公司 Video anti-shake method based on image content understanding
US10575022B2 (en) 2015-06-09 2020-02-25 Zte Corporation Image encoding and decoding method, image processing device and computer storage medium
CN108337428B (en) * 2017-01-20 2020-11-06 佳能株式会社 Image stabilization apparatus, control method thereof, image capturing apparatus, and storage medium
CN112136314A (en) * 2018-05-18 2020-12-25 高途乐公司 System and method for stabilizing video
CN112957062A (en) * 2021-05-18 2021-06-15 雅安市人民医院 Vehicle-mounted CT imaging system and imaging method based on 5G transmission

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096912A (en) * 2009-12-14 2011-06-15 北京中星微电子有限公司 Method and device for processing image
CN101742122B (en) * 2009-12-21 2012-06-06 汉王科技股份有限公司 Method and system for removing video jitter
CN101771811B (en) * 2010-01-14 2013-04-17 北京大学 Avionic image stabilizer
CN102377730A (en) * 2010-08-11 2012-03-14 中国电信股份有限公司 Audio/video signal processing method and mobile terminal
CN102572278B (en) * 2010-12-23 2016-08-31 三星电子株式会社 Utilize the digital image stabilization method and device of adaptive-filtering
CN102572278A (en) * 2010-12-23 2012-07-11 三星电子株式会社 Digital image stabilization device and method using adaptive filter
CN102148934A (en) * 2011-04-02 2011-08-10 北京理工大学 Multi-mode real-time electronic image stabilizing system
CN102148934B (en) * 2011-04-02 2013-02-06 北京理工大学 Multi-mode real-time electronic image stabilizing system
CN102202164A (en) * 2011-05-20 2011-09-28 长安大学 Motion-estimation-based road video stabilization method
CN102202164B (en) * 2011-05-20 2013-03-20 长安大学 Motion-estimation-based road video stabilization method
CN102509285A (en) * 2011-09-28 2012-06-20 宇龙计算机通信科技(深圳)有限公司 Processing method and system for shot fuzzy picture and shooting equipment
US10743033B2 (en) 2011-11-07 2020-08-11 Canon Kabushiki Kaisha Method and device for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image
US10462493B2 (en) 2011-11-07 2019-10-29 Canon Kabushiki Kaisha Method and device for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image
US10575020B2 (en) 2011-11-07 2020-02-25 Canon Kabushiki Kaisha Method and device for providing compensation offsets for a set of reconstructed samples of an image
CN107197270B (en) * 2011-11-07 2020-03-17 佳能株式会社 Method and apparatus for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image
CN107295337A (en) * 2011-11-07 2017-10-24 佳能株式会社 Make the method and apparatus that the coding/decoding for the compensation skew of one group of reconstruction sample of image is optimized
CN107197270A (en) * 2011-11-07 2017-09-22 佳能株式会社 Make the method and apparatus that the coding/decoding for the compensation skew of one group of reconstruction sample of image is optimized
US11076173B2 (en) 2011-11-07 2021-07-27 Canon Kabushiki Kaisha Method and device for providing compensation offsets for a set of reconstructed samples of an image
US10771819B2 (en) 2011-11-07 2020-09-08 Canon Kabushiki Kaisha Sample adaptive offset filtering
CN107295337B (en) * 2011-11-07 2020-03-31 佳能株式会社 Method and apparatus for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image
CN103813056B (en) * 2012-11-15 2016-03-16 浙江大华技术股份有限公司 A kind of digital image stabilization method and device
CN103813056A (en) * 2012-11-15 2014-05-21 浙江大华技术股份有限公司 Image stabilization method and device
CN103841297B (en) * 2012-11-23 2016-12-07 中国航天科工集团第三研究院第八三五七研究所 A kind of electronic image stabilization method being applicable to resultant motion shooting carrier
CN103841297A (en) * 2012-11-23 2014-06-04 中国航天科工集团第三研究院第八三五七研究所 Electronic image-stabilizing method suitable for resultant-motion camera shooting carrier
CN103079037B (en) * 2013-02-05 2015-06-10 哈尔滨工业大学 Self-adaptive electronic image stabilization method based on long-range view and close-range view switching
CN103079037A (en) * 2013-02-05 2013-05-01 哈尔滨工业大学 Self-adaptive electronic image stabilization method based on long-range view and close-range view switching
CN104349039A (en) * 2013-07-31 2015-02-11 展讯通信(上海)有限公司 Video anti-jittering method and apparatus
CN104361600B (en) * 2014-11-25 2017-08-25 苏州大学 motion recognition method and system
CN104361600A (en) * 2014-11-25 2015-02-18 苏州大学 Motion recognition method and system
US10575022B2 (en) 2015-06-09 2020-02-25 Zte Corporation Image encoding and decoding method, image processing device and computer storage medium
WO2016197898A1 (en) * 2015-06-09 2016-12-15 同济大学 Image encoding and decoding method, image processing device, and computer storage medium
CN105446351A (en) * 2015-11-16 2016-03-30 杭州码全信息科技有限公司 Robotic airship system capable of locking target area for observation based on autonomous navigation
CN105812788A (en) * 2016-03-24 2016-07-27 北京理工大学 Video stability quality assessment method based on interframe motion amplitude statistics
CN106655939A (en) * 2016-08-31 2017-05-10 上海交通大学 Permanent magnet synchronous motor control method based on motion trend multi-model adaptive mixed control
CN106655939B (en) * 2016-08-31 2020-05-22 上海交通大学 Permanent magnet synchronous motor control method based on motion trend multi-model adaptive hybrid control
CN108337428B (en) * 2017-01-20 2020-11-06 佳能株式会社 Image stabilization apparatus, control method thereof, image capturing apparatus, and storage medium
CN107564084A (en) * 2017-08-24 2018-01-09 腾讯科技(深圳)有限公司 A kind of cardon synthetic method, device and storage device
CN107564084B (en) * 2017-08-24 2022-07-01 腾讯科技(深圳)有限公司 Method and device for synthesizing motion picture and storage equipment
CN112136314A (en) * 2018-05-18 2020-12-25 高途乐公司 System and method for stabilizing video
CN109492637A (en) * 2018-11-06 2019-03-19 西安艾润物联网技术服务有限责任公司 Method of adjustment, identifier terminal and the readable storage medium storing program for executing of cog region
CN110062165A (en) * 2019-04-22 2019-07-26 联想(北京)有限公司 Method for processing video frequency, device and the electronic equipment of electronic equipment
CN110062165B (en) * 2019-04-22 2021-09-14 联想(北京)有限公司 Video processing method and device of electronic equipment and electronic equipment
CN110248048A (en) * 2019-06-21 2019-09-17 苏宁云计算有限公司 A kind of detection method and device of video jitter
CN110248048B (en) * 2019-06-21 2021-11-09 苏宁云计算有限公司 Video jitter detection method and device
CN110602393A (en) * 2019-09-04 2019-12-20 南京博润智能科技有限公司 Video anti-shake method based on image content understanding
CN112957062A (en) * 2021-05-18 2021-06-15 雅安市人民医院 Vehicle-mounted CT imaging system and imaging method based on 5G transmission

Similar Documents

Publication Publication Date Title
CN101511024A (en) Movement compensation method of real time electronic steady image based on motion state recognition
Stoffregen et al. Event cameras, contrast maximization and reward functions: An analysis
CN110675435B (en) Vehicle trajectory tracking method based on Kalman filtering and chi 2 detection smoothing processing
US11215700B2 (en) Method and system for real-time motion artifact handling and noise removal for ToF sensor images
US9947077B2 (en) Video object tracking in traffic monitoring
KR101830804B1 (en) Digital image stabilization method with adaptive filtering
CN106991650A (en) A kind of method and apparatus of image deblurring
US9292934B2 (en) Image processing device
TW201319954A (en) Image stabilization method and image stabilization device
US20080100716A1 (en) Estimating A Point Spread Function Of A Blurred Digital Image Using Gyro Data
US9336577B2 (en) Image processing apparatus for removing haze contained in still image and method thereof
CN110580713A (en) Satellite video target tracking method based on full convolution twin network and track prediction
CN101551901B (en) Method for compensating and enhancing dynamic shielded image in real time
CN102202164A (en) Motion-estimation-based road video stabilization method
CN105957108A (en) Passenger flow volume statistical system based on face detection and tracking
Hsu et al. Moving camera video stabilization using homography consistency
CN109029425A (en) A kind of fuzzy star chart restored method filtered using region
CN105100546A (en) Movement estimation method and device
CN111539872A (en) Real-time electronic image stabilization method for video image under random jitter interference
CN107886526A (en) Sequence image weak and small target detection method based on time domain filtering
CN107506753B (en) Multi-vehicle tracking method for dynamic video monitoring
CN111105429B (en) Integrated unmanned aerial vehicle detection method
CN105869108B (en) A kind of method for registering images in the mobile target detecting of moving platform
Thillainayagi Video stabilization technique for thermal infrared Aerial surveillance
Ryu et al. Video stabilization for robot eye using IMU-aided feature tracker

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20090819