CN111355881A - Video stabilization method for simultaneously eliminating rolling artifacts and jitters - Google Patents
- Publication number: CN111355881A
- Application number: CN201911260902.8A
- Authority: CN (China)
- Prior art keywords: frame, grid, motion, video, intra
- Legal status: Granted
Classifications
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/689—Motion occurring during a rolling shutter mode
- G06T5/77—Retouching; inpainting; scratch removal
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T2207/10016—Video; image sequence
Abstract
The invention discloses a video stabilization method that simultaneously eliminates rolling artifacts and jitter, comprising the following steps: 1) grid the video frames; 2) estimate inter-frame motion; 3) construct a data fidelity term and a motion-smoothing regularization term; 4) build a joint inter-frame/intra-frame motion optimization model; 5) estimate intra-frame motion; 6) set an adaptive sliding window; 7) compute adaptive weights; 8) solve for the restoration transformation; 9) generate the de-rolling, stabilized video. The method models the inter-frame motion of each grid of a gridded video frame with a homography transformation and the intra-frame motion with a rigid transformation, builds a joint inter-frame/intra-frame motion model, and directly solves for the restoration transformation that removes both rolling artifacts and jitter. It removes rolling artifacts and stabilizes the video at the same time, avoids over-smoothing during de-jittering, and can be widely applied to CMOS-camera video stabilization scenarios such as mobile-phone shooting, drone shooting, and vehicle navigation.
Description
Technical Field
The invention relates to jittery-video stabilization, and in particular to a video stabilization method that simultaneously eliminates rolling artifacts and jitter.
Background
In the field of video processing and display, video captured with CMOS cameras on vehicle-mounted platforms, drone or ship camera systems, handheld devices, and the like often exhibits rolling artifacts due to the camera's row-by-row exposure, and random disturbances during shooting readily cause video jitter. On the one hand, these degradations cause visual fatigue in observers and hinder the understanding of video content, leading to misjudgments or missed detections; on the other hand, jitter and rolling effects hamper subsequent processing of the videos, such as tracking, recognition, and pattern analysis.
Currently, many methods process rolling artifacts and video jitter separately, such as the robust mesh repair method [Yeong Jun Koh, Chulwoo Lee, and Chang-Su Kim. 2015. Video stabilization based on feature trajectory augmentation and selection and robust mesh grid warping. IEEE Transactions on Image Processing 24, 12 (2015), 5260-5273] and the subspace method [Feng Liu, Michael Gleicher, Jue Wang, Hailin Jin, and Aseem Agarwala. 2011. Subspace video stabilization. ACM Transactions on Graphics 30, 1 (2011), 4]. The robust mesh repair method uses a separated processing framework: it first estimates motion trajectories free of rolling artifacts, and then applies motion smoothing to the corrected feature trajectories. The subspace method treats rolling artifacts as a form of structured noise and removes them implicitly during video stabilization.
However, both the robust mesh repair method and the subspace method handle rolling artifacts and video jitter as separate problems, and their algorithmic pipelines are correspondingly complex.
Disclosure of Invention
The invention aims to provide a video stabilization method for simultaneously eliminating rolling artifacts and jitters.
The technical solution for realizing the purpose of the invention is as follows: a video stabilization method for simultaneously eliminating rolling artifacts and jitter, the method comprising the steps of:
Construct the inter-frame/intra-frame data fidelity term P(F) according to the fidelity between the sum of the intra-frame motions within the same grid column and the inter-frame motion;
Construct the intra-frame motion-smoothing regularization term Q(F) according to the similarity of intra-frame motion under dense sampling.
Step 4, construct the joint inter-frame/intra-frame motion optimization model from the constraint terms built in step 3: argmin_F P(F) + λQ(F), where the regularization parameter λ > 0;
Step 9, generate the de-rolling, stabilized video: according to the restoration transformation matrix, redraw each grid of each video frame, finally producing a stabilized video sequence free of rolling artifacts.
Further, when the data fidelity term is constructed in step 3, the inter-frame motion of a grid is shared by the other grids in the same grid column. This yields an identity whose left side is the grid's intra-frame motion accumulated over 8 unit times and whose right side is the grid's corresponding inter-frame motion; from this property the data fidelity term P(F) between the sum of the intra-frame motions and the inter-frame motion can be designed.
Further, when the motion-smoothing regularization term is constructed in step 3, intra-frame motion is similar under high-frequency sampling: the intra-frame motion of a grid should be similar to that of the grid in the next grid row of the same grid column. From this similarity the smoothing regularization term Q(F) can be designed.
Further, the rigid transformation in step 5 is defined as a three-degree-of-freedom transformation in which θ, x, and y denote the rotation angle, horizontal displacement, and vertical displacement between adjacent grids, respectively; the transformation has an additive property over these parameters.
Further, because the rigid transformation in step 5 is additive, the optimization model in step 4 can be decomposed into optimization models in the three degrees of freedom. For horizontal displacement, with f denoting the intra-frame horizontal displacement and r the inter-frame horizontal displacement, the data fidelity term and motion-smoothing term of step 3 become P(f) and Q(f), and the optimization model becomes argmin_f P(f) + λQ(f); the horizontal displacement of each grid is obtained from this new model, and the vertical displacement and rotation angle are solved in the same way. The model can be expressed in matrix-vector form.
further, when the adaptive weight is constructed in step 7Then, define the gridTo the gridThe time distance and the space distance are respectively | t-k |, and then the horizontal distance between the two grids,is the vertical distance between the two grids.
Further, step 8 defines the intra-frame motion accumulated from the grid in row i, column j of the k-th frame to the frame's global shutter point. Taking the 4th grid row as the global shutter point, the intra-frame motion restoration matrix of the grid in row i, column j of the t-th frame I_t can be expressed accordingly; applying it removes the rolling artifact of each grid.
Compared with the prior art, the invention has the following notable advantages: (1) for low-quality video degraded by simultaneous rolling artifacts and jitter, the prior art typically requires two separate preprocessing passes, one for rolling-artifact removal and one for de-jittering, whereas the invention builds a joint optimization model and, through intra-frame and inter-frame motion estimation on gridded video frames and full-time adjustment of the adaptive sliding window, overcomes both under-smoothing and over-smoothing in video stabilization; (2) the method fully exploits the correlation between inter-frame and intra-frame motion and adaptively tunes the weight parameters, so it performs well at removing both jitter and rolling artifacts; (3) the method models inter-frame motion with a homography transformation and intra-frame motion with a rigid transformation for each grid of each frame, builds a joint inter-frame/intra-frame motion model, directly solves for the restoration transformation that removes rolling artifacts and jitter, and can be widely applied to CMOS-camera video stabilization such as mobile-phone shooting, drone shooting, and vehicle navigation.
Drawings
Fig. 1 is a flow chart of a video stabilization method for simultaneously eliminating rolling artifacts and jitter according to the present invention.
Fig. 2(a) is a first frame diagram of a first test video.
Fig. 2(b) is a second frame diagram of the first test video.
Fig. 2(c) is a graph of the residual between the original two frames.
Fig. 2(d) is a graph of the residual between the two frames after de-jittering only, using the proposed method.
Fig. 2(e) is a diagram of the residual between two frames after being processed by the robust mesh repair method.
Fig. 2(f) is a diagram of the residual between the two frames after the proposed method removes jitter and rolling artifacts simultaneously.
Fig. 3(a) is an original feature point trajectory diagram.
Fig. 3(b) is a feature point trajectory diagram after processing by the robust mesh repairing method.
Fig. 3(c) is a feature point trajectory diagram after processing by the subspace method.
FIG. 3(d) is a trace diagram of feature points after processing by the method of the present invention.
Fig. 4(a) is a diagram of three randomly chosen original frames from a test video.
Fig. 4(b) is a result diagram of the three frames after processing by the robust mesh repair method.
Fig. 4(c) is a result diagram of the three frames after processing by the subspace method.
FIG. 4(d) is a graph of the results of three frame images processed by the method of the present invention.
Fig. 5 (1)-(10) shows the 10 test videos containing jitter and rolling artifacts used in the experiments of the present invention.
Fig. 6 is a chart of the visual evaluation results from the 35 users.
Detailed Description
For video shot with CMOS cameras, the invention establishes an estimation model of inter-frame and intra-frame motion and provides a joint optimization method that eliminates rolling artifacts and jitter at the same time. The implementation of the invention is described in detail below with reference to Fig. 1:
Construct the inter-frame/intra-frame data fidelity term P(F) according to the fidelity between the sum of the intra-frame motions within the same grid column and the inter-frame motion;
Construct the intra-frame motion-smoothing regularization term Q(F) according to the similarity of intra-frame motion under dense sampling.
Step 3.1, when constructing the data fidelity term, note that the inter-frame motion of a grid is shared by the other grids in the same grid column. This yields an identity whose left side is the grid's intra-frame motion accumulated over 8 unit times and whose right side is the grid's corresponding inter-frame motion; from this property the data fidelity term P(F) between the sum of the intra-frame motions and the inter-frame motion can be designed.
Step 3.2, when constructing the motion-smoothing regularization term, use the similarity that intra-frame motion should have under high-frequency sampling: the intra-frame motion of a grid should be similar to that of the grid in the next grid row of the same grid column. From this similarity the smoothing regularization term Q(F) can be designed.
Step 4, construct the joint inter-frame/intra-frame motion optimization model from the constraint terms built in step 3: argmin_F P(F) + λQ(F), where the regularization parameter λ > 0.
Step 5.1, the rigid transformation is defined as a three-degree-of-freedom transformation in which θ, x, and y denote the rotation angle, horizontal displacement, and vertical displacement between adjacent grids, respectively; the transformation is additive over these parameters.
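The additivity that Step 5.1 relies on can be checked numerically. The sketch below (plain NumPy; the `rigid` helper and the sample angles are illustrative, not from the patent) builds the standard three-degree-of-freedom rigid transform and confirms that rotation angles add exactly under composition, which is what allows the joint model to be split per parameter:

```python
import numpy as np

def rigid(theta, x, y):
    """Standard 3-DOF rigid transform: rotation by theta plus translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Compose two rigid transforms with hypothetical small per-grid parameters.
a = rigid(0.01, 1.0, 2.0)
b = rigid(0.02, 0.5, -1.0)
ab = a @ b

# The rotation angle of the composition is exactly 0.01 + 0.02; for the
# small per-grid angles of camera shake the translations are also close
# to additive, which is the property the optimization model exploits.
theta_ab = np.arctan2(ab[1, 0], ab[0, 0])
```

For small rotations the translation components also compose almost additively, so treating all three parameters as additive is a reasonable per-grid approximation.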
Step 5.2, because the rigid transformation is additive, the optimization model in step 4 can be decomposed into optimization models in the three degrees of freedom. For horizontal displacement, with f denoting the intra-frame horizontal displacement and r the inter-frame horizontal displacement, the data fidelity term and motion-smoothing term of step 3 become P(f) and Q(f), and the optimization model becomes argmin_f P(f) + λQ(f); the horizontal displacement of each grid is obtained from this new model, and the same procedure extends to the vertical displacement and rotation angle. The model can be expressed in matrix-vector form.
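As a concrete illustration of the per-axis model, the following NumPy sketch solves argmin_f ||Af - r||^2 + λ||Df||^2 for the horizontal displacements of one grid column via the normal equations. The matrix shapes, the summation matrix A, the difference matrix D, and the sample value of r are assumptions made for illustration; λ = 1 follows the simulation settings:

```python
import numpy as np

rows = 8          # grid rows per column (8x8 gridding)
lam = 1.0         # regularization parameter λ (set to 1 in the experiments)

# Data fidelity: A sums the per-row intra-frame displacements of one grid
# column so that their total matches the inter-frame displacement r.
A = np.ones((1, rows))
r = np.array([4.0])                  # hypothetical inter-frame displacement

# Motion smoothing: D penalizes differences between adjacent grid rows.
D = np.diff(np.eye(rows), axis=0)

# Normal equations of  argmin_f ||A f - r||^2 + λ ||D f||^2.
f = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ r)
```

With these toy matrices the minimizer is the uniform split f_i = r/8, since a constant column displacement zeroes the smoothing term while exactly matching the inter-frame motion.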
When constructing the adaptive weights, the temporal distance between a grid and the global-shutter-point grid of the k-th frame is defined as |t - k|, and the spatial distance is given by the horizontal and vertical distances between the two grids.
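A minimal sketch of the adaptive weighting follows, assuming (as claim 7 suggests) that the weight of frame k in the window around frame t is the product of a temporal Gaussian on |t - k| and a spatial Gaussian on the grid distance, normalized by the L1 norm. The standard deviations 6 and 30 come from the simulation settings; the function names and the spatial distance value are illustrative:

```python
import numpy as np

def gaussian(d, sigma):
    """Unnormalized Gaussian kernel value for distance d."""
    return np.exp(-np.asarray(d, dtype=float) ** 2 / (2.0 * sigma ** 2))

def adaptive_weights(t, window, sigma_t=6.0, sigma_s=30.0, spatial_dist=0.0):
    """Weight of each frame k in [t-window, t+window]: product of a temporal
    Gaussian on |t - k| and a spatial Gaussian on the grid-to-grid distance,
    L1-normalized so the weights sum to 1."""
    ks = np.arange(t - window, t + window + 1)
    w = gaussian(np.abs(ks - t), sigma_t) * gaussian(spatial_dist, sigma_s)
    return w / np.abs(w).sum()

w = adaptive_weights(t=50, window=30)   # window length 2s + 1 with s = 30
```

The weights peak at the current frame and decay smoothly toward the window edges, which is what prevents over-smoothing for step-like motion.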
Define the intra-frame motion accumulated from the grid in row i, column j of the k-th frame to the grid column's global shutter point. Taking the 4th grid row as the global shutter point, the intra-frame motion restoration matrix of the grid in row i, column j of the t-th frame I_t can be expressed accordingly; applying it removes the rolling artifact of each grid.
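The intra-frame accumulation to the global-shutter row can be sketched for the pure-translation case as follows. The per-row displacement value 0.2 and the helper name are hypothetical; row index 3 corresponds to the 4th grid row used as the global shutter point in the text:

```python
import numpy as np

def accumulate_to_shutter(f, shutter_row=3):
    """f[i] is the intra-frame displacement between grid row i and row i+1
    (one unit of exposure time).  Returns, for each grid row, the displacement
    accumulated from that row to the global-shutter row (index 3 = 4th row).
    Compensating each row by this amount removes its rolling distortion."""
    pos = np.concatenate(([0.0], np.cumsum(f)))   # displacement of each row vs. row 0
    return pos[shutter_row] - pos                 # row-to-shutter accumulation

# Hypothetical uniform intra-frame displacement of 0.2 px per row interval.
acc = accumulate_to_shutter(np.full(7, 0.2))
```

Rows above the shutter row receive a positive correction and rows below a negative one, so after compensation all rows behave as if exposed at the shutter row's instant.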
Step 9, generate the de-rolling, stabilized video: according to the restoration transformation matrix, redraw each grid of each video frame, finally producing a stabilized video sequence free of rolling artifacts.
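Redrawing a grid under the restoration transformation amounts to mapping its corner points through a 3 × 3 matrix in homogeneous coordinates (a full renderer would then warp the pixels inside each quad). A hedged sketch, in which the matrix values and grid size are made up for illustration:

```python
import numpy as np

def warp_points(B, pts):
    """Apply a 3x3 restoration transform B to an (n, 2) array of grid-corner
    coordinates using homogeneous coordinates."""
    h = np.hstack([pts, np.ones((len(pts), 1))])
    out = h @ B.T
    return out[:, :2] / out[:, 2:3]

# Hypothetical restoration transform: small rotation plus translation.
theta = 0.01
B = np.array([[np.cos(theta), -np.sin(theta),  2.0],
              [np.sin(theta),  np.cos(theta), -1.0],
              [0.0,            0.0,            1.0]])

corners = np.array([[0.0, 0.0], [80.0, 0.0], [80.0, 60.0], [0.0, 60.0]])
new_corners = warp_points(B, corners)
```

Repeating this per grid, per frame, yields the redrawn stabilized sequence; homogeneous division makes the same routine work for the homography case as well.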
The effect of the invention can be further illustrated by the following simulation experiment:
(1) simulation conditions
The simulation experiments use ten groups of jittery video data containing rolling artifacts and were run in Matlab R2012 under Windows 7, on a Xeon W3520 CPU (2.66 GHz) with 4 GB of memory. The parameters were initialized as follows: the regularization parameter λ is set to 1, the standard deviations of the two Gaussian functions are 6 and 30, respectively, and the window length is 2s + 1 with s = 30.
The performance of the method is assessed qualitatively through users' subjective visual experience. In the experiment, 35 participants scored the output videos of the different stabilization methods; for fairness, participants chose the video with the best visual quality without knowing which method produced it.
(2) Simulation content
The de-jittering performance of the algorithm is tested on real jittery video data; the test videos are jittery videos containing rolling-shutter artifacts. To evaluate the proposed video stabilization method, it is compared with current mainstream methods: the robust mesh repair method and the subspace method.
(3) Analysis of simulation experiment results
Figs. 2(a) to 2(f) are residual maps between two frames obtained by different rolling-artifact-removal methods, Figs. 3(a) to 3(d) are feature-point trajectories produced by different video stabilization methods, Figs. 4(a) to 4(d) are results of different methods that remove rolling artifacts and jitter simultaneously, Fig. 5 shows the 10 test videos, and Fig. 6 shows the visual evaluation of the 10 test videos by the 35 users.
Figs. 2(a) to 2(f) visualize the restoration effect using the frame-difference method. Fig. 2(a) is the first frame of the first test video, Fig. 2(b) the second frame, Fig. 2(c) the residual between the two original frames, Fig. 2(d) the residual after de-jittering only with the proposed method, Fig. 2(e) the residual after processing by the robust mesh repair method, and Fig. 2(f) the residual after the proposed method removes jitter and rolling artifacts simultaneously. The residual between the two frames is clearly smallest when the proposed method removes jitter and rolling artifacts together, showing that the rolling artifacts and jitter contained in the video are removed well.
Fig. 3(a) is the original feature-point trajectory, Fig. 3(b) the trajectory after processing by the robust mesh repair method, Fig. 3(c) the trajectory after processing by the subspace method, and Fig. 3(d) the trajectory after processing by the method of the present invention. The method of the present invention handles video jitter well, and its adaptive weight design avoids over-smoothing when processing step-like motion, demonstrating excellent de-jittering performance.
Fig. 4(a) shows three randomly selected original frames of a test video, Fig. 4(b) the results of the three frames after the robust mesh repair method, Fig. 4(c) the results after the subspace method, and Fig. 4(d) the results after the method of the present invention. As Figs. 4(a)-4(d) show, the subspace method treats rolling artifacts only implicitly, as structured noise during de-jittering, and its results exhibit spatial distortion, whereas both the robust mesh repair method and the method of the present invention include a dedicated step for rolling artifacts and can correct objects to some extent. Moreover, thanks to the adaptive weights, the results of the invention preserve more image information under fast motion than the other two methods.
Fig. 6 shows the visual evaluation results of the 35 users. Because viewers' standards differ, it is difficult to design a single index that quantitatively assesses stabilization and rolling-shutter correction, so we conducted a user survey with 35 participants as a qualitative comparison. Fig. 5 shows the randomly selected test videos; for each test video, users picked the video they considered visually best without knowing which of the three methods (robust mesh repair, subspace, or the method of the present invention) produced it. For test videos 7, 9, and 10, no participant chose the subspace method, whose obvious geometric distortion greatly degrades the visual experience. More users preferred the proposed method, judging it to strike a better balance between motion smoothing and information preservation; it is therefore considered superior to the other two state-of-the-art methods and removes video jitter and rolling artifacts simultaneously.
Claims (7)
1. A video stabilization method for simultaneously eliminating rolling artifacts and jitter, the method comprising the steps of:
Step 1, grid the video frames: let the observed jittery video sequence containing rolling artifacts be {I_t | t ∈ [1, N]}, where N denotes the number of frames. Each video frame is divided into an 8 × 8 grid, the grid in row i, column j of the t-th frame I_t being indexed by i, j ∈ [1, 8], t ∈ [1, N]; the exposure time between adjacent grids in the same grid column is defined as unit time;
Step 2, estimate inter-frame motion: detect motion feature points from the feature points of the corresponding grids of two adjacent frames, and compute a rigid transformation matrix and a homography matrix for each video frame by random sample consensus (RANSAC); the rigid transformation matrix and homography matrix of the grid in row i, column j of the t-th frame are obtained for i, j ∈ [1, 8], t ∈ [1, N];
Step 3, construct the data fidelity term and the motion-smoothing regularization term: define the intra-frame motion of a grid per unit time using the current frame I_t and its neighboring frames I_{t-1} and I_{t+1};
construct the inter-frame/intra-frame data fidelity term P(F) according to the fidelity between the sum of the intra-frame motions within the same grid column and the inter-frame motion;
construct the intra-frame motion-smoothing regularization term Q(F) according to the similarity of intra-frame motion under dense sampling;
Step 4, construct the joint inter-frame/intra-frame motion optimization model from the constraint terms built in step 3: argmin_F P(F) + λQ(F), where the regularization parameter λ > 0;
Step 5, estimate intra-frame motion: using the additivity of the rotation angle θ, horizontal displacement x, and vertical displacement y among the rigid-transformation parameters, optimize the joint model of step 4 for each parameter separately, solve for the three intra-frame motion parameters θ, x, and y, and synthesize the intra-frame motion matrix;
Step 6, setting an adaptive sliding window: adopting windowing processing to each grid, setting the window size to be s, and obtaining the t-th frame I in the step 2tInter-frame motion matrix of And the intra-frame motion matrix obtained in the step 5s has a value in the range of [0,30 ]]An integer of (d);
Step 7, compute the adaptive weights: compute the temporal distance and the spatial distance between the current grid and the global-shutter-point grid of the k-th frame, form the weights with a Gaussian function G(·), obtain a set of weight vectors, and estimate a uniform weight vector for the grids of the same frame by normalizing with the L1 norm;
Step 8, solve for the restoration transformation: from the adaptive weights w_{t,k} obtained in step 7 and the relation between the inter-frame motion accumulated between grids within the window and the intra-frame motion accumulated from the grid in row i, column j of the k-th frame to its grid column's global shutter point (i, j ∈ [1, 8], t ∈ [1, N]), solve the restoration transformation of the grid in row i, column j of the t-th frame I_t; their combination represents the total motion from the current grid to the global shutter point of the k-th frame;
2. The video stabilization method for simultaneously eliminating rolling artifacts and jitter according to claim 1, wherein: when the data fidelity term is constructed in step 3, the inter-frame motion of a grid is shared by the other grids in the same grid column; this yields an identity whose left side is the grid's intra-frame motion accumulated over 8 unit times and whose right side is its corresponding inter-frame motion, from which the data fidelity term P(F) between the sum of the intra-frame motions and the inter-frame motion is designed.
3. The video stabilization method for simultaneously eliminating rolling artifacts and jitter according to claim 1, wherein: when the motion-smoothing regularization term is constructed in step 3, intra-frame motion is similar under high-frequency sampling: the intra-frame motion of a grid should be similar to that of the grid in the next grid row of the same grid column, from which the smoothing regularization term Q(F) is designed.
4. The video stabilization method for simultaneously eliminating rolling artifacts and jitter according to claim 1, wherein: the rigid transformation in step 5 is defined as a three-degree-of-freedom transformation in which θ, x, and y denote the rotation angle, horizontal displacement, and vertical displacement between adjacent grids, respectively; the transformation has an additive property over these parameters.
5. The video stabilization method for simultaneously eliminating rolling artifacts and jitter according to claim 1, wherein: in step 5, because the rigid transformation is additive, the optimization model of step 4 can be decomposed into optimization models in the three degrees of freedom. For horizontal displacement, with f denoting the intra-frame horizontal displacement and r the inter-frame horizontal displacement, the data fidelity term and motion-smoothing term of step 3 become P(f) and Q(f), and the optimization model becomes argmin_f P(f) + λQ(f); the horizontal displacement of each grid is obtained from this new model, the vertical displacement and rotation angle are solved in the same way, and the model can be expressed in matrix-vector form.
6. The video stabilization method for simultaneously eliminating rolling artifacts and jitter according to claim 1, wherein: when the adaptive weights are constructed in step 7, the temporal distance between a grid and the global-shutter-point grid of the k-th frame is defined as |t - k|, and the spatial distance is given by the horizontal and vertical distances between the two grids.
7. The video stabilization method for simultaneously eliminating rolling artifacts and jitter according to claim 1, wherein: step 8 defines the intra-frame motion accumulated from the grid in row i, column j of the k-th frame to the global shutter point; taking the 4th grid row as the global shutter point, the intra-frame motion restoration matrix of the grid in row i, column j of the t-th frame I_t can be expressed accordingly, and applying it removes the rolling artifact of each grid.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911260902.8A CN111355881B (en) | 2019-12-10 | 2019-12-10 | Video stabilization method for simultaneously eliminating rolling artifacts and jitters |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111355881A true CN111355881A (en) | 2020-06-30 |
CN111355881B CN111355881B (en) | 2021-09-21 |
Family
ID=71193967
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911260902.8A Active CN111355881B (en) | 2019-12-10 | 2019-12-10 | Video stabilization method for simultaneously eliminating rolling artifacts and jitters |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111355881B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014016451A (en) * | 2012-07-09 | 2014-01-30 | Ricoh Co Ltd | Imaging device, method for calculating camera shake correction amount, and program for calculating camera shake correction amount |
WO2016019238A1 (en) * | 2014-07-31 | 2016-02-04 | Apple Inc. | Piecewise perspective transform engine |
CN108010059A (en) * | 2017-12-05 | 2018-05-08 | 北京小米移动软件有限公司 | The method for analyzing performance and device of electronic flutter-proof algorithm |
CN108886584A (en) * | 2016-04-20 | 2018-11-23 | 三星电子株式会社 | Method and apparatus for generating high-fidelity scaling for mobile video |
CN109905565A (en) * | 2019-03-06 | 2019-06-18 | 南京理工大学 | Video stabilization method based on motor pattern separation |
Non-Patent Citations (1)
Title |
---|
ZHANG BO: "Research on CMOS Image Processing Technology Based on NSC1003", China Master's Theses Full-text Database *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11699217B2 (en) | Generating gaze corrected images using bidirectionally trained network | |
Liu et al. | Blind quality assessment of camera images based on low-level and high-level statistical features | |
WO2021208122A1 (en) | Blind video denoising method and device based on deep learning | |
CN111193923B (en) | Video quality evaluation method and device, electronic equipment and computer storage medium | |
CN105052129B (en) | Camera motion estimation, rolling shutter detection and the camera shake for video stabilisation is cascaded to detect | |
EP2999210B1 (en) | Generic platform video image stabilization | |
Ciancio et al. | No-reference blur assessment of digital pictures based on multifeature classifiers | |
US9824426B2 (en) | Reduced latency video stabilization | |
CN104103050B (en) | A kind of real video restored method based on local policy | |
US11303793B2 (en) | System and method for high-resolution, high-speed, and noise-robust imaging | |
US8264614B2 (en) | Systems and methods for video processing based on motion-aligned spatio-temporal steering kernel regression | |
EP3216216A1 (en) | Methods and systems for multi-view high-speed motion capture | |
CN111901532B (en) | Video stabilization method based on recurrent neural network iteration strategy | |
WO2014070273A1 (en) | Recursive conditional means image denoising | |
CN112184779A (en) | Method and device for processing interpolation image | |
KR101341871B1 (en) | Method for deblurring video and apparatus thereof | |
CN107360377B (en) | Vehicle-mounted video image stabilization method | |
Yasarla et al. | CNN-based restoration of a single face image degraded by atmospheric turbulence | |
CN112287819A (en) | High-speed multi-channel real-time image stabilizing method for video recording equipment | |
CN111371983A (en) | Video online stabilization method and system | |
CN111355881B (en) | Video stabilization method for simultaneously eliminating rolling artifacts and jitters | |
CN109905565B (en) | Video de-jittering method based on motion mode separation | |
CN103634591A (en) | Method, device and system for evaluating video quality | |
Mohan | Adaptive super-resolution image reconstruction with lorentzian error norm | |
Lee et al. | Efficient Low Light Video Enhancement Based on Improved Retinex Algorithms |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB03 | Change of inventor or designer information | Inventor after: Xiao Liang, Wu Huicong, Yang Fan; inventor before: Wu Huicong, Yang Fan, Xiao Liang |
| GR01 | Patent grant | |