CN105049678A - Self-adaptation camera path optimization video stabilization method based on ring winding - Google Patents
- Publication number
- CN105049678A (application CN201510504730.XA)
- Authority
- CN
- China
- Prior art keywords
- video
- frame
- prime
- camera path
- motion model
- Prior art date
- Legal status
- Granted
Landscapes
- Studio Devices (AREA)
Abstract
The invention discloses a ring-winding-based adaptive camera-path-optimization video stabilization method, which addresses the strong environmental limitations and poor results of existing video stabilization techniques. The method comprises the following steps: 1, motion estimation; 2, a smoothing operation; 3, distortion detection and content-retention detection on the smoothed frames; 4, winding of the camera path around the ring; 5, repetition of steps 2 to 4 until the high-frequency and low-frequency components in the video have been removed; and 6, rendering of the final stable video through image transformation. The method optimizes conventional motion estimation and, through the ring-winding scheme, establishes a path-optimization procedure that adapts to the distortion degree and content-retention degree of the video. It reduces video-frame distortion, overcomes the problem of excessively low content retention, and at the same time effectively suppresses the low-frequency components in the video, thereby improving video stability and coping well with scene changes and changes in camera motion.
Description
Technical field
The present invention relates to image processing methods for video shot by mobile terminal devices, and specifically to a method for stabilizing video shot by a mobile terminal.
Background technology
Video shot by a mobile terminal suffers from camera shake during capture; the irregular shaking makes the picture wobble and reduces the viewing quality of the video. Therefore, in order to improve the viewing quality of video captured by mobile terminals, video stabilization processing is particularly important.
Video stabilization aims to reduce the jitter introduced during video capture and to produce a stable video, thereby improving viewing quality. Existing video stabilization methods can roughly be divided into three modules: a motion estimation module, a path optimization module and a video rendering module. The motion estimation module characterizes the original camera motion in some form; common methods include block matching, feature-point methods and three-dimensional reconstruction. The path optimization module converts the original jittery camera motion into a stable camera motion; common methods include low-pass filtering and motion-vector smoothing. The video rendering module transforms each frame of the original video from the jittery original motion path onto the stable camera path, thereby producing the stabilized video; common methods are based on a single homography transform or on per-block homography transforms.
However, the content of video shot on mobile terminals is extremely varied, and so are the forms of motion: the content may cover daytime, night, indoor, outdoor, wide open spaces or noisy crowds, while the motion may include ordinary walking shots, fast airborne shots, outdoor sports shots, zooming, and rapid transitions. This diversity of capture content and motion affects all three modules of video stabilization to varying degrees, causing the stabilized video to exhibit distortion artifacts (frames become warped, or too little of the video content is retained).
Specifically, in the motion estimation module, the accuracy of block matching and feature-point methods suffers when the scene is either too plain or too complex; inaccurate motion estimation affects the two subsequent modules, degrading the stabilization result and, in severe cases, warping the video frames. Three-dimensional reconstruction methods depend on reconstructing the scene in three dimensions, consume large amounts of computation and time, and are prone to erroneous reconstructions, so they lack practicality. In the path optimization module, low-pass filtering and motion-vector smoothing can effectively remove the high-frequency components of camera shake, but they have difficulty removing certain low-frequency components of the camera motion; the high-frequency components correspond to camera trembling, while the low-frequency components correspond to slow camera swaying. Forcibly removing the camera sway creates problems for the rendering module: in fast-moving scenes it leaves too little video content in the final rendered result, harming the viewing experience.
Summary of the invention
To overcome the above problems, the invention provides a video stabilization method based on ring-winding adaptive camera path optimization.
To achieve this goal, the technical solution adopted by the present invention is as follows:
A video stabilization method based on ring-winding adaptive camera path optimization comprises the following steps:
Step 1: through the joint action of feature-point matching between adjacent frames and block search between adjacent frames, estimate the homography-based inter-frame motion model between adjacent frames;
Step 2: apply Gaussian smoothing to the inter-frame motion models of the jittered frames to obtain the inter-frame motion models of the stabilized frames;
Step 3: perform distortion detection and content-retention detection on the smoothed frames;
Step 4: wind the camera path around the ring;
Step 5: repeat steps 2 to 4 until the high-frequency and low-frequency components in the video have been removed;
Step 6: render the final stable video through image transformation.
Further, step 1 is carried out as follows:
(11) Motion estimation is performed between frames, using a homography transform as the inter-frame motion model to describe the motion between adjacent frames:
(x', y', z')ᵀ = [[a, b, c], [d, e, f], [g, h, 1]] · (x, y, 1)ᵀ
where (x', y', z') are the coordinates after the transform, corresponding to the position of the image point in the frame preceding the current frame, and (x, y) are the coordinates before the transform, corresponding to the position of the point in the current frame; in the homography matrix, a, b, d, e characterize image rotation and scaling, g, h characterize the perspective transformation, and c, f characterize image translation;
(12) From the corresponding point pairs A = {(x', y') ↔ (x, y)} between the previous frame and the following frame, the values of the parameters of the inter-frame motion model are estimated.
Further, step (12) is carried out as follows:
(121) Feature points of the images are matched to obtain an initial set of point pairs; RANSAC with a homography model is used to reject mismatched pairs, and the resulting first point-pair set is denoted A1;
(122) The image is divided into 16x16 equally sized region blocks, each frame of the video is converted to a grayscale image, the grayscale variance within each block is computed, and the blocks whose variance exceeds a specified threshold are retained;
(123) Block matching: for each retained block, the best-matching block is searched for in the grayscale content of the previous frame; the point pairs collected through block matching form the second point-pair set, denoted A2;
(124) The two point-pair sets are merged into one set, denoted A = A1 ∪ A2;
(125) Each point pair in A is written into the following two linear equations:
a·x + b·y + c − g·x·x' − h·y·x' = x'
d·x + e·y + f − g·x·y' − h·y·y' = y'
and stacking the equations of all point pairs yields a system that is solved by least squares for the parameters of the inter-frame motion model.
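For illustration, the sketch below shows how such a point-pair set and homography could be assembled with OpenCV; the use of ORB features, the RANSAC threshold of 3.0 pixels, and the function names are assumptions for this sketch, not the patent's prescribed implementation.

```python
import cv2
import numpy as np

def estimate_interframe_homography(prev_gray, curr_gray):
    """Sketch of step (12): build a point-pair set from feature matches
    and fit a homography by robust least squares (RANSAC + refinement)."""
    orb = cv2.ORB_create(nfeatures=2000)            # assumed detector; the text does not name one
    kp1, des1 = orb.detectAndCompute(curr_gray, None)
    kp2, des2 = orb.detectAndCompute(prev_gray, None)
    if des1 is None or des2 is None:
        return np.eye(3)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return np.eye(3)

    # (x, y): points in the current frame; (x', y'): matched points in the previous frame
    pts_curr = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_prev = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched pairs (the set A1); findHomography then solves the
    # overdetermined system for the eight parameters a..h in a least-squares sense.
    F_t, inlier_mask = cv2.findHomography(pts_curr, pts_prev, cv2.RANSAC, 3.0)
    return F_t if F_t is not None else np.eye(3)
```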
Further, step 2 is carried out as follows: starting from time 0 and continuing until the end of the video, an inter-frame model F_t is computed between every two consecutive frames; cascading the F_t in sequence yields the chained set of jittered inter-frame motion models:
F = {F_0, F_0·F_1, … , F_0·F_1·…·F_{t-1}·F_t}
A smoothing operation is applied to each element of this set, giving the set of stabilized inter-frame motion models:
H = {H_0, H_0·H_1, … , H_0·H_1·…·H_{t-1}·H_t}.
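A minimal sketch of this accumulation, assuming the per-frame models F_t are supplied as 3x3 NumPy arrays; the function name and the normalization of the bottom-right element to 1 are illustrative choices.

```python
import numpy as np

def accumulate_camera_path(inter_frame_models):
    """Step 2 (first half): chain the per-frame homographies F_t into the
    cumulative jittered path F = {F_0, F_0*F_1, ..., F_0*F_1*...*F_t}."""
    path = []
    C = np.eye(3)
    for F_t in inter_frame_models:   # F_t: 3x3 homography between consecutive frames
        C = C @ F_t                  # cascade in temporal order
        C = C / C[2, 2]              # keep the bottom-right element fixed at 1
        path.append(C.copy())
    return path                      # one cumulative matrix per frame
```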
Further, in step 4 the camera path is wound around the ring according to the winding formula: the smoothing operation is repeated until the low-frequency components present in the video have been eliminated. In each pass, the H_t computed with formula (3) replaces the F_t in the original path, so that the next smoothing operation can be carried out. B_t denotes the transformation matrix that takes the frame at each time t from its jittered position to its stabilized position, and is itself a homography motion model. H_t denotes the stabilized inter-frame motion model obtained after smoothing the inter-frame models F.
Further, in step 3 the transformation matrix B_t is analysed to obtain an objective measure of the distortion of the video frame: the four upper-left elements of B_t are taken to form a small 2x2 matrix, a singular value decomposition is applied to this matrix, and the ratio κ = λ_1/λ_2 of the values on its main diagonal characterizes whether the image is distorted; the closer this value is to 1, the smaller the distortion, and vice versa.
The transformation matrix B_t is also analysed to obtain the content-retention degree of the frame: the 4 corner points of the original image are transformed by the homography, giving 4 new corner points and hence a new quadrilateral; the maximum inscribed rectangle is found inside this quadrilateral, and the ratio of its area to the area of the original rectangle is computed and denoted π. The 4 original corner points are the 4 vertices of the picture, i.e. the 4 corners of the rectangle shown in the left part of Fig. 3; each corner point is written in the form [x, y, 1] and transformed by B_t according to formula (1).
Further, during each smoothing operation the values of κ and π are monitored; when κ < 0.9, or κ > 1.1, or π < 0.8, the transformation matrix B_t of the corresponding frame is halved towards the identity, i.e. B_t = (B_t + I)/2, where I denotes the identity matrix.
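The distortion check can be sketched as follows; this is an illustrative reading in which the "four upper-left elements" form a 2x2 matrix and the halving step moves B_t halfway toward the identity, both of which are interpretations of the text rather than verbatim formulas.

```python
import numpy as np

def distortion_ratio(B_t):
    """kappa = lambda_1 / lambda_2 of the upper-left 2x2 block of B_t (step 3)."""
    s = np.linalg.svd(B_t[:2, :2], compute_uv=False)   # singular values, descending
    return s[0] / s[1]

def dampen_if_needed(B_t, pi_t, k_lo=0.9, k_hi=1.1, pi_min=0.8):
    """Halve B_t toward the identity when kappa or the retention ratio pi fails its bound."""
    kappa = distortion_ratio(B_t)
    if kappa < k_lo or kappa > k_hi or pi_t < pi_min:
        B_t = (B_t + np.eye(3)) / 2.0
    return B_t
```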
It should be noted that "camera" in this specification refers to any device with a shooting function, such as a smartphone or a tablet computer, and not specifically to a traditional camera.
Compared with the prior art, the present invention has the following advantages:
By optimizing conventional motion estimation and, through the ring-winding scheme, establishing a path-optimization procedure that adapts to the distortion degree and content-retention degree of the video, the invention reduces video-frame distortion and overcomes the problem of excessively low content retention while effectively suppressing the low-frequency components in the video, thereby improving video stability and coping well with scene changes and changes in camera motion.
Description of the drawings
Fig. 1 is a diagram of the relationship between the initial camera motion positions and the stabilized camera motion positions in the embodiment of the present invention.
Fig. 2 is a schematic diagram of the low-pass Gaussian filtering in the embodiment of the present invention.
Fig. 3 is a schematic diagram of the computation of the frame content-retention degree in the embodiment of the present invention.
Fig. 4 is a flow chart of the present invention.
Detailed description
Embodiment
As shown in Fig. 4, this embodiment provides a video stabilization method based on ring-winding adaptive camera path optimization. The method consists of three parts: (1) a motion estimation module; (2) a path optimization module; (3) a video rendering module.
The motion estimation module is based on the homography transform; the process of estimating the homography is improved, which effectively raises the accuracy of motion estimation. The inter-frame motion model is obtained by motion estimation, as follows:
1. Motion estimation. Motion estimation is performed between frames, using a homography transform as the inter-frame motion model to describe the motion between adjacent frames. The describable motion types include rotation, translation, scaling and perspective transformation. The inter-frame motion model is:
(x', y', z')ᵀ = [[a, b, c], [d, e, f], [g, h, 1]] · (x, y, 1)ᵀ
where (x', y', z') are the coordinates after the transform, corresponding to the position of the image point in the frame preceding the current frame, and (x, y) are the coordinates before the transform, corresponding to the position of the point in the current frame. The true x-y coordinates after the transform are (x'/z', y'/z'). The parameters to be determined in the homography matrix are a, b, c, d, e, f, g, h, where a, b, d, e characterize image rotation and scaling, g, h characterize the perspective transformation, and c, f characterize image translation.
2. From the corresponding point pairs A = {(x', y') ↔ (x, y)} between the previous frame and the following frame, the values of the parameters of the homography are estimated. The positions of the point pairs are obtained through the following steps:
(a) Feature points of the images are matched to obtain an initial set of point pairs; RANSAC with a homography model is used to reject mismatched pairs, and the resulting first point-pair set is denoted A1;
(b) The image is divided into 16x16 equally sized region blocks, each frame of the video is converted to a grayscale image, the grayscale variance within each block is computed, and the blocks whose variance exceeds a specified threshold are retained.
(c) For each retained block, the best-matching block is searched for in the grayscale content of the previous frame. The matching criterion is the MAD (Mean Absolute Difference). The centre of the original block is denoted (x, y) and the search radius is r = 16, i.e. the search region is the neighbourhood (x-r, y-r) to (x+r, y+r) around the original block position. The best position found in the search, i.e. the one with the minimum MAD value, is recorded as (x', y'). The point pairs collected through block matching form the second point-pair set, denoted A2.
(d) The two point-pair sets are merged into one set, denoted A = A1 ∪ A2.
(e) By the least-squares method, each point pair is written into the two equations
a·x + b·y + c − g·x·x' − h·y·x' = x'
d·x + e·y + f − g·x·y' − h·y·y' = y'
so that all the point pairs in A together form an overdetermined system of linear equations; solving it by least squares gives the inter-frame homography parameters. The resulting inter-frame homography motion model maps the current frame towards the previous frame. If the current frame is at time t and the previous frame at time t-1, the homography motion model is denoted F_t.
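The MAD block search of step (c) could look like the sketch below; the exhaustive search and the block half-size of 8 pixels are assumptions made for illustration, since the text only specifies the search radius.

```python
import numpy as np

def mad(block_a, block_b):
    """Mean Absolute Difference between two equally sized grayscale blocks."""
    return np.mean(np.abs(block_a.astype(np.float32) - block_b.astype(np.float32)))

def match_block(prev_gray, curr_gray, x, y, half=8, r=16):
    """Exhaustive MAD search: find where the block centred at (x, y) in the
    current frame sits in the previous frame, within radius r (step (c))."""
    template = curr_gray[y - half:y + half, x - half:x + half]
    if template.shape != (2 * half, 2 * half):
        return (x, y)                          # block too close to the border; keep it unchanged
    best, best_xy = np.inf, (x, y)
    h, w = prev_gray.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            cx, cy = x + dx, y + dy
            if half <= cx < w - half and half <= cy < h - half:
                cand = prev_gray[cy - half:cy + half, cx - half:cx + half]
                score = mad(template, cand)
                if score < best:
                    best, best_xy = score, (cx, cy)
    return best_xy                             # recorded as (x', y') and added to the set A2
```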
Starting from time 0 and continuing until the end of the video, an F_t is computed between every two consecutive frames; cascading the F_t in sequence yields a chained set: F = {F_0, F_0·F_1, … , F_0·F_1·…·F_{t-1}·F_t}.
The path optimization module is based on the ring-winding adaptive camera path optimization method. Here, "winding around the ring" means that, in the path optimization, a concatenated ring exists between adjacent frames of the initial jittered path and of the optimized path; by winding around this ring, the camera path can be optimized iteratively, which effectively removes the low-frequency components remaining in the camera path. "Adaptive" means that, during the winding optimization, positions on the camera path where winding cannot be performed are detected automatically, which avoids the excessively low content retention that over-optimization would cause. During the winding process, a homography representing inter-frame motion can also be set to the identity matrix, so that parts of the ring optimization where the motion estimation is inaccurate are reduced to the original video frame: for the parts where path optimization should not be done, no path optimization is done, and the partial result becomes the original video frame, or a position close to it, which effectively reduces frame distortion in the result. The adaptive ring-winding path optimization thus reduces frame distortion and overcomes the problem of low content retention while effectively suppressing the low-frequency components in the video, thereby improving video stability.
In this embodiment, the particulars of the path optimization are as follows:
A smoothing operation is applied to each element of the chained set F = {F_0, F_0·F_1, … , F_0·F_1·…·F_{t-1}·F_t}, giving a new set H = {H_0, H_0·H_1, … , H_0·H_1·…·H_{t-1}·H_t}. Fig. 1 illustrates the relationship between the initial camera motion positions and the stabilized camera motion positions: the first row shows the original jittered frames, whose inter-frame motion is represented by the homography models F_t; the second row shows the stabilized frames (during or after optimization), whose inter-frame motion is represented by the homography models H_t. At each time t, the transformation that takes the frame from its jittered position (first row) to its stable position (second row) is denoted B_t; this transformation B_t is also a homography.
In the smoothing process, low-pass Gaussian filtering is applied to the corresponding positions of the inter-frame motion model matrices. Specifically, low-pass filtering is applied to each position of the matrix: as shown in Fig. 2, each nine-cell grid represents one homography matrix F_t; the values in corresponding cells across time are concatenated into a string, forming a signal. For example, the upper-left-corner signal can be written as a_0, a_1, a_2, … , a_t. Apart from the lower-right corner, which is fixed at 1, a total of 8 useful signals are obtained. Each of the 8 signals is smoothed independently; the smoothing kernel is an ordinary Gaussian kernel, denoted G. After smoothing, the transformation matrix B_t corresponding to each frame is obtained, and from it the transform H_t of the stable position is computed.
A single smoothing pass can effectively remove the high-frequency components in the video, but it can hardly remove the low-frequency components, so the filtering must be repeated. This is where the winding operation comes in: H_t is expressed through F_t together with B_t and B_{t-1} by the winding formula. The H computed with the winding formula replaces the F in the original path, so that the next smoothing operation can be carried out, and this is repeated until the low-frequency components present in the video have been eliminated.
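The iteration can be sketched as below. The patent's winding formula (formula (3)) is not reproduced in the text, so the relation used here to obtain B_t from the cumulative jittered path C_t and the smoothed path P_t, and the fixed number of passes, are assumptions for illustration only.

```python
import numpy as np

def ring_winding(per_frame_models, smooth_fn, dampen_fn, passes=5):
    """Repeat smoothing, replacing the path by its smoothed version each pass,
    until low-frequency motion is suppressed (here: a fixed number of passes)."""
    F = list(per_frame_models)                     # per-frame homographies F_t
    B = [np.eye(3) for _ in F]
    for _ in range(passes):
        C = accumulate_camera_path(F)              # jittered cumulative path (see earlier sketch)
        P = smooth_fn(C)                           # Gaussian-smoothed path
        # Assumed relation: B_t carries the jittered position C_t to the stable position P_t.
        B = [np.linalg.inv(C_t) @ P_t for C_t, P_t in zip(C, P)]
        B = [dampen_fn(B_t) for B_t in B]          # adaptive halving from the kappa / pi checks
        # Winding step: the smoothed inter-frame models H_t replace the F_t for the next pass.
        F = [np.linalg.inv(P[t - 1]) @ P[t] if t > 0 else P[0] for t in range(len(P))]
    return B
```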
During the winding computation, distortion detection and content-retention detection are also performed on the smoothed frames, as follows:
1. The transformation matrix B_t is analysed to obtain an objective measure of the distortion of the video frame. B_t is itself a homography; its four upper-left elements are taken to form a small 2x2 matrix, a singular value decomposition (SVD) is applied to this matrix, and the ratio κ = λ_1/λ_2 of the values on its main diagonal characterizes whether the image is distorted: the closer this value is to 1, the smaller the distortion, and vice versa.
2. The transformation matrix B_t is also analysed to obtain the content-retention degree of the frame. As shown in Fig. 3, transforming the 4 corner points by the homography yields a new quadrilateral; the maximum inscribed rectangle is found inside this quadrilateral (shown dashed in the figure), and the ratio of its area to the area of the original rectangle is computed and denoted π.
During each smoothing pass the values of κ and π are monitored; when κ < 0.9 or κ > 1.1 or π < 0.8, the transformation matrix B_t of the corresponding frame is halved towards the identity, i.e. B_t = (B_t + I)/2, where I denotes the identity matrix. The computation ends when the κ and π corresponding to the B_t of all frames satisfy the above conditions.
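The retention ratio π can be approximated as sketched below: the corners are warped by B_t and the largest centred, aspect-preserving rectangle that fits inside the warped (convex) quadrilateral is found by bisection. This is an approximation of the maximum inscribed rectangle described in the text, introduced here only for illustration.

```python
import numpy as np

def warp_corners(B_t, w, h):
    """Apply B_t to the 4 image corners [x, y, 1] and normalize by z'."""
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], dtype=np.float64)
    warped = (B_t @ corners.T).T
    return warped[:, :2] / warped[:, 2:3]

def points_in_convex_quad(quad, pts):
    """True if every point lies inside the convex quadrilateral (vertices in order)."""
    signs = []
    for i in range(4):
        a, b = quad[i], quad[(i + 1) % 4]
        edge, rel = b - a, pts - a
        signs.append(edge[0] * rel[:, 1] - edge[1] * rel[:, 0])
    signs = np.array(signs)                  # shape (4, n_points)
    return bool(np.all(signs >= -1e-9) or np.all(signs <= 1e-9))

def retention_ratio(B_t, w, h, iters=30):
    """Approximate pi: area fraction of the largest centred w:h rectangle
    whose corners all fall inside the warped image quadrilateral."""
    quad = warp_corners(B_t, w, h)
    cx, cy = w / 2.0, h / 2.0
    lo, hi = 0.0, 1.0
    for _ in range(iters):                   # bisection on the scale factor
        s = (lo + hi) / 2.0
        rect = np.array([[cx - s * w / 2, cy - s * h / 2],
                         [cx + s * w / 2, cy - s * h / 2],
                         [cx + s * w / 2, cy + s * h / 2],
                         [cx - s * w / 2, cy + s * h / 2]])
        if points_in_convex_quad(quad, rect):
            lo = s
        else:
            hi = s
    return lo * lo                           # area ratio of the scaled rectangle
```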
Using the resulting B_t (halved where required), each frame of the video is transformed onto its stable position; the mapping used is image warping based on the homography transform. After the path optimization is complete, the final stable video is rendered through this image transformation.
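Rendering a stabilized frame can be sketched with OpenCV's perspective warp; keeping the output size equal to the input size is an assumption, since the text does not discuss cropping.

```python
import cv2

def render_stable_frame(frame, B_t):
    """Warp a jittered frame onto its stabilized position using the homography B_t."""
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, B_t, (w, h))
```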
The present invention can be well realized according to the above embodiment. It should be noted that, on the premise of the above technical solution and for solving the same technical problem, even if insubstantial changes or refinements are made to the invention, the essence of the adopted technical solution remains the same as that of the present invention and therefore also falls within the protection scope of the present invention.
Claims (7)
1. A video stabilization method based on ring-winding adaptive camera path optimization, characterized by comprising the following steps:
Step 1: through the joint action of feature-point matching between adjacent frames and block search between adjacent frames, estimating the homography-based inter-frame motion model between adjacent frames;
Step 2: applying Gaussian smoothing to the inter-frame motion models of the jittered frames to obtain the set of inter-frame motion models of the stabilized frames;
Step 3: performing distortion detection and content-retention detection on the smoothed frames;
Step 4: winding the camera path around the ring;
Step 5: repeating steps 2 to 4 until the high-frequency and low-frequency components in the video have been removed;
Step 6: rendering the final stable video through image transformation.
2. The video stabilization method based on ring-winding adaptive camera path optimization according to claim 1, characterized in that step 1 is carried out as follows:
(11) motion estimation is performed between frames, using a homography transform as the inter-frame motion model to describe the motion between adjacent frames:
(x', y', z')ᵀ = [[a, b, c], [d, e, f], [g, h, 1]] · (x, y, 1)ᵀ
wherein (x', y', z') are the coordinates after the transform, corresponding to the position of the image point in the frame preceding the current frame, (x, y) are the coordinates before the transform, corresponding to the position of the point in the current frame, a, b, d, e in the homography matrix characterize image rotation and scaling, g, h characterize the perspective transformation, and c, f characterize image translation, the matrix representing the inter-frame motion model;
(12) from the corresponding point pairs A = {(x', y') ↔ (x, y)} between the previous frame and the following frame, the values of the parameters of the inter-frame motion model are estimated.
3. The video stabilization method based on ring-winding adaptive camera path optimization according to claim 2, characterized in that step (12) is carried out as follows:
(121) feature points of the images are matched to obtain an initial set of point pairs; RANSAC with a homography model is used to reject mismatched pairs, and the resulting first point-pair set is denoted A1;
(122) the image is divided into 16x16 equally sized region blocks, each frame of the video is converted to a grayscale image, the grayscale variance within each block is computed, and the blocks whose variance exceeds a specified threshold are retained;
(123) block matching: for each retained block, the best-matching block is searched for in the grayscale content of the previous frame; the point pairs collected through block matching form the second point-pair set, denoted A2;
(124) the two point-pair sets are merged into one set, denoted A = A1 ∪ A2;
(125) by the least-squares method, each point pair is written into the two equations
a·x + b·y + c − g·x·x' − h·y·x' = x'
d·x + e·y + f − g·x·y' − h·y·y' = y'
and the parameters of the inter-frame motion model are obtained by solving the resulting system by least squares.
4. The video stabilization method based on ring-winding adaptive camera path optimization according to claim 3, characterized in that step 2 is carried out as follows: starting from time 0 and continuing until the end of the video, an inter-frame model F_t is computed between every two consecutive frames; cascading the F_t in sequence yields the chained set of jittered inter-frame motion models:
F = {F_0, F_0·F_1, … , F_0·F_1·…·F_{t-1}·F_t}
and a smoothing operation is applied to each element of this set, giving the set of stabilized inter-frame motion models:
H = {H_0, H_0·H_1, … , H_0·H_1·…·H_{t-1}·H_t}.
5. The video stabilization method based on ring-winding adaptive camera path optimization according to claim 4, characterized in that in step 4 the camera path is wound around the ring according to the winding formula: the smoothing operation is repeated until the low-frequency components present in the video have been eliminated; wherein the H_t computed with formula (3) replaces the F_t in the original path so that the next smoothing operation can be carried out, and B_t denotes the transformation matrix that takes the frame at each time t from its jittered position to its stabilized position, which is itself a homography motion model.
6. The video stabilization method based on ring-winding adaptive camera path optimization according to claim 5, characterized in that in step 3 the transformation matrix B_t is analysed to obtain an objective measure of the distortion of the video frame: the four upper-left elements of B_t are taken to form a small 2x2 matrix, a singular value decomposition is applied to this matrix, and the ratio κ = λ_1/λ_2 of the values on its main diagonal characterizes whether the image is distorted, the closer this value is to 1, the smaller the distortion, and vice versa;
the transformation matrix B_t is also analysed to obtain the content-retention degree of the frame: the 4 corner points of the image are transformed by the homography to obtain 4 new corner points, a new quadrilateral is obtained from the 4 new corner points, the maximum inscribed rectangle is found inside the new quadrilateral, and the ratio of its area to the area of the original rectangle is computed and denoted π.
7. The video stabilization method based on ring-winding adaptive camera path optimization according to claim 6, characterized in that, in the course of each smoothing operation, the values of κ and π are monitored; when κ < 0.9, or κ > 1.1, or π < 0.8, the transformation matrix B_t of the corresponding frame is halved towards the identity, i.e. B_t = (B_t + I)/2, where I denotes the identity matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510504730.XA CN105049678B (en) | 2015-08-17 | 2015-08-17 | It is a kind of based on the video anti-fluttering method optimized around loop self-adaptive camera path |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105049678A true CN105049678A (en) | 2015-11-11 |
CN105049678B CN105049678B (en) | 2018-08-03 |
Family
ID=54455853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510504730.XA Expired - Fee Related CN105049678B (en) | 2015-08-17 | 2015-08-17 | It is a kind of based on the video anti-fluttering method optimized around loop self-adaptive camera path |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105049678B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060078162A1 (en) * | 2004-10-08 | 2006-04-13 | Dynapel, Systems, Inc. | System and method for stabilized single moving camera object tracking |
CN102685371A (en) * | 2012-05-22 | 2012-09-19 | 大连理工大学 | Digital video image stabilization method based on multi-resolution block matching and PI (Portion Integration) control |
CN103139568A (en) * | 2013-02-05 | 2013-06-05 | 上海交通大学 | Video image stabilizing method based on sparseness and fidelity restraining |
Non-Patent Citations (2)
Title |
---|
朱娟娟 (Zhu Juanjuan), "Research on electronic image stabilization theory and its applications", China Excellent Master's and Doctoral Dissertations Full-text Database, Information Science and Technology Series *
王栋 (Wang Dong), "Research on electronic image stabilization algorithms under compound motion", China Excellent Master's and Doctoral Dissertations Full-text Database, Information Science and Technology Series *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108702442A (en) * | 2016-10-14 | 2018-10-23 | 深圳市大疆创新科技有限公司 | System and method for capturing constantly |
US10896520B2 (en) | 2016-10-14 | 2021-01-19 | SZ DJI Technology Co., Ltd. | System and method for moment capturing |
CN108702442B (en) * | 2016-10-14 | 2021-04-16 | 深圳市大疆创新科技有限公司 | System and method for time of day capture |
CN107330909A (en) * | 2017-05-08 | 2017-11-07 | 上海交通大学 | Pure rotational motion decision method based on homography matrix characteristic |
CN108257155A (en) * | 2018-01-17 | 2018-07-06 | 中国科学院光电技术研究所 | Extended target stable tracking point extraction method based on local and global coupling |
CN108257155B (en) * | 2018-01-17 | 2022-03-25 | 中国科学院光电技术研究所 | Extended target stable tracking point extraction method based on local and global coupling |
CN109905565A (en) * | 2019-03-06 | 2019-06-18 | 南京理工大学 | Video stabilization method based on motor pattern separation |
CN109905565B (en) * | 2019-03-06 | 2021-04-27 | 南京理工大学 | Video de-jittering method based on motion mode separation |
CN115209031A (en) * | 2021-04-08 | 2022-10-18 | 北京字跳网络技术有限公司 | Video anti-shake processing method and device, electronic equipment and storage medium |
CN115209031B (en) * | 2021-04-08 | 2024-03-29 | 北京字跳网络技术有限公司 | Video anti-shake processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105049678B (en) | 2018-08-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20180803 Termination date: 20210817 |