CN111614965B - Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering - Google Patents

Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering

Info

Publication number
CN111614965B
CN111614965B (application CN202010376138.7A)
Authority
CN
China
Prior art keywords
optical flow
grid
image
filtering
image stabilization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010376138.7A
Other languages
Chinese (zh)
Other versions
CN111614965A (en)
Inventor
何楚
李勇超
林明远
田玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202010376138.7A priority Critical patent/CN111614965B/en
Publication of CN111614965A publication Critical patent/CN111614965A/en
Application granted granted Critical
Publication of CN111614965B publication Critical patent/CN111614965B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N19/537 Motion estimation other than block-based
    • H04N19/54 Motion estimation other than block-based using feature points or meshes

Abstract

The invention provides an unmanned aerial vehicle (UAV) video image stabilization method and system based on image grid optical flow filtering. The input video is divided into a mesh, the mesh vertices are set as optical-flow tracking points, and the optical flow between each frame and the previous frame is solved at the mesh vertices. A first temporal analysis of the mesh optical flow removes high-frequency components; the flow before and after filtering is then compared, and points with large changes are screened out as outer points. The optical flow of the outer points is filtered so that it becomes spatially continuous with the optical flow of the surrounding mesh vertices. The optimized optical flow is then temporally filtered, constrained through the mesh vertices so that the output video does not deform strongly, yielding a motion compensation vector for each mesh vertex. Finally, mesh warping according to the motion compensation vectors produces the stabilized output video. By filtering the discontinuous optical flow in a coarse-to-fine manner and then performing mesh-constrained motion compensation, the invention reduces the influence of deformation and moving objects and improves the stabilization quality.

Description

Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering
Technical Field
The invention belongs to the field of video processing, and particularly relates to an unmanned aerial vehicle video image stabilization technical scheme based on image grid optical flow filtering.
Background
Video image stabilization is a video processing technology that processes the original video sequence collected by a video device to remove its high-frequency jitter. It is an important research direction in the video processing field: on one hand, a stabilized video is comfortable for human eyes and benefits manual observation and discrimination; on the other hand, stabilization serves as a preprocessing stage for various subsequent tasks such as detection, tracking, and compression. Although video stabilization techniques are well developed and some new methods have emerged, problems remain. First, generality is insufficient: existing methods cannot be applied to all scenes; some rely on the distribution of feature points, and some require the scene to be static, while for scenes with complex foregrounds, varying depth of field, and moving objects a unified method is rarely available. Second, efficiency and quality are hard to obtain at the same time: complex scenes often require complex methods with high computational cost, while simple methods do not perform well enough, so real-time stabilization remains a major problem. Moreover, the criteria for evaluating stabilization are not unified: commonly used indices such as the peak signal-to-noise ratio do not always agree with the subjective perception of human eyes, and purely manual evaluation is time-consuming and highly uncertain. In particular, with UAV aerial photography becoming increasingly widespread, there is still no effective stabilization solution for video shot by UAVs.
Disclosure of Invention
To address the problems of moving objects and parallax in video image stabilization, and aiming at the defects of the prior art, the invention provides a UAV video image stabilization scheme based on image grid optical flow filtering: positions with discontinuous optical flow are screened out through temporal analysis and then spatially filtered, which effectively reduces the influence of moving objects and parallax.
The technical scheme of the invention is a UAV video image stabilization method based on image grid optical flow filtering, comprising the following steps:
step 1, divide the input video into a mesh, set the mesh vertices as optical-flow tracking points, and solve the optical flow between each frame and the previous frame at the mesh vertices;
step 2, perform a first temporal analysis of the mesh optical flow, remove high-frequency components, then compare the magnitude change before and after filtering, and screen out points with large changes as outer points;
step 3, filter the optical flow of the outer points from step 2 so that it is spatially continuous with the optical flow of the surrounding mesh vertices;
step 4, temporally filter the optimized optical flow, constraining the deformation of the output video through the mesh vertices, to obtain the motion compensation vector of each mesh vertex;
and step 5, perform mesh warping according to the motion compensation vectors to obtain the stabilized output video.
In step 1, optical flow tracking is performed at each mesh vertex with the LK (Lucas-Kanade) optical flow method, yielding the optical flow vector of the mesh vertex.
In step 2, continuity detection is performed on the original optical flow: Kalman filtering is applied in the time domain, and the result is compared with the raw flow to judge whether the optical flow has an abrupt change. This is implemented as follows.

Assuming that the camera shake follows a Gaussian distribution, the optical flow accumulated value R_{t-1} of the previous frame predicts that the optical flow of the current frame obeys the Gaussian distribution

N(R_{t-1}, P_t)

while the original optical flow obtained at the current frame obeys the Gaussian distribution N(R_t, Q_t). Here t identifies the current frame, P_t and Q_t are respectively the covariance matrices of the Gaussian distributions N(R_{t-1}, P_t) and N(R_t, Q_t), and the superscript T below denotes the transpose of a matrix.

If the camera moves steadily, the transformation matrix H_t at time t is the 3 × 3 identity matrix I, so the Kalman gain K is obtained as

K = P_{t-1} H_t^T (H_t P_{t-1} H_t^T + Q_t)^{-1} = P_{t-1} (P_{t-1} + Q_t)^{-1}

Then the best estimate R'_t of the current frame optical flow is

R'_t = R_{t-1} + K (R_t - R_{t-1})

From the covariance matrix P_{t-1} of the previous frame, the covariance matrix P_t of the current frame can be obtained as

P_t = P_{t-1} - K P_{t-1}

R'_t and R_t are compared; if the difference between the two is greater than a preset threshold ε, the corresponding point is marked as an outer point, M_t(v) = 0; otherwise M_t(v) = 1.
Furthermore, in step 3, adaptive-size sliding-window mean filtering is performed for each mesh vertex v marked M_t(v) = 0 in step 2, implemented as follows: count the mesh vertices around v with M_t(v) = 1, expanding the sliding window centred on v until their number exceeds a corresponding set threshold k, then take the mean of the optical flows of the counted mesh vertices as the optical flow of v.
Further, k = 9 may be taken.
In step 4, to obtain the motion compensation vector of each mesh vertex, the optimization objective E = E1 + E2 is adopted: the temporal filtering term E1 performs the image stabilization, and the second term, the local constraint term E2, requires the change of each mesh vertex to be as small as possible, preventing excessive offset during motion compensation.
Furthermore, the temporal filtering term E1 smooths the optical flow accumulated value R_t of the t-th frame image in the time domain by a Gaussian sliding-window method.
Furthermore, the local constraint term E2 is E2 = ||S_t - R_t||², where S_t denotes the optical flow accumulated value at mesh vertex v in the t-th frame image of the output video.
In step 5, the transformation matrix of each mesh is obtained from its four pairs of vertices, the meshes are warped, the stabilized image is obtained, and the video is output.
The invention also provides an unmanned aerial vehicle video image stabilization system based on the image grid optical flow filtering, which is used for the unmanned aerial vehicle video image stabilization method based on the image grid optical flow filtering.
According to the method, initial optical flows are screened by Kalman filtering, screened outliers are corrected by spatial filtering, and then the optimized optical flows are subjected to time domain filtering of grid constraint, so that the influence caused by deformation and moving objects is reduced, and the image stabilization quality is improved.
Drawings
FIG. 1 is a flow chart of image mesh optical flow filtering according to an embodiment of the present invention.
FIG. 2 is an example diagram of optical flow discontinuity detection in accordance with an embodiment of the present invention.
FIG. 3 is a comparison graph of integrated optical flow values before and after image stabilization according to an embodiment of the present invention.
Detailed Description
The invention provides a UAV video image stabilization method and system based on image grid optical flow filtering, built mainly on the optical flow between adjacent frames of the video sequence and taking into account the discontinuity of the optical flow at different parallaxes and at moving objects. The method fully considers both the influence of discontinuous optical flow on stabilization and the image deformation that stabilization may introduce: it detects optical flow discontinuities from coarse to fine, filters the discontinuous optical flow, and finally performs mesh-constrained motion compensation to obtain the stabilized video, reducing these influences as much as possible. The result obtained by the method is thus more accurate.
Referring to fig. 1, the embodiment explains the process of the invention concretely, taking a UAV aerial video as an example; the provided UAV video image stabilization method based on image grid optical flow filtering includes the following steps.
Step 1, divide the input video into a mesh, set the mesh vertices as optical-flow tracking points, and solve the optical flow between each frame and the previous frame at the mesh vertices.
To obtain the motion relationship between two adjacent frames of the input video, the homography between adjacent images is usually obtained by matching feature-invariant corner points, but feature points suffer from uneven distribution and are easily lost during tracking. Uneven distribution easily makes the computed homography matrix insufficient to represent the motion of the whole image, and the loss of feature points can make the whole method unstable. For this reason, the invention selects mesh optical flow instead.
In the embodiment of the invention, each frame image is divided into a number of mesh cells of size 16 × 16, and optical flow tracking is performed at each mesh vertex with the LK optical flow method, obtaining the optical flow vector u = [dx, dy] of the mesh vertex, where dx and dy are the offsets in the x and y directions respectively; the optical flow vector at vertex v at time t is written u_t(v).
The LK optical flow method is short for the Lucas-Kanade optical flow algorithm, a two-frame differential optical flow estimation algorithm. For its concrete implementation, reference may be made to the prior art; the invention does not describe it in detail.
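The mesh-vertex tracking points of step 1 can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the function name `mesh_vertices`, the frame size, and the 16 × 16 mesh layout are assumptions, and the actual LK tracking (e.g. OpenCV's `cv2.calcOpticalFlowPyrLK`) is only indicated in comments.

```python
import numpy as np

def mesh_vertices(frame_h, frame_w, rows=16, cols=16):
    """Vertex coordinates of a rows x cols mesh laid over one frame.

    Returns a ((rows+1)*(cols+1), 2) float32 array of [x, y] points;
    these are the optical-flow tracking points of step 1.
    """
    cell_h = frame_h / rows                      # mesh cell height h
    cell_w = frame_w / cols                      # mesh cell width  w
    ys, xs = np.mgrid[0:rows + 1, 0:cols + 1]    # vertex row/col indices i, j
    pts = np.stack([xs.ravel() * cell_w, ys.ravel() * cell_h], axis=1)
    return pts.astype(np.float32)

pts = mesh_vertices(480, 640)                    # 17 x 17 = 289 vertices
# A real pipeline would then track these vertices between consecutive
# frames with pyramidal Lucas-Kanade, e.g. (prev_gray/gray hypothetical):
#   nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
#   flow = nxt - pts        # u_t(v) = [dx, dy] at each vertex
```

Tracking a fixed lattice of vertices rather than detected corners sidesteps the uneven-distribution and tracking-loss problems described above.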
Step 2, perform a first temporal analysis of the mesh optical flow, remove high-frequency components, then compare the magnitude change before and after filtering, and screen out points with large changes as outer points.
Although the original optical flow can reflect the motion of each mesh vertex, besides a certain amount of noise inherent in the computed optical flow itself, the optical flow of some meshes is discontinuous because of moving objects and foreground-background parallax. Referring to fig. 2, when a moving object in the foreground passes through a mesh, the optical flow of the corresponding mesh vertices changes abruptly at that moment, and the position where the discontinuous optical flow appears moves along with the object. The invention performs continuity detection on the original optical flow u_t(v): Kalman filtering is applied to the flow in the time domain, and the before/after difference is compared to judge whether the optical flow at a vertex has an abrupt change, caused for example by a moving object passing through the frame or by a sudden change in parallax. Discontinuous optical flow is screened out before the temporal filtering and the continuous optical flow of the whole is smoothed, so that finally a video can be obtained whose overall picture is stable while objects may still move or the foreground may shake in local areas.
With reference to specific examples, the embodiment of the present invention employs the following discontinuity detection method:
for a stable video, the time-dependent profile of the integrated values of optical flow at fixed pixels should be smooth. Assuming that the camera shake follows Gaussian distribution, the optical flow integrated value of the previous frame (time t-1) is passed
Figure BDA0002479978990000041
(abbreviated herein as R)t-1Similarly, the current frame optical flow cumulative value Rt) Predicting the optical flow of a current frame obeys a Gaussian distribution
Figure BDA0002479978990000042
While the original optical flow obtained from the current frame follows Gaussian distribution N (R)t,Qt). Where t is used to identify the current frame, Pt、QtAre respectively Gaussian distribution
Figure BDA0002479978990000043
N(Rt,Qt) The superscript T denotes the transpose of the matrix. If the camera is moving steadily, the transformation matrix H at the time ttIs a 3 × 3 identity matrix I, i.e. HtI, so the kalman gain K is obtained as:
Figure BDA0002479978990000044
then the best estimated value R 'of the current frame optical flow'tIs composed of
R′t=Rt-1+K(Rt-Rt-1)
In specific implementation, P may be set according to experimental data when t is 0tThen in turn based on the covariance matrix P of the previous framet-1The covariance matrix P of the current frame can be obtainedtComprises the following steps:
Pt=Pt-1-KPt-1
by comparison of R'tAnd RtJudging whether the point is an edge point with discontinuous optical flow or not, and if the difference between the two is more than a certain threshold epsilon, obtaining | R't-RtIf | is greater than ε, the optical flow discontinuity detection template is labeled M at that pointt(v) When equal to 0, it is marked as an outer point, otherwise Mt(v)=1。
In specific implementation, the threshold value epsilon can be set according to specific conditions.
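The discontinuity test above can be sketched per mesh vertex as follows. This is a minimal illustration under simplifying assumptions: scalar variances P and Q stand in for the covariance matrices (a diagonal-covariance shorthand), and the function and variable names are hypothetical, not from the patent.

```python
import numpy as np

def detect_outliers(R_prev, R_curr, P_prev, Q, eps):
    """One Kalman step per mesh vertex with H_t = I, then the
    |R'_t - R_t| > eps test of step 2.

    R_prev, R_curr: accumulated optical flow per vertex, shape (n, 2)
    P_prev, Q:      scalar variances (shorthand for diagonal covariances)
    Returns (R_best, P_curr, M) where M[v] = 0 marks an outer point.
    """
    K = P_prev / (P_prev + Q)                  # Kalman gain with H = I
    R_best = R_prev + K * (R_curr - R_prev)    # best estimate R'_t
    P_curr = P_prev - K * P_prev               # covariance update
    diff = np.linalg.norm(R_best - R_curr, axis=1)
    M = (diff <= eps).astype(int)              # 0 = outer point
    return R_best, P_curr, M

R_prev = np.zeros((3, 2))
R_curr = np.array([[0.1, 0.0], [5.0, 5.0], [0.0, 0.2]])  # vertex 1 jumps
_, _, M = detect_outliers(R_prev, R_curr, P_prev=1.0, Q=1.0, eps=1.0)
# only the vertex with the abrupt flow change is marked as an outer point
```

A vertex whose raw flow jumps far from the Kalman prediction is exactly the "abrupt change" case described above, so it is flagged while smoothly varying vertices pass through.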
Step 3, filter the optical flow of the outer points obtained in step 2 so that it is spatially continuous with the optical flow of the surrounding mesh vertices.
For each mesh vertex v with M_t(v) = 0 from step 2, adaptive-size sliding-window mean filtering is applied. Concretely, the mesh vertices around v with M_t(v) = 1 are counted, and the sliding window is expanded around v until the number of such vertices exceeds a corresponding set threshold k (preferably k = 9 in the embodiment). The optical flow of v is then taken as the mean of the optical flows of the counted mesh vertices. After this process is completed for every outer point, the optimization of the original mesh-vertex optical flow is complete.
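The adaptive-window filling of step 3 can be sketched as follows; a minimal NumPy sketch with hypothetical names (`spatial_fill`), assuming the flow and mask are stored on the vertex lattice.

```python
import numpy as np

def spatial_fill(flow, M, k=9):
    """Adaptive-size sliding-window mean filtering of step 3.

    flow: (rows, cols, 2) optical flow at the mesh vertices
    M:    (rows, cols) mask, 0 = outer point, 1 = inlier
    For each outer point, grow a window centred on it until it contains
    more than k inlier vertices, then replace its flow by their mean.
    """
    rows, cols = M.shape
    out = flow.copy()
    for i, j in zip(*np.where(M == 0)):
        r = 1
        while True:
            i0, i1 = max(0, i - r), min(rows, i + r + 1)
            j0, j1 = max(0, j - r), min(cols, j + r + 1)
            sel = M[i0:i1, j0:j1] == 1
            # stop once enough inliers are inside, or the window covers all
            if sel.sum() > k or (i1 - i0) * (j1 - j0) == rows * cols:
                break
            r += 1
        out[i, j] = flow[i0:i1, j0:j1][sel].mean(axis=0)
    return out
```

Growing the window instead of fixing its size guarantees that even an outer point surrounded by other outer points still gets enough inlier neighbours to average over.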
Step 4, temporally filter the optimized optical flow obtained in step 3, constrained through the mesh vertices so that the output video does not deform strongly, to obtain the motion compensation vector of each mesh vertex.
To compensate a jittery video, in fact the optical flow accumulated value is filtered in the time domain, and the image is then warped according to the interpolation before and after filtering. Mesh-based image warping must consider constraints in several respects so that the warped result both removes the jitter and remains visually pleasing.
The invention formulates the optimization objective E = E1 + E2, where E denotes the error. The first term is the temporal filtering term E1, whose role is image stabilization; the second term is the local constraint term E2, which requires the change of each mesh vertex to be as small as possible, preventing excessive offset during motion compensation.
First, the temporal filtering term E1 smooths the optical flow accumulated value R_t of the t-th frame image in the time domain; the embodiment realizes this with a Gaussian sliding-window method:

E1 = Σ_{c=t0}^{t} w_c ||S_t - S_c||²

where S_t(v) (abbreviated S_t) denotes the optical flow accumulated value at mesh vertex v in the t-th frame image of the output video; the sliding window runs from time t0 to time t (one may take t0 = t - 10); w_c denotes the Gaussian weight of each moment c in the window relative to the current frame; and S_c denotes the optical flow accumulated value at mesh vertex v in the output frame at moment c.
Second, the local constraint term E2 requires the change of each mesh vertex to be as small as possible during motion compensation, in case too large an offset occurs:

E2 = ||S_t - R_t||²
R_t, as in step 2, denotes the optical flow accumulated value at mesh vertex v in the t-th frame image of the input video. To minimize E, the derivative with respect to S_t is set to zero:

∂E/∂S_t = 2 (S_t - R_t) + 2 Σ_{c=t0}^{t-1} w_c (S_t - S_c) = 0

Further, this gives the relationship between the compensated optical flow accumulated value at time t and the accumulated values over the preceding interval [t0, t):

S_t = ( R_t + Σ_{c=t0}^{t-1} w_c S_c ) / ( 1 + Σ_{c=t0}^{t-1} w_c )
more accurate values can be found over multiple iterations. After obtaining the optical flow integrated value of the output t frame image, the motion compensation can be obtained:
Ct=St-Rt
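The iterative solution of the update above can be sketched for a single vertex as follows; a minimal sketch with assumed parameters (window = 10 following the suggestion t0 = t - 10; the Gaussian width σ, the iteration count, and the name `smooth_path` are hypothetical).

```python
import numpy as np

def smooth_path(R, window=10, sigma=3.0, iters=20):
    """Iteratively minimise E = E1 + E2 for one mesh vertex.

    R: (T, 2) accumulated optical flow of the vertex in the input video.
    Repeatedly applies the closed-form update derived above,
        S_t <- (R_t + sum_c w_c S_c) / (1 + sum_c w_c),
    with Gaussian weights w_c over the window [t - window, t).
    """
    T = len(R)
    S = R.astype(float).copy()
    for _ in range(iters):
        S_new = S.copy()
        for t in range(T):
            cs = np.arange(max(0, t - window), t)
            if cs.size == 0:
                continue                       # no preceding frames yet
            w = np.exp(-((t - cs) ** 2) / (2.0 * sigma ** 2))
            S_new[t] = (R[t] + (w[:, None] * S[cs]).sum(axis=0)) / (1.0 + w.sum())
        S = S_new
    return S

# motion compensation of each frame: C_t = S_t - R_t
```

Each sweep is a Jacobi-style relaxation of the optimality condition, so repeating it drives S toward the minimiser of E, as the text's "more accurate values can be found over multiple iterations" indicates.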
Step 5, let the optical flow compensation vector at mesh vertex v be C_t(v) = [dx'(v), dy'(v)]; then the coordinates of the mesh vertex before and after compensation are v = [ih, jw] and v' = [ih + dx'(v), jw + dy'(v)], where h and w denote the height and width of a mesh cell respectively, i and j denote the row and column indices of the mesh vertex, and dx'(v) and dy'(v) denote the offset compensations in the x and y directions. The transformation matrix of each mesh is obtained from its four pairs of vertices, each mesh is warped accordingly to obtain the stabilized image, and the video is output.
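Obtaining the per-mesh transformation matrix from four vertex pairs can be sketched with a direct linear transform; in practice one would typically call e.g. OpenCV's `cv2.getPerspectiveTransform` followed by `cv2.warpPerspective` per cell, but the NumPy sketch below keeps the example self-contained. The function name and the translation-only example are hypothetical.

```python
import numpy as np

def homography_4pt(src, dst):
    """Transformation matrix of one mesh cell from its four vertex pairs
    (direct linear transform on the 4 correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)                 # null-space vector of A
    return H / H[2, 2]

# cell vertices before compensation, and after adding [dx'(v), dy'(v)]
src = np.array([[0, 0], [16, 0], [0, 16], [16, 16]], dtype=float)
dst = src + np.array([1.5, -0.5])            # pure-translation example
H = homography_4pt(src, dst)
p = H @ np.array([8.0, 8.0, 1.0])            # warp an interior point
p = p[:2] / p[2]                             # ≈ [9.5, 7.5]
```

Because each cell gets its own matrix, neighbouring cells can compensate by different amounts, which is what lets the mesh warp absorb parallax that a single global homography cannot.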
Referring to fig. 3, the accumulated values of optical flow before and after image stabilization at a certain mesh vertex in the embodiment of the present invention are shown as a graph.
The method provided by the invention can be implemented as an automatic process using computer software technology. System devices carrying out the method should also be within the scope of protection of the invention.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (8)

1. An unmanned aerial vehicle video image stabilization method based on image grid optical flow filtering is characterized by comprising the following steps:
step 1, performing mesh division on an input video, setting the top points of meshes as optical flow tracking points, and solving the optical flow of each frame of image and the previous frame of image at the top points of the meshes as an original mesh optical flow;
step 2, perform continuity detection on the original mesh optical flow: perform a first temporal analysis, applying Kalman filtering in the time domain to remove high-frequency components, then compare the magnitude change before and after filtering to judge whether the optical flow has an abrupt change, and screen out points with large changes as outer points;
the realization is as follows,
assuming that the camera shake follows a Gaussian distribution, the optical flow accumulated value R_{t-1} of the previous frame predicts that the optical flow of the current frame obeys the Gaussian distribution

N(R_{t-1}, P_t)

while the original optical flow obtained at the current frame obeys the Gaussian distribution N(R_t, Q_t); the current frame optical flow accumulated value is denoted R_t;

where t is used to identify the current frame, P_t and Q_t are respectively the covariance matrices of the Gaussian distributions N(R_{t-1}, P_t) and N(R_t, Q_t), and the superscript T denotes the transpose of the matrix;

if the camera moves steadily, the transformation matrix H_t at time t is the 3 × 3 identity matrix I, and the Kalman gain K is obtained as:

K = P_{t-1} H_t^T (H_t P_{t-1} H_t^T + Q_t)^{-1} = P_{t-1} (P_{t-1} + Q_t)^{-1}

then the best estimate R'_t of the current frame optical flow is

R'_t = R_{t-1} + K (R_t - R_{t-1})

from the covariance matrix P_{t-1} of the previous frame, the covariance matrix P_t of the current frame can be obtained as

P_t = P_{t-1} - K P_{t-1}

R'_t and R_t are compared; if the difference between the two is greater than a preset threshold ε, the corresponding point is marked as an outer point, M_t(v) = 0; otherwise M_t(v) = 1;
Step 3, filtering the optical flows of the outer points in the step 2 to ensure that the optical flows are continuous with the optical flows of the vertexes of the surrounding grids in space to obtain optimized optical flows;
step 4, performing time domain filtering on the optimized optical flow, and limiting the deformation of the output video through the grid vertexes to obtain a motion compensation vector of each grid vertex;
and 5, carrying out grid transformation according to the motion compensation vector to obtain an output image stabilizing video.
2. The unmanned aerial vehicle video image stabilization method based on image grid optical flow filtering of claim 1, wherein: in step 1, optical flow tracking is performed at each mesh vertex with the LK optical flow method to obtain the optical flow vector of the mesh vertex.
3. The unmanned aerial vehicle video image stabilization method based on image grid optical flow filtering of claim 1, wherein: in step 3, adaptive-size sliding-window mean filtering is performed for each mesh vertex v marked M_t(v) = 0 in step 2, implemented as follows:
the mesh vertices around v with M_t(v) = 1 are counted, the sliding window is expanded centred on v until their number exceeds a corresponding set threshold k, and the mean of the optical flows of the counted mesh vertices is then taken as the optical flow of v.
4. The unmanned aerial vehicle video image stabilization method based on image grid optical flow filtering of claim 3, wherein: let k be 9.
5. The unmanned aerial vehicle video image stabilization method based on image grid optical flow filtering of claim 3, wherein: in step 4, to obtain the motion compensation vector of each mesh vertex, the optimization objective E = E1 + E2 is adopted: the temporal filtering term E1 performs the image stabilization, and the second term, the local constraint term E2, requires the change of each mesh vertex to be as small as possible, preventing excessive offset during motion compensation.
6. The unmanned aerial vehicle video image stabilization method based on image grid optical flow filtering of claim 5, wherein: the temporal filtering term E1 smooths the optical flow accumulated value R_t of the t-th frame image in the time domain by a Gaussian sliding-window method.
7. The unmanned aerial vehicle video image stabilization method based on image grid optical flow filtering of claim 5, wherein: the local constraint term E2 is E2 = ||S_t - R_t||², where S_t denotes the optical flow accumulated value at mesh vertex v in the t-th frame image of the output video.
8. The unmanned aerial vehicle video image stabilization method based on image mesh optical flow filtering according to claim 1, 2, 3, 4, 5, 6 or 7, wherein: and 5, obtaining a transformation matrix of each grid through four pairs of vertexes of each grid, respectively transforming the grids to obtain an image after image stabilization, and outputting a video.
CN202010376138.7A 2020-05-07 2020-05-07 Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering Active CN111614965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010376138.7A CN111614965B (en) 2020-05-07 2020-05-07 Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010376138.7A CN111614965B (en) 2020-05-07 2020-05-07 Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering

Publications (2)

Publication Number Publication Date
CN111614965A CN111614965A (en) 2020-09-01
CN111614965B true CN111614965B (en) 2022-02-01

Family

ID=72198380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010376138.7A Active CN111614965B (en) 2020-05-07 2020-05-07 Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering

Country Status (1)

Country Link
CN (1) CN111614965B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705665B (en) * 2021-08-26 2022-09-23 荣耀终端有限公司 Training method of image transformation network model and electronic equipment
CN114363703B (en) * 2022-01-04 2024-01-23 上海哔哩哔哩科技有限公司 Video processing method, device and system
CN116402863B (en) * 2023-06-06 2023-08-11 中铁九局集团第一建设有限公司 Intelligent analysis and early warning system for building construction monitoring data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550678A (en) * 2016-02-03 2016-05-04 武汉大学 Human body motion feature extraction method based on global remarkable edge area
CN106803265A (en) * 2017-01-06 2017-06-06 重庆邮电大学 Multi-object tracking method based on optical flow method and Kalman filtering
CN108509834A (en) * 2018-01-18 2018-09-07 杭州电子科技大学 Graph structure stipulations method based on video features under polynary logarithm Gaussian Profile
CN110796010A (en) * 2019-09-29 2020-02-14 湖北工业大学 Video image stabilization method combining optical flow method and Kalman filtering

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10547871B2 (en) * 2017-05-05 2020-01-28 Disney Enterprises, Inc. Edge-aware spatio-temporal filtering and optical flow estimation in real time


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Grid-based Histogram of Oriented Optical Flow for analyzing movements on video data;A. Solichin et al.;《2015 International Conference on Data and Software Engineering (ICoDSE)》;20160321;第114-119页 *
Research on video stabilization methods based on sparse optical flow; Zhao Zheng; 《CNKI》; 20160501 (No. 03); full text *

Also Published As

Publication number Publication date
CN111614965A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111614965B (en) Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering
CN111539879B (en) Video blind denoising method and device based on deep learning
KR101830804B1 (en) Digital image stabilization method with adaptive filtering
Tai et al. Richardson-lucy deblurring for scenes under a projective motion path
US8428390B2 (en) Generating sharp images, panoramas, and videos from motion-blurred videos
US8368766B2 (en) Video stabilizing method and system using dual-camera system
KR100985805B1 (en) Apparatus and method for image stabilization using adaptive Kalman filter
CN106210448B (en) Video image jitter elimination processing method
US8711938B2 (en) Methods and systems for motion estimation with nonlinear motion-field smoothing
US20080025628A1 (en) Enhancement of Blurred Image Portions
JP4210954B2 (en) Image processing method, image processing method program, recording medium storing image processing method program, and image processing apparatus
CN110944176B (en) Image frame noise reduction method and computer storage medium
CN103761710A (en) Image blind deblurring method based on edge self-adaption
Gal et al. Progress in the restoration of image sequences degraded by atmospheric turbulence
CN106550187A (en) For the apparatus and method of image stabilization
Reeja et al. Real time video denoising
CN108270945B (en) Motion compensation denoising method and device
CN116091868A (en) Online video anti-shake device, online video anti-shake method and learning method thereof
CN107590781B (en) Self-adaptive weighted TGV image deblurring method based on original dual algorithm
Zhang et al. Dehazing with improved heterogeneous atmosphere light estimation and a nonlinear color attenuation prior model
Rawat et al. Adaptive motion smoothening for video stabilization
Deshmukh et al. Moving object detection from images distorted by atmospheric turbulence
Zhang et al. Video superresolution reconstruction using iterative back projection with critical-point filters based image matching
Munezawa et al. Noise removal method for moving images using 3-D and time-domain total variation regularization decomposition
Protter et al. Sparse and redundant representations and motion-estimation-free algorithm for video denoising

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant