CN102088544B - Fast image stabilization method of dynamic scene video with foreground object - Google Patents

Fast image stabilization method of dynamic scene video with foreground object Download PDF

Info

Publication number
CN102088544B
CN102088544B CN2011100392199A CN201110039219A
Authority
CN
China
Prior art keywords
sub
piece
module
block
foreground target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2011100392199A
Other languages
Chinese (zh)
Other versions
CN102088544A (en)
Inventor
何凯
牟聪翀
远中文
卓磊
何海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong sincere advertising media Co., Ltd
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN2011100392199A priority Critical patent/CN102088544B/en
Publication of CN102088544A publication Critical patent/CN102088544A/en
Application granted granted Critical
Publication of CN102088544B publication Critical patent/CN102088544B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of computer image processing and relates to a fast image stabilization method for dynamic-scene video containing foreground objects. The method comprises the following steps: selecting a template in each frame image and dividing it evenly into sub-blocks; computing the pixel absolute difference between each sub-block and the corresponding sub-block in the previous row or previous column, and computing the mean AD of all these differences; computing the pixel absolute differences D^k(i, j) (k = 1, ..., 4) between each sub-block and the sub-blocks above, below, to the left and to the right of it; setting an upper threshold T = AD and a lower threshold T' with T' < T; for each sub-block C(i, j), if the minimum of D^k(i, j) is greater than T or less than T', treating C(i, j) as a foreground-object block; performing this foreground-object separation on all frames; and applying motion compensation to the images to obtain the stabilized sequence. The method greatly reduces the number of blocks participating in the computation and improves both the accuracy and the speed of foreground-object separation.

Description

Fast image stabilization method for dynamic-scene video with foreground targets
Technical field
The invention belongs to the field of computer image processing and can be applied to related areas such as electronic image stabilization of video sequences.
Background technology
In practical engineering, the content captured by a camera is often not just a natural background; it also contains many moving foreground targets, such as pedestrians and vehicles, which introduce local motion inside the video image. During image stabilization these local motions degrade the estimation accuracy of the global motion vector and thereby the final stabilization result. The foreground moving targets therefore need to be extracted before the motion parameters are computed, to remove their interference and improve the accuracy of parameter estimation. Moving-target extraction is a popular topic in image processing and can be divided into two broad classes: extraction under a static background and extraction under a moving background. For shaky video sequences the background is usually changing, so extraction under a moving background is particularly important in practical electronic image stabilization. Moving-target extraction under a dynamic background is one of the difficult problems in the field; it is currently addressed mainly with methods based on supervised classification or on motion vector fields. Supervised-classification methods learn from a large number of samples that have been labeled by class: the input variable values and corresponding target output values of the training examples are obtained, the relation between input and output is learned, and the result is applied to real images to extract the moving targets. Motion-vector-field methods first register adjacent images, apply motion compensation, and then compute the frame difference to extract the foreground moving targets.
Although both classes of methods have their own advantages and have achieved good results, each also has shortcomings. Supervised-classification methods can extract moving targets under a moving background, but they require prior learning and training, and in practice it is very difficult to obtain large numbers of labeled (known-class) samples for training and testing, so these algorithms are significantly limited. Motion-vector-field methods can extract the foreground moving targets fairly accurately, but they require the motion parameters of the entire image to be estimated in advance and the video image to be segmented according to the motion field; the computational load is large, which greatly restricts their range of application. A new target-extraction method is therefore needed that saves as much time as possible while preserving the moving-target extraction quality, to meet the needs of practical engineering.
Summary of the invention
In fast image-stabilization algorithms for dynamic-scene video with foreground targets, existing moving-target extraction methods are relatively complex, have poor real-time performance, and cannot deliver satisfactory results, while many practical engineering applications place high demands on real-time operation; conventional methods therefore degrade subsequent operations such as motion estimation. To solve this problem, the present invention proposes a fast image stabilization method for dynamic-scene video that, while preserving moving-target extraction quality, effectively reduces the number of blocks participating in the computation and increases the processing speed. The technical scheme of the invention is as follows:
A fast image stabilization method for dynamic-scene video with foreground targets, comprising the following steps:
1) For a frame image, select a template and divide it evenly into m × n sub-blocks of size M × M; denote each sub-block C_{i,j} (i = 1, ..., m; j = 1, ..., n), where sub-block C_{i,j} lies in row i and column j;
2) Compute the pixel absolute differences between every sub-block in row i and the corresponding sub-block in row i−1, and between every sub-block in column j and the corresponding sub-block in column j−1, denoted D^row_{i,j} and D^col_{i,j}:

D^row_{i,j} = Σ_{m=1}^{M} Σ_{n=1}^{M} | f(x_{i,j}+m, y_{i,j}+n) − f(x_{i,j}+m, y_{i,j}−M+n) |
D^col_{i,j} = Σ_{m=1}^{M} Σ_{n=1}^{M} | f(x_{i,j}+m, y_{i,j}+n) − f(x_{i,j}−M+m, y_{i,j}+n) |

where (x_{i,j}, y_{i,j}) is the coordinate of the top-left vertex of sub-block C_{i,j}, and f(x, y) is the grey value of the current frame at (x, y);
3) For sub-block C_{i,j}, let the sub-blocks at the four positions above, below, left and right of it be R_u, R_d, R_l, R_r; the pixel absolute differences between C_{i,j} and R_u, R_d, R_l, R_r are then denoted

D^1_{i,j} = D^row_{i,j},   D^2_{i,j} = D^row_{i+1,j},   D^3_{i,j} = D^col_{i,j},   D^4_{i,j} = D^col_{i,j+1};

4) Compute D^k_{i,j} (k = 1, ..., 4) and their mean AD;
5) Set the upper threshold T = AD and select a suitable lower threshold T' (T' < T); for every sub-block C_{i,j} not on the template boundary, if min_k D^k_{i,j} > T or min_k D^k_{i,j} < T', sub-block C_{i,j} is considered a foreground-target block;
6) Separate the foreground targets of all frames with the above method to obtain the remaining blocks used for motion estimation;
7) After the foreground-target separation is complete, apply motion compensation to the image according to the affine transformation parameters obtained from motion estimation, yielding the stabilized image.
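Step 1 can be sketched as follows (a minimal sketch, not the patent's implementation: the function name, the margin value and the even-trimming behavior are assumptions; the patent only requires the template edges to lie more than M pixels inside the image and the template to divide evenly into M × M sub-blocks):

```python
import numpy as np

def divide_template(frame, M=16, margin=17):
    """Coarsely locate a template whose four edges are more than M pixels
    from the image border, trim it so it splits evenly into m x n
    sub-blocks of size M x M, and return the template plus the blocks."""
    h, w = frame.shape
    tpl = frame[margin:h - margin, margin:w - margin]
    m, n = tpl.shape[0] // M, tpl.shape[1] // M
    tpl = tpl[:m * M, :n * M]                        # drop any ragged remainder
    blocks = tpl.reshape(m, M, n, M).swapaxes(1, 2)  # blocks[i, j] is C_{i,j}
    return tpl, blocks

frame = np.zeros((240, 320), dtype=np.uint8)  # resolution used in the experiments
tpl, blocks = divide_template(frame)
print(blocks.shape[:2])  # (12, 17) -- the m x n block grid
```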
Aimed at the problem that foreground moving targets in a dynamic video sequence degrade the estimation accuracy of the global motion vector, the invention proposes a block-absolute-difference method for extracting foreground targets: thresholds are set to pre-judge whether a background block carries enough gradient information for motion-vector estimation, and the global motion parameters are finally obtained with conventional techniques such as solving an overdetermined system of equations. Compared with traditional foreground-target extraction algorithms based on supervised classification or motion vector fields, the improvement is that no inter-frame registration is required in advance, while suitable thresholds determine whether a background block has enough gradient information. The biggest advantage over previous methods is that the number of blocks participating in the computation is greatly reduced, which improves both the accuracy and the speed of foreground-target separation.
Description of drawings
Fig. 1 is the foreground-target extraction result when the threshold criterion min_k D^k_{i,j} > T is used; the black parts are the background blocks so determined.
Fig. 2 is the foreground-target extraction result when the criterion min_k D^k_{i,j} < T' is used; the black parts are the blocks determined to degrade the accuracy and speed of motion-vector estimation.
Fig. 3 is the foreground-target extraction result when the criterion T' ≤ min_k D^k_{i,j} ≤ T is used; the colored parts are the background finally obtained after the foreground targets are separated.
The left, middle and right images of Fig. 4 are respectively the first, second and third of three consecutive frames of an actual traffic-surveillance video.
Fig. 5 is the result of applying the invention to extract the foreground targets from the three consecutive frames of Fig. 4.
Fig. 6 shows the blocks selected for motion-parameter estimation after the invention separates the foreground targets of the three consecutive frames of Fig. 4.
Fig. 7 shows the effect after stabilizing the three consecutive frames of Fig. 4.
Fig. 8 and Fig. 9 show the inter-frame difference results of the three consecutive frames of Fig. 4 before and after stabilization, respectively.
Fig. 10 shows the inter-frame PSNR of 30 consecutive frames before and after stabilization.
Embodiment
The invention is further explained below with reference to the drawings and an embodiment.
The invention provides a foreground-target extraction algorithm that combines block absolute differences with pre-judgment of local blocks; the concrete steps are as follows:
1) First coarsely locate a template f within the whole image, such that each of its four edges (top, bottom, left, right) is more than M pixels from the corresponding image edge, and such that the template divides evenly into m × n sub-blocks of size M × M. Each sub-block is denoted C_{i,j} (i = 1, ..., m; j = 1, ..., n); the row containing C_{i,j} is row i and its column is column j.
2) Compute the pixel absolute difference between every sub-block in row i (or column j) and the corresponding sub-block in row i−1 (or column j−1), denoted D^row_{i,j} (or D^col_{i,j}):

D^row_{i,j} = Σ_{m=1}^{M} Σ_{n=1}^{M} | f(x_{i,j}+m, y_{i,j}+n) − f(x_{i,j}+m, y_{i,j}−M+n) |    (1)
D^col_{i,j} = Σ_{m=1}^{M} Σ_{n=1}^{M} | f(x_{i,j}+m, y_{i,j}+n) − f(x_{i,j}−M+m, y_{i,j}+n) |

where (x_{i,j}, y_{i,j}) is the coordinate of the top-left vertex of the current block C_{i,j}, f(x, y) is the grey value of the current frame at (x, y), and M × M is the block size. For the current block C_{i,j}, let its surrounding blocks be R_u, R_d, R_l, R_r (the positions above, below, left and right, respectively); the pixel absolute differences between C_{i,j} and R_u, R_d, R_l, R_r are then denoted

D^1_{i,j} = D^row_{i,j},   D^2_{i,j} = D^row_{i+1,j}    (2)
D^3_{i,j} = D^col_{i,j},   D^4_{i,j} = D^col_{i,j+1}
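Equations (1) and (2) can be sketched in vectorized form (a sketch under assumptions: 0-based indexing, `block_abs_diffs` and `neighbour_diffs` are hypothetical helper names, and only blocks with all four neighbours inside the template get D1..D4):

```python
import numpy as np

def block_abs_diffs(tpl, M=16):
    """Eq. (1): per-block sums of absolute pixel differences between each
    M x M block and the corresponding block in the previous row (d_row)
    and previous column (d_col) of the block grid."""
    m, n = tpl.shape[0] // M, tpl.shape[1] // M
    b = tpl[:m * M, :n * M].astype(np.int64)
    b = b.reshape(m, M, n, M).swapaxes(1, 2)              # b[i, j] = block C_{i,j}
    d_row = np.abs(b[1:, :] - b[:-1, :]).sum(axis=(2, 3))  # block vs block above
    d_col = np.abs(b[:, 1:] - b[:, :-1]).sum(axis=(2, 3))  # block vs block to the left
    return d_row, d_col

def neighbour_diffs(d_row, d_col, i, j):
    """Eq. (2): (D1, D2, D3, D4) for an interior block (i, j), i.e. its
    absolute differences with the blocks above, below, left and right."""
    return (d_row[i - 1, j], d_row[i, j], d_col[i, j - 1], d_col[i, j])
```

In the text's 1-based notation, D^1 = D^row_{i,j} and D^2 = D^row_{i+1,j}; the 0-based `d_row[i-1, j]` / `d_row[i, j]` pair above plays the same role.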
3) Compute D^k_{i,j} (k = 1, ..., 4) and their mean AD:

AD = 1/((m+1) × (n+1)) Σ_{i=1}^{m+1} Σ_{j=1}^{n+1} ( D^row_{i,j} + D^col_{i,j} )    (3)

Set the threshold T = AD; if min_k D^k_{i,j} > T, the current block is considered a foreground target; otherwise the current block is replaced by a black block, as shown in Fig. 1. Here min denotes the minimum over k.
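Step 3 might look like this in code (a sketch: the normalization of Eq. (3) is approximated by the plain mean over all computed differences, `foreground_mask` is a hypothetical name, and boundary blocks are simply skipped):

```python
import numpy as np

def foreground_mask(d_row, d_col):
    """Mark interior blocks whose minimum neighbour difference exceeds
    T = AD as foreground; all other blocks would be drawn black (Fig. 1).
    d_row[i-1, j] / d_row[i, j] are the diffs of block (i, j) with the
    blocks above/below; d_col[i, j-1] / d_col[i, j] with left/right."""
    AD = (d_row.sum() + d_col.sum()) / (d_row.size + d_col.size)
    m, n = d_row.shape[0] + 1, d_col.shape[1] + 1   # block-grid shape
    mask = np.zeros((m, n), dtype=bool)
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            dmin = min(d_row[i - 1, j], d_row[i, j],
                       d_col[i, j - 1], d_col[i, j])
            mask[i, j] = dmin > AD
    return mask, AD
```

`mask[i, j]` is True for blocks classified as foreground, and AD doubles as the upper threshold T for the later selection step.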
4) If the black region of Fig. 1 were used directly for motion estimation, many blocks would participate in the computation, increasing the cost of motion-vector estimation, and some blocks might yield inaccurate motion vectors. The invention therefore pre-judges the local blocks: a suitable lower threshold T' (T' < T) decides whether the current block carries enough gradient information to estimate a motion vector. When min_k D^k_{i,j} < T', the block is removed, eliminating the blocks in the image that would degrade the accuracy and speed of motion-vector estimation, as shown in Fig. 2. Separating the foreground-target blocks from Fig. 2 leaves the remaining blocks to serve as motion-estimation blocks, as shown in Fig. 3. With the lower threshold T' and upper threshold T set, the qualifying blocks are chosen according to T' ≤ min_k D^k_{i,j} ≤ T; the number N of selected blocks can be adjusted through T'. If N_min < N < N_max (where N_min and N_max are thresholds on N), the block selection ends; otherwise T' is adjusted and the estimation blocks are chosen again.
At this point the foreground-target separation and the selection of motion-estimation blocks are complete. Fig. 3 shows the motion-estimation region obtained with T' = AD/2; almost all foreground targets have been discarded, which guarantees both the accuracy and the speed of motion-vector estimation.
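The pre-judgment and count adjustment of step 4 can be sketched as follows (a sketch: the patent only says T' is adjusted until N_min < N < N_max, so the initial value T' = AD/2 is taken from the experiments and the multiplicative update rule is an assumption):

```python
def select_estimation_blocks(min_diffs, T, n_min, n_max, max_iter=20):
    """Keep blocks whose minimum neighbour difference lies in [T', T];
    raise or lower T' until the kept count N satisfies n_min < N < n_max
    (or the retry budget runs out)."""
    t_lo = T / 2                       # initial guess T' = AD/2
    for _ in range(max_iter):
        kept = [d for d in min_diffs if t_lo <= d <= T]
        if n_min < len(kept) < n_max:
            break
        # too few blocks -> lower T' to admit more; too many -> raise it
        t_lo *= 0.8 if len(kept) <= n_min else 1.25
    return kept, t_lo
```

Here `min_diffs` stands for the per-block values min_k D^k_{i,j} of the non-boundary blocks.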
5) After the foreground-target separation is complete, the affine transformation parameters can be obtained from the motion blocks of Fig. 3 by methods such as Newton iteration or solving an overdetermined system of equations, and motion compensation is applied to the image, yielding the stabilized frame.
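The "solving an overdetermined system" of step 5 can be sketched with linear least squares (an illustrative sketch, not the patent's exact solver; the matched point pairs stand in for the per-block motion vectors):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of dst ~= A @ src + b from matched point pairs.
    Each pair contributes two rows of the overdetermined system X p = y."""
    X, y = [], []
    for (xs, ys), (xd, yd) in zip(src, dst):
        X.append([xs, ys, 1, 0, 0, 0]); y.append(xd)
        X.append([0, 0, 0, xs, ys, 1]); y.append(yd)
    p, *_ = np.linalg.lstsq(np.array(X, float), np.array(y, float), rcond=None)
    return p   # a11, a12, tx, a21, a22, ty

# usage: block centres displaced by a pure translation (tx, ty) = (3, -2)
src = [(0, 0), (16, 0), (0, 16), (16, 16), (32, 32)]
dst = [(x + 3, y - 2) for x, y in src]
p = fit_affine(src, dst)
```

Motion compensation then warps each frame with the inverse of the recovered transform (for example via `cv2.warpAffine`, if OpenCV is available).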
The proposed method was tested in a simulation experiment on an actual traffic-surveillance video; three consecutive frames are shown in Fig. 4, at an image resolution of 320 × 240. The figures show that the video image shakes noticeably, i.e. the background is dynamic, and many vehicles travel through the scene, so there is considerable local motion between frames.
The image is first divided into 16 × 16 blocks and the foreground targets are extracted with the invention; the foreground-target extraction result is shown in Fig. 5.
As Fig. 5 shows, the foreground targets (vehicles) in the video sequence are essentially all extracted; the computed AD (i.e. the threshold T) lies between 900 and 920. If the black region (the background blocks) in the figure were used directly for global motion-parameter estimation, many blocks would participate in the computation, which would greatly increase the cost of block matching and motion-vector estimation. The invention therefore pre-judges the local blocks: before motion-vector estimation, the pre-judgment threshold T' = AD/2 screens the reliability of each block's motion-vector estimate and rejects the blocks that might yield inaccurate motion vectors; only the blocks satisfying T' ≤ min_k D^k_{i,j} ≤ T are selected for motion-parameter estimation, and the extracted blocks are shown in Fig. 6.
Figs. 5 and 6 show that the invention successfully separates the foreground targets in the video sequence; only about 50 blocks in Fig. 6 participate in the motion-parameter estimation, roughly 1/6 of the total number of blocks in the image, which greatly reduces the computational load.
To verify the effectiveness of the selected blocks for motion-vector estimation, 30 consecutive frames of the actual surveillance video were stabilized. Using a template-matching algorithm with initial threshold T_0 = 100, block matching was performed separately on the blocks before extraction, on the black-region blocks (the background blocks) of Fig. 5, and on the blocks remaining after the pre-judgment screening; the experimental results are shown in Table 1.
Table 1: effect of different motion-estimation block selections on the matching result
Table 1 shows that, compared with the original blocks and with the background blocks after foreground-target extraction, the number of blocks remaining after pre-judgment screening is clearly smaller, which greatly reduces the block-matching computation: the average matching time is only about 1/5 of that with the original blocks and about 1/4 of that with the background blocks, while the inter-frame PSNR after compensation is also slightly higher than the results of matching with the original or background blocks. These results show that the block-absolute-difference and local-block pre-judgment method not only greatly reduces the number of blocks used for motion-parameter estimation, shortening the running time, but also rejects the blocks that might yield inaccurate motion vectors, improving the accuracy of template matching.
The actual traffic-surveillance video was stabilized with the invention; the compensated result for the three frames of Fig. 4 is shown in Fig. 7, from which it can be seen that the shake of the video has been successfully removed.
Figs. 8 and 9 show the inter-frame difference results before and after stabilization, respectively. The frame differences before stabilization are pronounced, i.e. there are obvious offsets between adjacent frames, whereas the adjacent frames after compensation essentially coincide, with only some foreground moving targets remaining; a good stabilization result is obtained.
The inter-frame PSNR of the 30 consecutive frames before and after stabilization is shown in Fig. 10. The PSNR between frames of the stabilized sequence is clearly higher than that of the original sequence, by about 7 dB on average, which shows that the invention achieves a good stabilization result on dynamic video sequences with foreground targets.

Claims (1)

1. A fast image stabilization method for dynamic-scene video with foreground targets, comprising the following steps:
1) For a frame image, select a template and divide it evenly into m × n sub-blocks of size M × M; denote each sub-block C_{i,j}, where i = 1, ..., m and j = 1, ..., n; sub-block C_{i,j} lies in row i and column j;
2) Compute the pixel absolute differences between every sub-block in row i and the corresponding sub-block in row i−1, and between every sub-block in column j and the corresponding sub-block in column j−1, denoted D^row_{i,j} and D^col_{i,j}:

D^row_{i,j} = Σ_{m=1}^{M} Σ_{n=1}^{M} | f(x_{i,j}+m, y_{i,j}+n) − f(x_{i,j}+m, y_{i,j}−M+n) |
D^col_{i,j} = Σ_{m=1}^{M} Σ_{n=1}^{M} | f(x_{i,j}+m, y_{i,j}+n) − f(x_{i,j}−M+m, y_{i,j}+n) |

where (x_{i,j}, y_{i,j}) is the coordinate of the top-left vertex of sub-block C_{i,j}, and f(x, y) is the grey value of the current frame at (x, y);
3) For sub-block C_{i,j}, let the sub-blocks at the four positions above, below, left and right of it be R_u, R_d, R_l, R_r; denote the pixel absolute differences between C_{i,j} and R_u, R_d, R_l, R_r as

D^1_{i,j} = D^row_{i,j},   D^2_{i,j} = D^row_{i+1,j},   D^3_{i,j} = D^col_{i,j},   D^4_{i,j} = D^col_{i,j+1};

4) Compute D^k_{i,j} (k = 1, ..., 4) and their mean AD;
5) Set the upper threshold T = AD and select a suitable lower threshold T' less than T; for every sub-block C_{i,j}, if min_k D^k_{i,j} > T or min_k D^k_{i,j} < T', the sub-block C_{i,j} is considered a foreground-target block;
6) Separate the foreground targets of all frames with the above method to obtain the remaining blocks used for motion estimation;
7) After the foreground-target separation is complete, apply motion compensation to the image according to the affine transformation parameters obtained from motion estimation, yielding the stabilized image.
CN2011100392199A 2011-02-16 2011-02-16 Fast image stabilization method of dynamic scene video with foreground object Active CN102088544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100392199A CN102088544B (en) 2011-02-16 2011-02-16 Fast image stabilization method of dynamic scene video with foreground object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100392199A CN102088544B (en) 2011-02-16 2011-02-16 Fast image stabilization method of dynamic scene video with foreground object

Publications (2)

Publication Number Publication Date
CN102088544A CN102088544A (en) 2011-06-08
CN102088544B true CN102088544B (en) 2012-06-06

Family

ID=44100142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100392199A Active CN102088544B (en) 2011-02-16 2011-02-16 Fast image stabilization method of dynamic scene video with foreground object

Country Status (1)

Country Link
CN (1) CN102088544B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9538081B1 (en) 2013-03-14 2017-01-03 Amazon Technologies, Inc. Depth-based image stabilization
CN106127810B (en) * 2016-06-24 2019-08-20 广东紫旭科技有限公司 A kind of the recording and broadcasting system image tracking method and device of the light stream of video macro block angle point
CN106851302B (en) * 2016-12-22 2019-06-25 国网浙江省电力公司杭州供电公司 A kind of Moving Objects from Surveillance Video detection method based on intraframe coding compression domain
CN108269260B (en) * 2016-12-30 2021-08-27 粉迷科技股份有限公司 Dynamic image back removing method, system and computer readable storage device
CN113489896B (en) * 2021-06-25 2023-06-20 中国科学院光电技术研究所 Video image stabilizing method capable of robustly predicting global motion estimation

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4969039A (en) * 1987-07-01 1990-11-06 Nec Corporation Image processing system operable in cooperation with a recording medium
JP2003169319A (en) * 2001-11-30 2003-06-13 Mitsubishi Electric Corp Image-monitoring apparatus
JP2004021888A (en) * 2002-06-20 2004-01-22 Meidensha Corp Object extraction method
TWI295448B (en) * 2005-05-10 2008-04-01 Sunplus Technology Co Ltd Method for object edge detection in macroblock and for deciding quantization scaling factor
CN101022505A (en) * 2007-03-23 2007-08-22 中国科学院光电技术研究所 Mobile target in complex background automatic testing method and device
CN101281650B (en) * 2008-05-05 2010-05-12 北京航空航天大学 Quick global motion estimating method for steadying video
CN101521740A (en) * 2009-04-01 2009-09-02 北京航空航天大学 Real-time athletic estimating method based on multiple dimensioned unchanged characteristic
CN101924874B (en) * 2010-08-20 2011-10-26 北京航空航天大学 Matching block-grading realtime electronic image stabilizing method

Also Published As

Publication number Publication date
CN102088544A (en) 2011-06-08

Similar Documents

Publication Publication Date Title
CN105261037B (en) A kind of moving target detecting method of adaptive complex scene
EP3540637A1 (en) Neural network model training method, device and storage medium for image processing
CN102088544B (en) Fast image stabilization method of dynamic scene video with foreground object
CN102307274B (en) Motion detection method based on edge detection and frame difference
CN101924874B (en) Matching block-grading realtime electronic image stabilizing method
CN105488812A (en) Motion-feature-fused space-time significance detection method
CN111160295A (en) Video pedestrian re-identification method based on region guidance and space-time attention
CN106559605A (en) Digital video digital image stabilization method based on improved block matching algorithm
CN110647836B (en) Robust single-target tracking method based on deep learning
CN105279771B (en) A kind of moving target detecting method based on the modeling of online dynamic background in video
KR101173559B1 (en) Apparatus and method for the automatic segmentation of multiple moving objects from a monocular video sequence
CN105872345A (en) Full-frame electronic image stabilization method based on feature matching
CN102917220B (en) Dynamic background video object extraction based on hexagon search and three-frame background alignment
CN105894443A (en) Method for splicing videos in real time based on SURF (Speeded UP Robust Features) algorithm
CN110910365A (en) Quality evaluation method for multi-exposure fusion image of dynamic scene and static scene simultaneously
KR101125061B1 (en) A Method For Transforming 2D Video To 3D Video By Using LDI Method
CN101710426B (en) Method for tracking depth image
CN105844671B (en) A kind of fast background relief method under the conditions of change illumination
CN117036770A (en) Detection model training and target detection method and system based on cascade attention
CN102426693A (en) Method for converting 2D into 3D based on gradient edge detection algorithm
CN106097259B (en) A kind of Misty Image fast reconstructing method based on transmissivity optimisation technique
CN114841941A (en) Moving target detection algorithm based on depth and color image fusion
CN104751487A (en) Method for detecting movement target based on colored RGB three-pane color-change frame difference
CN115063880A (en) Sampling method based on YOLOv5 and GME
CN113313707A (en) Original image processing method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201120

Address after: China Resources Road, Jiafang City, Chuanjiang Town, Tongzhou District, Nantong City, Jiangsu Province

Patentee after: Nantong sincere advertising media Co., Ltd

Address before: 300072 Tianjin City, Nankai District Wei Jin Road No. 92

Patentee before: Tianjin University

TR01 Transfer of patent right