CN104935832A - Video matting method aiming at depth information - Google Patents


Info

Publication number
CN104935832A
CN104935832A (application CN201510151211.XA; granted publication CN104935832B)
Authority
CN
China
Prior art keywords
frame
value
video
pixel
centerdot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510151211.XA
Other languages
Chinese (zh)
Other versions
CN104935832B (en
Inventor
彭浩宇
王勋
刘春晓
孔丁科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN201510151211.XA priority Critical patent/CN104935832B/en
Publication of CN104935832A publication Critical patent/CN104935832A/en
Application granted granted Critical
Publication of CN104935832B publication Critical patent/CN104935832B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The present invention discloses a video matting method based on depth information. The method comprises the steps of: computing the trimap of each frame; segmenting the video so that inter-frame coherence is maintained within each segment; obtaining the optimal foreground and alpha (transparency) estimates for every pixel in the unknown region of each frame; computing the globally optimal solution for all unknown-region pixels within each segment; and thereby completing the matting of the whole video. The method is suited to fast, efficient foreground-object extraction from video sequences that have inter-frame coherence; it preserves the spatio-temporal consistency of the matte, reduces flicker and abrupt visual changes, and improves matting efficiency.

Description

Video matting method based on depth information
Technical field
The invention belongs to the field of image processing, and in particular relates to a video matting method based on depth information.
Background technology
Image matting is a versatile technique in computer vision that has long attracted the attention of researchers and industry, with many successful practical applications. Extending matting from single images to extracting a target of interest from the image sequence of a video is a deeper line of research; although still at an early stage, its broad application prospects have drawn a growing number of researchers to the field.
Matting a single frame, i.e. computing each pixel's foreground value F, background value B and alpha (transparency) value, is already an under-constrained and computationally complex problem. Video matting applies matting to an image sequence, so the problem is harder still, with several key difficulties to solve: 1. the amount of data to process is huge, so the large number of pixels in a video sequence must be handled efficiently to keep matting fast; 2. video matting must preserve the spatio-temporal consistency of the sequence, reducing flicker and abrupt visual changes in the matte.
At present, existing video matting methods mainly fall into the following classes:
Frame-by-frame matting. These algorithms treat the image sequence of the video as independent image frames and apply an existing single-image matting algorithm to each frame. The approach is convenient and easy to implement, but because every frame is processed independently, the correlation between adjacent frames is not properly taken into account: the alpha values of corresponding pixels can differ between consecutive frames, the inter-frame continuity of the matte cannot be guaranteed, and flicker and abrupt visual changes result.
Three-dimensional volumetric matting. These algorithms regard the video sequence as a three-dimensional volume and matte the whole volume at once. They usually need several matting passes to reach an acceptable result, because the preliminary matte is not accurate enough and continued optimization is needed for a better result; matting efficiency is therefore low.
Successive-frame matting. These methods likewise apply a mature single-image matting algorithm, while using the matting result of adjacent frames as an inter-frame continuity constraint in the matting step of the current frame. Because the correlation between the preceding and following frames is taken into account, relatively good matting results can be obtained; but since single-frame matting is applied to every frame, these methods take considerable time.
The present invention exploits the inter-frame coherence of video to design a fast foreground and alpha estimation and optimization method for extracting the foreground target of interest from a video sequence. The method uses depth information for automatic trimap generation and video segmentation, performs fast foreground and alpha estimation, and carries out both intra-frame optimization and inter-frame bundle optimization, finally obtaining the desired matte.
Summary of the invention
The object of the invention is to solve the above-mentioned problems of the field by providing a fast matting method for video image sequences, suited to fast, efficient extraction of the foreground target from a video sequence with inter-frame coherence, comprising the following steps:
Compute the trimap of each frame, dividing the frame into a foreground region, a background region and the unknown region between them; segment the video so that inter-frame coherence is maintained within each segment; obtain the optimal foreground and alpha estimates for every pixel in the unknown region of each frame; compute the globally optimal solution for all unknown-region pixels in the segment; complete the matting of the whole video.
Preferably, each frame is divided into the foreground region, the background region and the unknown region between them by the following steps:
1) select a depth threshold and binarize the depth map against it, taking the region whose depth is below the threshold as foreground;
2) apply morphological erosion to the binary foreground region; the eroded region is the definite foreground region R_F;
3) apply morphological dilation to the binary foreground region and take the complement; the resulting region is the definite background region R_B;
4) the unknown region R_U lies between the definite foreground and background regions; compute its area S_U.
Preferably, the video is segmented so that inter-frame coherence is maintained within each segment, as follows:
Compute the overlap ratio of the unknown regions of consecutive frames, P_U = S(R_U^t ∩ R_U^{t-1}) / max(S_U^t, S_U^{t-1}), and select the frames with P_U < 0.8 as key frames Fr_key. The image sequence between a key frame Fr_key,i and the next key frame Fr_key,i+1 is grouped into one segment Seg[Fr_key,i], where Fr_t, t = 1, 2, ..., n are the ordinary frames between the key frames.
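The overlap test and the grouping into segments can be sketched as follows; representing each unknown region as a boolean mask is an assumption of this sketch, and the 0.8 threshold is the value given in the text:

```python
import numpy as np

def unknown_overlap_ratio(r_u_prev, r_u_curr):
    """P_U = area(R_U^t ∩ R_U^{t-1}) / max(S_U^t, S_U^{t-1})."""
    inter = np.logical_and(r_u_prev, r_u_curr).sum()
    denom = max(r_u_prev.sum(), r_u_curr.sum())
    return inter / denom if denom else 1.0

def segment_video(unknown_masks, threshold=0.8):
    """Group frame indices into segments: a new segment starts at every key frame,
    i.e. whenever the unknown-region overlap with the previous frame drops below
    the threshold."""
    segments, current = [], [0]
    for t in range(1, len(unknown_masks)):
        if unknown_overlap_ratio(unknown_masks[t - 1], unknown_masks[t]) < threshold:
            segments.append(current)   # close the current Seg[...] at the key frame
            current = [t]
        else:
            current.append(t)
    segments.append(current)
    return segments
```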
Preferably, the optimal foreground and alpha estimates for each pixel in the unknown region of each frame are obtained as follows: apply closed-form matting to the first and last frames of the video segment; solve each pixel's motion parameters from the optical-flow principle; estimate the unknown-region pixels of each frame frame by frame using inter-frame coherence; and optimize the parameters estimated in each frame with simulated annealing.
Further, the closed-form matting of the first and last frames of the video segment is realized as follows: for the key frame Fr_key,i in Seg[Fr_key,i], the classical Closed-Form Matting method is used to obtain the foreground, background and alpha values (F, B, α) of every pixel in its unknown region R_U.
Further, solving each pixel's motion parameters from the optical-flow principle is realized as follows:
Let I be the grey value of the image and D its depth value. By the basic assumption of the optical-flow method, every pixel satisfies the two equations
I_x·u + I_y·v + I_t = 0 (1)
D_x·u + D_y·v + D_t = 0 (2)
where I_x, I_y, I_t and D_x, D_y, D_t are the partial derivatives of the grey value and the depth value with respect to x, y and time t, all of which can be computed directly from the colour and depth maps of the frame sequence, and (u, v) are the pixel's velocity components along x and y. Solving the two equations yields each pixel's motion parameters, i.e. its speed magnitude and direction of motion, as shown in Fig. 2.
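Equations (1) and (2) form a 2×2 linear system in (u, v) at each pixel; a sketch of solving it with NumPy (the derivative values are illustrative inputs, and computing them by finite differences from the colour and depth maps is omitted):

```python
import numpy as np

def pixel_motion(ix, iy, it, dx, dy, dt):
    """Solve  I_x*u + I_y*v = -I_t  and  D_x*u + D_y*v = -D_t  for (u, v),
    then return the speed magnitude and direction as described in the text."""
    A = np.array([[ix, iy], [dx, dy]], dtype=float)
    b = -np.array([it, dt], dtype=float)
    u, v = np.linalg.solve(A, b)        # requires the grey and depth gradients to be non-parallel
    speed = float(np.hypot(u, v))
    direction = float(np.arctan2(v, u))
    return u, v, speed, direction
```

Using both the grey-value and the depth-value constraints makes the system determined at a single pixel, which is what allows a per-pixel solution instead of the usual windowed least squares.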
Further, estimating the unknown-region pixels of each frame frame by frame using inter-frame coherence is realized as follows:
For the k-th pixel p_k^t(x, y) in the unknown region R_U^t of the current frame Fr_t, use its motion parameters to compute its position p_k^{t-1}(x', y') in Fr_{t-1}. Use the background value B_k^{t-1}(x, y) at (x, y) of Fr_{t-1} as the background estimate B̄ of p_k^t(x, y), and use the known values in the 3×3 neighbourhood of p_k^{t-1}(x', y') to estimate the foreground value F̄ and the alpha value ᾱ of p_k^t(x, y):

B̄ = B_k^{t-1}(x, y)
F̄ = Σ_{i,j ∈ [-1,1]} λ_{i,j} · F_k^{t-1}(x' + i, y' + j)
ᾱ = Σ_{i,j ∈ [-1,1]} λ_{i,j} · α_k^{t-1}(x' + i, y' + j)

λ_{3×3} = | 0.05  0.1  0.05 |
          | 0.1   0.4  0.1  |
          | 0.05  0.1  0.05 |
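The three estimates above amount to reading the previous frame's background at the same position and taking λ-weighted 3×3 averages of foreground and alpha around the back-traced position; a minimal NumPy sketch (the array layout and the assumption that (x', y') is not on the image border are assumptions of this sketch):

```python
import numpy as np

# The 3x3 weight kernel from the text; its entries sum to 1.
LAMBDA = np.array([[0.05, 0.1, 0.05],
                   [0.1,  0.4, 0.1 ],
                   [0.05, 0.1, 0.05]])

def estimate_from_prev_frame(F_prev, alpha_prev, B_prev, x, y, xp, yp):
    """B-bar from the same position (x, y) in the previous frame; F-bar and
    alpha-bar as LAMBDA-weighted 3x3 averages around the back-traced (x', y')."""
    B_bar = B_prev[y, x]
    win = np.s_[yp - 1 : yp + 2, xp - 1 : xp + 2]   # assumes (x', y') is interior
    F_bar = float((LAMBDA * F_prev[win]).sum())
    a_bar = float((LAMBDA * alpha_prev[win]).sum())
    return B_bar, F_bar, a_bar
```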
Preferably, optimizing the parameters estimated in each frame with simulated annealing is realized as follows:
Assuming the background colour value B̄ of the pixel p_k^t(x, y) is fixed, simulated annealing is used to optimize the foreground value F̄ and the alpha value ᾱ.
The solution set S used by the simulated-annealing optimization is

S = { (α*, F*) | α* = ᾱ ± i·Δσ_α, F* = F̄ ± j·Δσ_F; i, j = 0, 1, ..., N }

where Δσ_α = σ_α/(3N) and Δσ_F = σ_F/(3N); σ_F and σ_α are respectively the variances of the foreground and alpha values in the 3×3 neighbourhood of p_k^{t-1}(x', y'); N is a constant that controls the step size.
The evaluation function C(S) used by the simulated-annealing optimization is

C(S) = β_1·‖α*·F* + (1 − α*)·B̄ − Color(x, y)‖ + β_2·‖α* − ᾱ‖ + β_3·‖F* − F̄‖

where β_1, β_2, β_3 are constant weights, Color(x, y) is the RGB colour vector of the pixel, B̄, F̄ and ᾱ are the initial background, foreground and alpha estimates, and F* and α* are the foreground and alpha values of the current iteration.
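A minimal sketch of such an annealing loop over a discrete solution set built from step sizes Δσ, minimizing a cost of the same shape as the evaluation function. The temperature schedule, the β weights, and the use of a scalar grey-level colour instead of an RGB vector are illustrative assumptions of this sketch:

```python
import math
import random

def anneal(a0, F0, B, color, sigma_a, sigma_F, N=5,
           betas=(1.0, 0.1, 0.1), T0=1.0, cooling=0.9, iters_per_T=20, T_min=1e-3):
    """Simulated annealing over S = {(a0 + i*da, F0 + j*dF) : i, j in [-N, N]},
    minimizing C = b1*|a*F + (1-a)*B - color| + b2*|a - a0| + b3*|F - F0|."""
    da, dF = sigma_a / (3 * N), sigma_F / (3 * N)
    b1, b2, b3 = betas

    def cost(a, F):
        return (b1 * abs(a * F + (1 - a) * B - color)
                + b2 * abs(a - a0) + b3 * abs(F - F0))

    best = cur = (a0, F0)
    T = T0
    while T > T_min:
        for _ in range(iters_per_T):
            i, j = random.randint(-N, N), random.randint(-N, N)
            cand = (a0 + i * da, F0 + j * dF)        # candidate drawn from S
            d = cost(*cand) - cost(*cur)
            if d < 0 or random.random() < math.exp(-d / T):
                cur = cand                           # accept downhill, or uphill with prob. exp(-d/T)
            if cost(*cur) < cost(*best):
                best = cur
        T *= cooling                                 # cool the temperature toward 0
    return best
```

Because the best solution found so far is tracked separately, the result can never be worse than the initial estimate (ᾱ, F̄).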
Preferably, the globally optimal solution for all unknown-region pixels in the video segment is obtained by minimizing the following energy:

E = Σ_{t ∈ [Fr_key, Fr_m), k ∈ R_U^m} ε_t · |Δα_k^{t-1→t} · ΔF_k^{t-1→t}| + Σ_{t' ∈ (Fr_m, Fr_n], k ∈ R_U^m} ε_{t'} · |Δα_k^{t'→t'-1} · ΔF_k^{t'→t'-1}| + Σ_{k ∈ R_U^m} ε_m · |(α_k^m − α_k^{m'}) · (F_k^m − F_k^{m'})|

where Fr_m is the image frame of step 3; Δα and ΔF denote the frame-to-frame differences of the alpha and foreground values of the pending pixels in the unknown region R_U^m of Fr_m under forward (respectively backward) derivation; F_k^m, F_k^{m'}, α_k^m and α_k^{m'} are the foreground and alpha values of the k-th pending pixel of Fr_m obtained by forward (respectively backward) derivation; the ε_t are coefficients following a normal distribution that peaks at Fr_m; N is a control constant.
The present invention has the following beneficial effects: it is suited to fast, efficient extraction of the foreground target from a video sequence with inter-frame coherence; it preserves the spatio-temporal consistency of the matte, reduces flicker and abrupt visual changes, and improves matting efficiency.
Brief description of the drawings
Fig. 1 is the flow chart of the video matting method based on depth information;
Fig. 2 shows the trimaps of video-segment frames and the inter-frame motion of unknown-region pixels.
Detailed description
The technical scheme of the invention is further described below in conjunction with a specific embodiment and with reference to the accompanying drawings:
Embodiment 1: a depth camera (such as the Microsoft Kinect) captures the video, yielding an image sequence with depth information, i.e. each image carries a Depth value in addition to its Color value.
The matting method of the invention is applied to the parsed image sequence according to the following steps:
(1) compute the trimap of every frame in the image sequence;
(2) key-frame selection and video segmentation: frames for which the overlap ratio P_U between the unknown regions of consecutive frames satisfies P_U < 0.8 are key frames Fr_key; the image sequence between a key frame Fr_key,i and the next key frame Fr_key,i+1 is grouped into one segment Seg[Fr_key,i];
(3) compute, frame by frame, the foreground, background and alpha values (F, B, α) of the pixels in the unknown region R_U of every frame in the video segment. The simulated-annealing optimization flow used in the estimation is:
1. initialization: set the initial temperature T, the initial value C(x, y), and the number of iterations L per temperature value;
2. for k = 1, ..., L, perform steps 3 to 6;
3. search the solution space for a new solution (α', F'), (α', F') ∈ S;
4. compute the increment Δt' = Cost(α', F') − C(x, y), where Cost(α', F') = α'·F' + (1 − α')·B̄;
5. if Δt' < 0, accept (α', F') as the current solution; otherwise accept (α', F') as the current solution with probability exp(−Δt'/T);
6. if the termination condition is met, take the current solution as the optimal solution and stop. The termination condition is that several consecutive new solutions are all rejected;
7. decrease T toward 0 and return to step 2.
(4) inter-frame bundle optimization. The bundle optimization is carried out by solving the following equation with gradient descent:

(F', α') = argmin_{α_k^t, F_k^t} ( Σ_{t ∈ [Fr_key, Fr_m), k ∈ R_U^m} ε_t · |Δα_k^{t-1→t} · ΔF_k^{t-1→t}| + Σ_{t' ∈ (Fr_m, Fr_n], k ∈ R_U^m} ε_{t'} · |Δα_k^{t'→t'-1} · ΔF_k^{t'→t'-1}| + Σ_{k ∈ R_U^m} ε_m · |(α_k^m − α_k^{m'}) · (F_k^m − F_k^{m'})| )
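A generic sketch of such a gradient-descent minimization, with the gradient estimated by forward differences. The energy here is a stand-in scalar function supplied by the caller; the patent's E couples the (α, F) values of all pending pixels of the segment, which would all be collected into the parameter vector:

```python
import numpy as np

def gradient_descent(energy, x0, lr=0.05, steps=200, eps=1e-6):
    """Minimize a scalar energy E(x) over a parameter vector x by
    forward-difference gradient descent; x collects the (alpha, F)
    unknowns being bundled."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        e0 = energy(x)
        grad = np.empty_like(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            grad[i] = (energy(xp) - e0) / eps   # forward-difference partial derivative
        x -= lr * grad                          # descend along the estimated gradient
    return x
```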
(5) composite the globally optimal (F', α') values of all video segments with the essentially unchanged background values, completing the matting of the whole video.
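Per pixel, this final step is the standard compositing equation C = α·F + (1 − α)·B; a minimal sketch (the array shapes, and passing a replacement background in place of the original one, are assumptions):

```python
import numpy as np

def composite(F, alpha, background):
    """C = alpha * F + (1 - alpha) * B, applied per pixel and per colour channel."""
    a = alpha[..., None] if F.ndim == 3 else alpha   # broadcast alpha over channels
    return a * F + (1.0 - a) * background
```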
Compared with the background art, the innovations of the invention are:
1) video segmentation: the trimap is computed automatically from the depth information, and the video is segmented according to the overlap of the unknown regions of adjacent frames, maintaining inter-frame coherence within each segment to the greatest extent;
2) single-frame matting plus inter-frame propagation: within a video segment, single-frame matting is applied only to the first and last frames; the remaining frames are matted by estimating and optimizing the foreground and alpha values from the motion parameters, which improves matting efficiency;
3) a two-stage optimization flow: within each frame, simulated annealing is used to select the optimal foreground colour and alpha values; across the frames of a video segment, bundle optimization keeps the foreground and alpha values spatio-temporally consistent, making the final matte more robust and reliable.
The above embodiment is intended to explain and illustrate the invention rather than to limit it; any modification or change made to the invention within the spirit of the invention and the protection scope of the claims falls within the protection scope of the invention.

Claims (9)

1. A video matting method based on depth information, characterized by comprising the following steps:
S01: compute the trimap of every frame;
S02: segment the video so that inter-frame coherence is maintained within each video segment;
S03: obtain the optimal foreground and alpha estimates for every pixel in the unknown region of each frame;
S04: compute the globally optimal solution for all unknown-region pixels in the video segment;
S05: complete the matting of the whole video.
2. The video matting method based on depth information according to claim 1, characterized in that the trimap of step S01 divides the image into three sub-regions: the definite foreground region R_F, the definite background region R_B, and the unknown region R_U between them. The method computes the trimap automatically from the depth information:
1) select a depth threshold and binarize the depth map against it, taking the region whose depth is below the threshold as foreground;
2) apply morphological erosion to the binary foreground region; the eroded region is the definite foreground region R_F;
3) apply morphological dilation to the binary foreground region and take the complement; the resulting region is the definite background region R_B;
4) the unknown region R_U lies between the definite foreground and background regions; compute its area S_U.
3. The video matting method based on depth information according to claim 1, characterized in that step S02 comprises: computing the overlap ratio P_U = S(R_U^t ∩ R_U^{t-1}) / max(S_U^t, S_U^{t-1}) between the unknown regions of consecutive frames, and selecting the frames with P_U < 0.8 as key frames Fr_key; the image sequence between a key frame Fr_key,i and the next key frame Fr_key,i+1 is grouped into one segment Seg[Fr_key,i], where Fr_t, t = 1, 2, ..., n are the ordinary frames between the key frames.
4. The video matting method based on depth information according to claim 1, characterized in that step S03 comprises: applying closed-form matting to the first and last frames of the video segment; solving each pixel's motion parameters from the optical-flow principle; estimating the unknown-region pixels of each frame frame by frame using inter-frame coherence; and optimizing the parameters estimated in each frame with simulated annealing.
5. The video matting method based on depth information according to claim 4, characterized in that the closed-form matting of the first and last frames of the video segment is realized as follows: for the key frame Fr_key,i in Seg[Fr_key,i], the classical Closed-Form Matting method is used to obtain the foreground, background and alpha values (F, B, α) of every pixel in its unknown region R_U.
6. The video matting method based on depth information according to claim 4, characterized in that solving each pixel's motion parameters from the optical-flow principle is realized as follows:
Let I be the grey value of the image and D its depth value. By the basic assumption of the optical-flow method, every pixel satisfies the two equations
I_x·u + I_y·v + I_t = 0 (1)
D_x·u + D_y·v + D_t = 0 (2)
where I_x, I_y, I_t and D_x, D_y, D_t are the partial derivatives of the grey value and the depth value with respect to x, y and time t, all of which can be computed directly from the colour and depth maps of the frame sequence, and (u, v) are the pixel's velocity components along x and y. Solving the two equations yields each pixel's motion parameters, i.e. its speed magnitude and direction of motion, as shown in Fig. 2.
7. The video matting method based on depth information according to claim 4, characterized in that estimating the unknown-region pixels of each frame frame by frame using inter-frame coherence is realized as follows:
For the k-th pixel p_k^t(x, y) in the unknown region R_U^t of the current frame Fr_t, use its motion parameters to compute its position p_k^{t-1}(x', y') in Fr_{t-1}. Use the background value B_k^{t-1}(x, y) at (x, y) of Fr_{t-1} as the background estimate B̄ of p_k^t(x, y), and use the known values in the 3×3 neighbourhood of p_k^{t-1}(x', y') to estimate the foreground value F̄ and the alpha value ᾱ of p_k^t(x, y):

B̄ = B_k^{t-1}(x, y)
F̄ = Σ_{i,j ∈ [-1,1]} λ_{i,j} · F_k^{t-1}(x' + i, y' + j)
ᾱ = Σ_{i,j ∈ [-1,1]} λ_{i,j} · α_k^{t-1}(x' + i, y' + j)

λ_{3×3} = | 0.05  0.1  0.05 |
          | 0.1   0.4  0.1  |
          | 0.05  0.1  0.05 |
8. The video matting method based on depth information according to claim 4, characterized in that optimizing the parameters estimated in each frame with simulated annealing is realized as follows:
Assuming the background colour value B̄ of the pixel p_k^t(x, y) is fixed, simulated annealing is used to optimize the foreground value F̄ and the alpha value ᾱ.
The solution set S used by the simulated-annealing optimization is

S = { (α*, F*) | α* = ᾱ ± i·Δσ_α, F* = F̄ ± j·Δσ_F; i, j = 0, 1, ..., N }

where Δσ_α = σ_α/(3N) and Δσ_F = σ_F/(3N); σ_F and σ_α are respectively the variances of the foreground and alpha values in the 3×3 neighbourhood of p_k^{t-1}(x', y'); N is a constant that controls the step size.
The evaluation function C(S) used by the simulated-annealing optimization is

C(S) = β_1·‖α*·F* + (1 − α*)·B̄ − Color(x, y)‖ + β_2·‖α* − ᾱ‖ + β_3·‖F* − F̄‖

where β_1, β_2, β_3 are constant weights, Color(x, y) is the RGB colour vector of the pixel, B̄, F̄ and ᾱ are the initial background, foreground and alpha estimates, and F* and α* are the foreground and alpha values of the current iteration.
9. The video matting method based on depth information according to claim 4, characterized in that step S04 is carried out by minimizing the following energy:

E = Σ_{t ∈ [Fr_key, Fr_m), k ∈ R_U^m} ε_t · |Δα_k^{t-1→t} · ΔF_k^{t-1→t}| + Σ_{t' ∈ (Fr_m, Fr_n], k ∈ R_U^m} ε_{t'} · |Δα_k^{t'→t'-1} · ΔF_k^{t'→t'-1}| + Σ_{k ∈ R_U^m} ε_m · |(α_k^m − α_k^{m'}) · (F_k^m − F_k^{m'})|

where Fr_m is the image frame of step S03; Δα and ΔF denote the frame-to-frame differences of the alpha and foreground values of the pending pixels in the unknown region R_U^m of Fr_m under forward (respectively backward) derivation; F_k^m, F_k^{m'}, α_k^m and α_k^{m'} are the foreground and alpha values of the k-th pending pixel of Fr_m obtained by forward (respectively backward) derivation; the ε_t are coefficients following a normal distribution that peaks at Fr_m; N is a control constant.
CN201510151211.XA 2015-03-31 2015-03-31 Video matting method based on depth information Active CN104935832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510151211.XA CN104935832B (en) 2015-03-31 2015-03-31 Video matting method based on depth information

Publications (2)

Publication Number Publication Date
CN104935832A true CN104935832A (en) 2015-09-23
CN104935832B CN104935832B (en) 2019-07-12

Family

ID=54122773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510151211.XA Active CN104935832B (en) 2015-03-31 2015-03-31 Video matting method based on depth information

Country Status (1)

Country Link
CN (1) CN104935832B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673400A (en) * 2008-09-08 2010-03-17 索尼株式会社 Image processing apparatus, method, and program
US20140003719A1 (en) * 2012-06-29 2014-01-02 Xue Bai Adaptive Trimap Propagation for Video Matting

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
张约伦, "基于Kinect的抠像算法研究" (Research on Kinect-based matting algorithms), 《中国优秀硕士学位论文全文数据库》 (China Masters' Theses Full-text Database) *
彭浩浩, "视频抠图算法的研究" (Research on video matting algorithms), 《中国优秀硕士学位论文全文数据库》 (China Masters' Theses Full-text Database) *
李闻, "一种鲁棒视频抠图算法" (A robust video matting algorithm), 《计算机应用研究》 (Application Research of Computers) *
黄睿, 王翔, "改进的自然图像鲁棒抠图算法" (An improved robust matting algorithm for natural images), 《计算机工程与应用》 (Computer Engineering and Applications) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204567B (en) * 2016-07-05 2019-01-29 华南理工大学 A kind of natural background video matting method
CN106204567A (en) * 2016-07-05 2016-12-07 华南理工大学 A kind of natural background video matting method
CN106331533A (en) * 2016-08-10 2017-01-11 深圳市企拍文化科技有限公司 Method for adding LOGO in video
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Background-blurring method and device and electronic installation based on the depth of field
CN107018322A (en) * 2017-03-09 2017-08-04 广东欧珀移动通信有限公司 Control method, control device and the electronic installation of rotating camera assisted drawing
CN106993112B (en) * 2017-03-09 2020-01-10 Oppo广东移动通信有限公司 Background blurring method and device based on depth of field and electronic device
CN107133964A (en) * 2017-06-01 2017-09-05 江苏火米互动科技有限公司 A kind of stingy image space method based on Kinect
CN107133964B (en) * 2017-06-01 2020-04-24 江苏火米互动科技有限公司 Image matting method based on Kinect
CN107481261B (en) * 2017-07-31 2020-06-16 中国科学院长春光学精密机械与物理研究所 Color video matting method based on depth foreground tracking
CN107481261A (en) * 2017-07-31 2017-12-15 中国科学院长春光学精密机械与物理研究所 A kind of color video based on the tracking of depth prospect scratches drawing method
CN108154086A (en) * 2017-12-06 2018-06-12 北京奇艺世纪科技有限公司 A kind of image extraction method, device and electronic equipment
WO2019114571A1 (en) * 2017-12-11 2019-06-20 腾讯科技(深圳)有限公司 Image processing method and related device
US11200680B2 (en) 2017-12-11 2021-12-14 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus
CN108537815A (en) * 2018-04-17 2018-09-14 芜湖岭上信息科技有限公司 A kind of video image foreground segmentation method and device
CN108668169A (en) * 2018-06-01 2018-10-16 北京市商汤科技开发有限公司 Image information processing method and device, storage medium
WO2020063321A1 (en) * 2018-09-26 2020-04-02 惠州学院 Video processing method based on semantic analysis and device
CN110111342A (en) * 2019-04-30 2019-08-09 贵州民族大学 A kind of optimum option method and device of stingy nomography
WO2022227689A1 (en) * 2021-04-28 2022-11-03 北京达佳互联信息技术有限公司 Video processing method and apparatus
CN113610865A (en) * 2021-07-27 2021-11-05 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113610865B (en) * 2021-07-27 2024-03-29 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN104935832B (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN104935832A (en) Video matting method aiming at depth information
US10540590B2 (en) Method for generating spatial-temporally consistent depth map sequences based on convolution neural networks
Kim et al. Video deraining and desnowing using temporal correlation and low-rank matrix completion
Bai et al. Geodesic matting: A framework for fast interactive image and video segmentation and matting
US9288458B1 (en) Fast digital image de-hazing methods for real-time video processing
Mozerov Constrained optical flow estimation as a matching problem
EP2595116A1 (en) Method for generating depth maps for converting moving 2d images to 3d
Gonzalez et al. Plade-net: Towards pixel-level accuracy for self-supervised single-view depth estimation with neural positional encoding and distilled matting loss
WO2018119808A1 (en) Stereo video generation method based on 3d convolutional neural network
Yang et al. All-in-focus synthetic aperture imaging
Lin et al. Detecting moving objects using a camera on a moving platform
US9661307B1 (en) Depth map generation using motion cues for conversion of monoscopic visual content to stereoscopic 3D
CN103402098A (en) Video frame interpolation method based on image interpolation
US20150195510A1 (en) Method of integrating binocular stereo video scenes with maintaining time consistency
CN103458261A (en) Video scene variation detection method based on stereoscopic vision
Gal et al. Progress in the restoration of image sequences degraded by atmospheric turbulence
CN104966274A (en) Local fuzzy recovery method employing image detection and area extraction
Yang et al. Consistent depth maps recovery from a trinocular video sequence
CN101945299B (en) Camera-equipment-array based dynamic scene depth restoring method
US20060204104A1 (en) Image processing method, image processing apparatus, program and recording medium
CN104159098B (en) The translucent edge extracting method of time domain consistence of a kind of video
WO2013107833A1 (en) Method and device for generating a motion field for a video sequence
Zhang et al. Dehazing with improved heterogeneous atmosphere light estimation and a nonlinear color attenuation prior model
Chung et al. Video object extraction via MRF-based contour tracking
Bhutani et al. Unsupervised Depth and Confidence Prediction from Monocular Images using Bayesian Inference

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant