CN101930614A - Drawing rendering method based on video sub-layer - Google Patents

Drawing rendering method based on video sub-layer

Info

Publication number
CN101930614A
CN101930614A (application CN 201010250063 A; granted as CN 101930614 B)
Authority
CN
China
Prior art keywords
frame
video
layering
layer
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010250063
Other languages
Chinese (zh)
Other versions
CN101930614B (en)
Inventor
黄华 (Huang Hua)
张磊 (Zhang Lei)
付田楠 (Fu Tiannan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN2010102500634A priority Critical patent/CN101930614B/en
Publication of CN101930614A publication Critical patent/CN101930614A/en
Application granted granted Critical
Publication of CN101930614B publication Critical patent/CN101930614B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a painting rendering method based on video layering, comprising the following steps: using a video-layering method from computer vision, the input video sequence is decomposed into a layered representation according to parameters such as color and motion, and a stylized painting rendering is then performed on each layer. Unlike the traditional approach of placing brush strokes directly on every frame of the video, the invention places the corresponding strokes on the layers: the strokes are optimized and propagated across the whole sequence according to the color, motion and other parameters of each layer, so that flicker is greatly reduced and a stylized painted video with better inter-frame continuity is generated. By placing strokes of preset styles on different layers, the method also makes it convenient to generate videos in multiple painting styles, creating stylized rendering results with richer artistic effects.

Description

Painting rendering method based on video layering
Technical field
The present invention relates to a painting rendering method based on video layering, and specifically to a scribble-based video layering and stylized painting rendering method.
Background art
With the development of computers, multimedia and digital entertainment have become increasingly popular, and stylized computer rendering has gradually become a research focus. Video, a common multimedia form, carries a large amount of information and has strong expressive power, so stylized video rendering has received wide attention. Stylized painting of a single image is a mature technique, but simply applying a single-image rendering method to every frame of a video causes severe visual flicker. How to reduce flicker and improve inter-frame continuity is therefore the key problem of stylized video painting. Stylized painting based on video layering solves the flicker problem effectively: brush strokes are arranged within the different layers of the video, producing consistent rendering and markedly better inter-frame continuity.
Traditional layering techniques in computer vision impose strict constraints on the motion model of the scene, typically handling only simple motions such as affine or projective transforms, and cope poorly with the complex motions of everyday footage; a scribble-based layering approach is not tied to a specific motion model and can handle more types of video.
Traditional stylized video painting usually places brush strokes directly on each frame; the strokes lack continuous transitions, which produces severe visual flicker. A layering-based painting method instead arranges the strokes within each layer according to that layer's motion, so the stroke parameters adapt to the motion of objects in the scene, generating consistent painting and reducing flicker.
Video layering in traditional computer vision constrains the motion model strictly and is therefore ill-suited to the general videos that stylized painting must handle. Handling more videos requires relaxing the motion constraints appropriately, which usually requires adding some extra user interaction to guide the layering.
Because traditional stylized video painting places strokes independently on each frame, the necessary transitions between consecutive frames are missing: the same object in the scene may be painted with different stroke parameters in different frames, causing strong visual variation and flicker. In addition, traditional video rendering uses a single, uniform brush model on each frame and lacks content-dependent stroke variation. For artistic painting of a video scene, strokes should be arranged according to the particular content of the scene, enabling multi-style painting and rendering.
Summary of the invention
The object of the present invention is to provide a painting rendering method based on video layering that is suited to stylized painting and that, after an accurate layering is obtained, propagates the brush parameters within the different layers and arranges the corresponding strokes so as to generate a stylized painted video with better inter-frame continuity.
To achieve the above object, the invention adopts the following technical scheme:
1) choose a key frame from the input video; the user interactively draws scribbles on different regions of the key frame, specifying the number of layers in the scene and a seed region for each layer;
2) using the scribble-based layering method, propagate the seed regions of the key frame to the remaining frames in turn via optical flow; analyze the reliability of the propagated seed regions with a Gaussian mixture model, keeping only the high-reliability regions as that frame's seed regions for layering;
3) according to the seed regions obtained on each frame, layer every frame with a graph-cut optimization, obtaining layer regions that are consistent across frames, and then arrange brush strokes on the foreground layers;
4) after the strokes are placed on the key frame's foreground layers, transfer them between corresponding layers of adjacent frames via a thin-plate-spline transform, generating the rendered foreground of the whole sequence;
5) stitch the background layers into a panorama via a transform, arrange and paint strokes on the panorama, and then map the painted result back to each frame's background layer through the inverse transform;
6) on each frame, merge the painted background and foreground layers in order to obtain the stylized painting rendering of the whole video.
Its concrete steps are as follows:
Step 1: from the given input video, select as key frame the frame that contains the most of the scene's colors and objects; if the sequence is too long, decompose it into several segments and choose a key frame for each segment;
Step 2: on the key frame, specify the seed regions of the layers by scribbling: according to the color and motion information of the objects, scribbles are drawn on the corresponding regions of the key frame, with the gray value of each scribble encoding its layer index, thereby obtaining the seed regions for layering the key frame; a scribble of gray value c marks its covered area as the seed region of layer c/40, and for each scribble a Gaussian mixture model estimates the color distribution of its region:
$$\Pr(c \mid I) = \sum_{j=1}^{M} p(c \mid j)\, P(j)$$
where P(j) is the weight of each mixture component, taken as 1/3, and p(c|j) is the probability under component j of the Gaussian mixture model:
$$p(c \mid j) = \frac{1}{2\pi\,\lvert \Sigma_j \rvert^{1/2}} \exp\!\Bigl(-\tfrac{1}{2}\,(\xi - \mu_j)^{T}\, \Sigma_j^{-1}\, (\xi - \mu_j)\Bigr)$$
here ξ is the color value over the three RGB channels, μ_j the mean color of the scribble, Σ_j the covariance matrix, and π the circular constant;
Step 3: compute the optical flow field between adjacent frames with a dual-based method: each pixel p_i on each frame gets a corresponding flow vector v_i; for noisy video, smooth the flow field with a Gaussian filter after it is computed, obtaining a more stable field;
Step 4: propagate the scribbles through the sequence: for each scribble, compute a set of windows {W_i} covering its boundary such that adjacent windows overlap; for each window W_i compute the average flow vector $\bar{v}_i$ inside it, used as the motion vector at that point, so that the scribble point $p_i$ is propagated to $p_i + \bar{v}_i$ on the next frame;
Step 5: compute the confidence of the seed regions during propagation: for each pixel i of a scribble, its confidence is defined as the probability of its RGB color under the corresponding scribble's distribution on the previous frame, i.e. Pr(i); if Pr(i) < 0.2, the pixel is considered unreliable and no longer suited as a seed of that layer; the scribble is corrected using the confidences by optimizing the following energy function:
[formula 1]
$$E(l) = \sum_i R_i(l_i) + \lambda \sum_{\langle p,q \rangle \in N} V_{\langle p,q \rangle}(l_p, l_q)$$
where λ is a weight factor controlling the size of the trusted region, set to 0.3; R_i(l_i) is the color probability defined by the Gaussian mixture model:
$$R_i(l_i) = -\ln\bigl(\Pr(C_i \mid l_i)\bigr)$$
$$\Pr(C_i \mid l_i) = \frac{1}{2\pi\,\lvert \Sigma_j \rvert^{1/2}} \exp\!\Bigl(-\tfrac{1}{2}\,(\xi - \mu_j)^{T}\, \Sigma_j^{-1}\, (\xi - \mu_j)\Bigr)$$
V_{⟨p,q⟩} defines the smoothness of neighboring pixels, so that the corrected scribble stays compact and relatively whole:
$$V_{\langle p,q \rangle}(l_p, l_q) = \frac{1}{1 + \lVert C_p - C_q \rVert}\, \lvert l_p - l_q \rvert$$
where C is a pixel's color and l_p the layer index of pixel p; formula [1] is optimized efficiently with the graph-cut algorithm; scribbles whose confidence falls below 30% are split in two, keeping only the high-confidence part as the seed region of that layer, thereby obtaining seed regions that satisfy the layering requirements;
Step 6: layer each frame according to its scribbled seed regions: the layering partitions each frame into mutually disjoint planar regions, each having color and motion close to those of a seed region; color proximity is described by the Gaussian-mixture color probability of formula [1], and motion similarity by the color difference of the flow-displaced corresponding pixels, with the motion difference defined as:
$$M_i(l_i) = \arctan\!\bigl(\lVert I_t(i) - I_{t+1}(i') \rVert^2 - \tau\bigr) + \frac{\pi}{2}$$
where τ is a constant, set to 60; to increase the consistency of the layering between adjacent frames, a temporal-consistency energy $T_i(l_i)$ is also defined (its formula appears only as an image in the source);
Considering all the factors above, the layering of each frame is obtained by optimizing the following energy function:
[formula 2]
$$E(l) = \sum_i \bigl(R_i(l_i) + M_i(l_i) + T_i(l_i)\bigr) + \lambda \sum_{\langle p,q \rangle \in N} V_{\langle p,q \rangle}(l_p, l_q)$$
where the weight factor λ is 0.3; the energy measures the cost of assigning each pixel to a given layer, and minimizing this energy function yields the layered video, each frame being expressed as a combination of different layer regions;
Step 7: for the foreground layers of the video, first arrange brush strokes on each foreground layer of the key frame with an anisotropic brush model, generating the stylized painted foreground layers;
Step 8: so that strokes are placed as consistently as possible across the layers of all frames, propagate the strokes from the key frame to the remaining frames frame by frame; for a propagation as smooth as possible, the strokes are transferred by the following transform, defined with thin-plate-spline functions:
[formula 3]
$$T(x,y) = \bigl(f_1(x,y),\, f_2(x,y)\bigr)$$
$$f(x,y) = c_0 + c_1 x + c_2 y + \sum_{i=1}^{n} w_i\, \Psi\bigl(\lVert (x,y) - (x_i, y_i) \rVert\bigr)$$
where $\Psi(r) = r^2 \log r^2$ is the kernel function; the thin-plate-spline coefficients are obtained by solving the following linear system:
$$\begin{bmatrix} K & P \\ P^{T} & 0 \end{bmatrix} \begin{bmatrix} w \\ c \end{bmatrix} = \begin{bmatrix} p^{k+1} \\ 0 \end{bmatrix}$$
where $K_{ij} = \Psi(\lVert (x_i,y_i) - (x_j,y_j) \rVert)$, row i of P is $(1, x_i, y_i)$, and $p^{k+1}$ are the corresponding feature points on the next frame; propagating the strokes of the foreground layers generates the stylized painting of all foregrounds of the sequence;
Step 9: stitch all background layers of the sequence into one panorama under a common coordinate system: to obtain an accurate panorama reconstruction, the corresponding feature points between the frames of the sequence must be computed within each layer; let the feature points on layer l of frame k be $p_l^k$ and their correspondences on frame k+1 be $p_l^{k+1}$; then the optimal transform $H_k$ minimizes the post-transform feature-point error, giving an accurate panorama reconstruction, and is found by the following optimization:
$$H_k = \arg\min_{T} \sum_{l} \bigl\lVert p_l^k - T \cdot p_l^{k+1} \bigr\rVert^2$$
after the transform of each frame's background layer is solved, all background layers can be stitched under one coordinate system to generate the panorama;
Step 10: on the stitched background panorama, arrange strokes with the anisotropic brush model to generate the stylized painted background; then map the corresponding part of the painted panorama back to each frame through the inverse of the stitching transform, obtaining a background with painting style on every frame;
Step 11: for each frame of the sequence, fuse the painted layers in back-to-front order to generate the final stylized video: layer r of frame i is blended with a fusion coefficient computed from the area ratio of each layer (the blending formula appears only as an image in the source).
Addressing the problems of the prior art, the invention first proposes a scribble-based video layering method that starts from the motion model of the video; it then proposes a method for arranging brush strokes per layer; and finally it proposes a fusion method between the painted layers, so that a more continuous stylized painting of the sequence can be generated. The method first propagates the seed regions specified by scribbles on the key frame to every frame according to the motion contained in the scene. It then computes the reliability of the propagated scribble seeds on each next frame from the color distribution of the previous frame's seeds, obtaining more accurate seed regions. Each frame is layered with the graph-cut optimization algorithm (Boykov Y, Veksler O, Zabih R (2001) Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence 2001, pp 509-522), yielding a layered representation of the sequence: the video is expressed as one background-layer panorama plus several foreground layers. Finally, strokes are arranged and painted on the background panorama and on the foreground layers, and the painted layers are blended by image fusion to generate the final video with a painting style.
Description of drawings
Fig. 1 is the flowchart of the layering-based painting rendering algorithm of the invention;
Fig. 2 shows how scribbles are propagated across frames;
Fig. 3 shows the computation of scribble confidence and the layering result obtained when high-confidence regions are used as seed regions on each frame;
Fig. 4 shows a scribble-based video layering result;
Fig. 5 shows several frames of a stylized painted video created with the invention.
Embodiment
The invention is described in detail below with reference to the drawings.
Fig. 1 is the flowchart of the invention. As shown in Fig. 1, the invention is divided into the following steps:
Step 1: from the given input video, select one frame of the sequence as key frame. This frame should contain as many of the scene's colors and objects as possible. If the sequence is too long, it is decomposed into several segments and a key frame is chosen for each segment.
Step 2: on the key frame, specify the seed regions of the layers by scribbling. According to the color, motion and other information of the objects, scribbles are drawn on the corresponding regions, with the gray value of each scribble encoding its layer index, thereby obtaining the seed regions for layering the key frame; a scribble of gray value c marks its covered area as the seed region of layer c/40. For each scribble, a Gaussian mixture model estimates the color distribution of its region:
$$\Pr(c \mid I) = \sum_{j=1}^{M} p(c \mid j)\, P(j)$$
where P(j) is the weight of each mixture component, taken as 1/3, and p(c|j) is the probability under component j of the Gaussian mixture model:
$$p(c \mid j) = \frac{1}{2\pi\,\lvert \Sigma_j \rvert^{1/2}} \exp\!\Bigl(-\tfrac{1}{2}\,(\xi - \mu_j)^{T}\, \Sigma_j^{-1}\, (\xi - \mu_j)\Bigr)$$
here ξ is the color value over the three RGB channels, μ_j the mean color of the scribble, Σ_j the covariance matrix, and π the circular constant.
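For illustration, a minimal Python sketch of this per-scribble color model follows (not part of the patent); it assumes scikit-learn's GaussianMixture as the estimator, and the function and variable names are hypothetical. Note one deviation: the patent fixes the component weights P(j) at 1/3, whereas GaussianMixture learns them during fitting.

```python
# Sketch of the per-scribble Gaussian mixture color model (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_scribble_model(rgb_pixels):
    """Fit a 3-component GMM (M = 3) to the RGB pixels covered by one scribble."""
    gmm = GaussianMixture(n_components=3, covariance_type="full")
    gmm.fit(rgb_pixels.reshape(-1, 3).astype(np.float64))
    return gmm

def color_probability(gmm, image):
    """Evaluate Pr(c | I) for every pixel of an H x W x 3 image."""
    flat = image.reshape(-1, 3).astype(np.float64)
    log_prob = gmm.score_samples(flat)  # per-pixel log Pr(c | I)
    return np.exp(log_prob).reshape(image.shape[:2])
```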
Step 3: compute the optical flow field between adjacent frames with the dual-based method (Zach C, Pock T, Bischof H (2007) A duality based approach for realtime TV-L1 optical flow. Proceedings of the 29th DAGM Symposium on Pattern Recognition, 2007). Each pixel p_i on each frame gets a corresponding flow vector v_i. For noisy video, the flow field is smoothed with a Gaussian filter after it is computed, yielding a more stable field.
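A minimal sketch of this step follows (not part of the patent), assuming the TV-L1 implementation shipped with opencv-contrib-python; the channel-wise smoothing mirrors the Gaussian fairing described above.

```python
# Sketch of step 3: TV-L1 optical flow plus Gaussian fairing (illustrative only).
import cv2

def smoothed_flow(prev_gray, next_gray, sigma=2.0):
    """Dense flow between two grayscale frames, smoothed channel-wise."""
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()  # assumes opencv-contrib
    flow = tvl1.calc(prev_gray, next_gray, None)     # H x W x 2 array (vx, vy)
    flow[..., 0] = cv2.GaussianBlur(flow[..., 0], (0, 0), sigma)
    flow[..., 1] = cv2.GaussianBlur(flow[..., 1], (0, 0), sigma)
    return flow
```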
Step 4: propagate the scribbles through the sequence. For each scribble, compute a set of windows {W_i} covering its boundary such that adjacent windows overlap (as shown in Fig. 2). For each window W_i compute the average flow vector $\bar{v}_i$ inside it, used as the motion vector at that point; the scribble point $p_i$ is then propagated to $p_i + \bar{v}_i$ on the next frame.
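The advection itself reduces to averaging the flow in each window and shifting the covered points, as in the following sketch (not part of the patent; the window half-size is an illustrative choice).

```python
# Sketch of step 4: propagate scribble points by window-averaged flow.
import numpy as np

def propagate_scribble(points, flow, half=7):
    """points: N x 2 integer (x, y) scribble coordinates; flow: H x W x 2."""
    h, w = flow.shape[:2]
    moved = []
    for x, y in points:
        x0, x1 = max(x - half, 0), min(x + half + 1, w)
        y0, y1 = max(y - half, 0), min(y + half + 1, h)
        v = flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0)  # average flow in W_i
        moved.append((x + v[0], y + v[1]))
    return np.array(moved)
```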
Step 5: the confidence level of calculating seed region in the communication process.Because optical flow computation is sometimes unstable, delineating as seed region of simply propagation being come can produce wrong layering result, as shown in Figure 3.Therefore,, should assess, keep zone with a high credibility as the seed region layering to its confidence level on this frame to after delineating propagation.
Delineate for each In each pixel i, its confidence level is defined as the distribution probability that the RGB color is delineated with respect to preceding a burst of correspondence, i.e. Pr (i).If Pr (i), thinks that this pixel confidence level is lower less than 0.2, no longer be fit to do the seed of this layer.Utilize confidence level to revise, can operate by optimizing following energy function to delineating:
[formula 1]
$$E(l) = \sum_i R_i(l_i) + \lambda \sum_{\langle p,q \rangle \in N} V_{\langle p,q \rangle}(l_p, l_q)$$
where λ is a weight factor controlling the size of the trusted region, set to 0.3; R_i(l_i) is the color probability defined by the Gaussian mixture model:
$$R_i(l_i) = -\ln\bigl(\Pr(C_i \mid l_i)\bigr)$$
$$\Pr(C_i \mid l_i) = \frac{1}{2\pi\,\lvert \Sigma_j \rvert^{1/2}} \exp\!\Bigl(-\tfrac{1}{2}\,(\xi - \mu_j)^{T}\, \Sigma_j^{-1}\, (\xi - \mu_j)\Bigr)$$
V_{⟨p,q⟩} defines the smoothness of neighboring pixels, so that the corrected scribble stays compact and relatively whole:
$$V_{\langle p,q \rangle}(l_p, l_q) = \frac{1}{1 + \lVert C_p - C_q \rVert}\, \lvert l_p - l_q \rvert$$
where C is a pixel's color and l_p the layer index of pixel p. Formula [1] is optimized efficiently with the graph-cut algorithm (Boykov Y, Veksler O, Zabih R (2001) Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence 2001, pp 509-522). Scribbles whose confidence falls below 30% are split in two, and the high-confidence part (above 70%) is kept as the seed region of that layer, thereby obtaining seed regions that satisfy the layering requirements (as shown in Fig. 3).
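As an illustration of the graph-cut step, the following sketch (not part of the patent) casts the binary keep/discard decision on a propagated scribble as a min-cut, assuming the PyMaxflow package as the solver; the unary costs follow R_i = -ln Pr and λ = 0.3 as above, while the full multi-label layering of step 6 would use alpha-expansion instead.

```python
# Sketch of the scribble-repair cut of formula [1] (illustrative only).
import numpy as np
import maxflow  # assumes the PyMaxflow package

def repair_scribble(prob_keep, lam=0.3):
    """prob_keep: H x W array of Pr(i) over the stroke's pixels."""
    eps = 1e-6
    r_keep = -np.log(prob_keep + eps)        # cost of labeling a pixel "seed"
    r_drop = -np.log(1.0 - prob_keep + eps)  # cost of discarding it
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(prob_keep.shape)
    g.add_grid_edges(nodes, lam)             # uniform smoothness, for brevity
    # Source/sink caps follow PyMaxflow's convention: a node cut to the sink
    # segment pays its source capacity, so "keep" is read from the sink side.
    g.add_grid_tedges(nodes, r_keep, r_drop)
    g.maxflow()
    return g.get_grid_segments(nodes)        # True where the pixel is kept
```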
Step 6: layer each frame according to its scribbled seed regions. The layering partitions each frame into mutually disjoint planar regions, each having color, motion and other properties close to those of a seed region. Color proximity is described by the Gaussian-mixture color probability of formula [1]; motion similarity is described by the color difference of the flow-displaced corresponding pixels, with the motion difference defined as:
$$M_i(l_i) = \arctan\!\bigl(\lVert I_t(i) - I_{t+1}(i') \rVert^2 - \tau\bigr) + \frac{\pi}{2}$$
where τ is a constant, set to 60 in this method. To increase the consistency of the layering between adjacent frames, a temporal-consistency energy $T_i(l_i)$ is defined (its formula appears only as an image in the source).
Considering all the factors above, the layering of each frame is obtained by optimizing the following energy function:
[formula 2]
$$E(l) = \sum_i \bigl(R_i(l_i) + M_i(l_i) + T_i(l_i)\bigr) + \lambda \sum_{\langle p,q \rangle \in N} V_{\langle p,q \rangle}(l_p, l_q)$$
where the weight factor λ is 0.3. This function measures the cost of assigning each pixel to a given layer; minimizing the energy yields the layered video, each frame being expressed as a combination of different layer regions. These layers usually comprise one background layer and several foreground object layers: the motion of the background layer, produced mainly by camera translation, rotation and the like, follows a relatively simple motion model, while the motion of the foreground object layers is more complex. Fig. 4 shows the layering result for one video sequence, where regions of different gray values represent different layers.
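The motion term is easy to evaluate once the flow is known; the sketch below (not part of the patent) computes $M_i$ densely for one frame pair, with τ = 60 as above.

```python
# Sketch of the motion term M_i: color difference of flow-corresponding
# pixels, squashed by arctan around the threshold tau (illustrative only).
import numpy as np

def motion_term(frame_t, frame_t1, flow, tau=60.0):
    """frame_t, frame_t1: H x W x 3 images; flow: H x W x 2 forward flow."""
    h, w = frame_t.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xs2 = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    diff = np.linalg.norm(
        frame_t.astype(float) - frame_t1[ys2, xs2].astype(float), axis=-1)
    return np.arctan(diff ** 2 - tau) + np.pi / 2
```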
Step 7: for the foreground layers of the video, first arrange brush strokes on each foreground layer of the key frame with the anisotropic brush model (Huang H, Fu T N, Li C F (2010) Anisotropic brush for painterly rendering. In: Computer Graphics International 2010), generating the stylized painted foreground layers.
Step 8: so that strokes are placed as consistently as possible across the layers of all frames, the strokes on the key frame are propagated to the remaining frames frame by frame. For a propagation as smooth as possible, the strokes are transferred by the following transform, defined with thin-plate-spline functions:
[formula 3]
$$T(x,y) = \bigl(f_1(x,y),\, f_2(x,y)\bigr)$$
$$f(x,y) = c_0 + c_1 x + c_2 y + \sum_{i=1}^{n} w_i\, \Psi\bigl(\lVert (x,y) - (x_i, y_i) \rVert\bigr)$$
where $\Psi(r) = r^2 \log r^2$ is the kernel function; the thin-plate-spline coefficients are obtained by solving the following linear system:
$$\begin{bmatrix} K & P \\ P^{T} & 0 \end{bmatrix} \begin{bmatrix} w \\ c \end{bmatrix} = \begin{bmatrix} p^{k+1} \\ 0 \end{bmatrix}$$
where $K_{ij} = \Psi(\lVert (x_i,y_i) - (x_j,y_j) \rVert)$, row i of P is $(1, x_i, y_i)$, x and y are the geometric coordinates of a pixel, and $p^{k+1}$ are the corresponding feature points on the next frame. Propagating the strokes of the foreground layers generates the stylized painting of all foregrounds of the sequence.
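Solving formula [3] is a small dense linear system; the following sketch (not part of the patent) assembles and solves it with NumPy, one right-hand side per output coordinate f_1, f_2.

```python
# Sketch of the thin-plate-spline solve of formula [3] (illustrative only).
import numpy as np

def tps_coefficients(src, dst):
    """src: n x 2 anchor points (x_i, y_i) on the key frame;
    dst: n x 2 corresponding points p^{k+1} on the next frame."""
    n = len(src)
    r2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.zeros_like(r2, dtype=float)
    mask = r2 > 0
    K[mask] = r2[mask] * np.log(r2[mask])        # Psi(r) = r^2 log r^2
    P = np.hstack([np.ones((n, 1)), src])        # rows (1, x_i, y_i)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])     # [p^{k+1}; 0]
    sol = np.linalg.solve(A, rhs)                # one column per coordinate
    return sol[:n], sol[n:]                      # w (n x 2), c (3 x 2)
```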
Step 9: stitch all background layers of the sequence into one panorama under a common coordinate system. To obtain an accurate panorama reconstruction, the corresponding feature points between the frames of the sequence must be computed within each layer. Let the feature points on layer l of frame k be $p_l^k$ and their correspondences on frame k+1 be $p_l^{k+1}$. The optimal transform $H_k$ then minimizes the post-transform feature-point error, giving an accurate panorama reconstruction; $H_k$ is found by the following optimization:
$$H_k = \arg\min_{T} \sum_{l} \bigl\lVert p_l^k - T \cdot p_l^{k+1} \bigr\rVert^2$$
After the transform is solved for each frame's background layer, all background layers can be stitched under one coordinate system to generate the panorama.
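In practice the minimization over corresponding points is a standard homography fit; the sketch below (not part of the patent) assumes OpenCV's findHomography with method=0, which performs the plain least-squares fit over all point pairs.

```python
# Sketch of step 9: least-squares transform between background feature points.
import cv2
import numpy as np

def stitch_transform(pts_k, pts_k1):
    """pts_k, pts_k1: N x 2 corresponding background points on frames k, k+1."""
    H, _ = cv2.findHomography(pts_k1.astype(np.float32),
                              pts_k.astype(np.float32), method=0)
    return H  # maps frame k+1 background coordinates into frame k's system

# Chaining the per-frame transforms maps every background layer into one
# common coordinate system, where the layers are composited into the panorama.
```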
Step 10: on the stitched background panorama, arrange strokes with the anisotropic brush model (Huang H, Fu T N, Li C F (2010) Anisotropic brush for painterly rendering. In: Computer Graphics International 2010) to generate the stylized painted background. Then map the corresponding part of the painted panorama back to each frame through the inverse $H_k^{-1}$ of the stitching transform, obtaining a background with painting style on every frame.
Step 11: for each frame of the sequence, fuse the painted layers in back-to-front order to generate the final stylized video: layer r of frame i is blended with a fusion coefficient computed from the area ratio of each layer (the blending formula appears only as an image in the source).
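A possible reading of this compositing step is sketched below (not part of the patent); since the exact blending formula survives only as an image, the mask-guided blend and the area-ratio coefficients here are illustrative reconstructions.

```python
# Sketch of step 11: back-to-front fusion of the painted layers (illustrative).
import numpy as np

def fuse_layers(layers, masks, alphas):
    """layers: list of H x W x 3 painted layers, back to front;
    masks: matching H x W boolean supports;
    alphas: fusion coefficients, e.g. proportional to each layer's area."""
    out = layers[0].astype(float)                 # background layer
    for img, m, a in zip(layers[1:], masks[1:], alphas[1:]):
        blend = a * img.astype(float) + (1.0 - a) * out
        out = np.where(m[..., None], blend, out)  # composite inside the layer
    return np.clip(out, 0, 255).astype(np.uint8)
```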
Fig. 5 shows several frames of the stylized painted video, demonstrating the stylization achieved with video layering. As can be seen, the invention can generate videos with a painting style of distinctive artistic effect.
As described above, the invention proposes a painting rendering method based on video layering: by means of video layering from computer vision it obtains the layers of a video sequence and then paints by arranging brush strokes within each layer, effectively reducing visual flicker, improving inter-frame continuity, and generating a stylized painted video with richer artistic effect.
Although the invention has been illustrated and described with reference to the drawings, those skilled in the art should appreciate that various other changes, additions and deletions may be made in it or to it without departing from the spirit and scope of the invention.

Claims (2)

1. A painting rendering method based on video layering, characterized by comprising the following steps:
1) choose a key frame from the input video; the user interactively draws scribbles on different regions of the key frame, specifying the number of layers in the scene and a seed region for each layer;
2) using the scribble-based layering method, propagate the seed regions of the key frame to the remaining frames in turn via optical flow; analyze the reliability of the propagated seed regions with a Gaussian mixture model, keeping only the high-reliability regions as that frame's seed regions for layering;
3) according to the seed regions obtained on each frame, layer every frame with a graph-cut optimization, obtaining layer regions that are consistent across frames, and then arrange brush strokes on the foreground layers;
4) after the strokes are placed on the key frame's foreground layers, transfer them between corresponding layers of adjacent frames via a thin-plate-spline transform, generating the rendered foreground of the whole sequence;
5) stitch the background layers into a panorama via a transform, arrange and paint strokes on the panorama, and then map the painted result back to each frame's background layer through the inverse transform;
6) on each frame, merge the painted background and foreground layers in order to obtain the stylized painting rendering of the whole video.
2. The painting rendering method based on video layering of claim 1, whose concrete steps are as follows:
Step 1: from the given input video, select as key frame the frame that contains the most of the scene's colors and objects; if the sequence is too long, decompose it into several segments and choose a key frame for each segment;
Step 2: on the key frame, specify the seed regions of the layers by scribbling: according to the color and motion information of the objects, scribbles are drawn on the corresponding regions of the key frame, with the gray value of each scribble encoding its layer index, thereby obtaining the seed regions for layering the key frame; a scribble of gray value c marks its covered area as the seed region of layer c/40, and for each scribble a Gaussian mixture model estimates the color distribution of its region:
$$\Pr(c \mid I) = \sum_{j=1}^{M} p(c \mid j)\, P(j)$$
where P(j) is the weight of each mixture component, taken as 1/3, and p(c|j) is the probability under component j of the Gaussian mixture model:
$$p(c \mid j) = \frac{1}{2\pi\,\lvert \Sigma_j \rvert^{1/2}} \exp\!\Bigl(-\tfrac{1}{2}\,(\xi - \mu_j)^{T}\, \Sigma_j^{-1}\, (\xi - \mu_j)\Bigr)$$
here ξ is the color value over the three RGB channels, μ_j the mean color of the scribble, Σ_j the covariance matrix, and π the circular constant;
Step 3: compute the optical flow field between adjacent frames with a dual-based method: each pixel p_i on each frame gets a corresponding flow vector v_i; for noisy video, smooth the flow field with a Gaussian filter after it is computed, obtaining a more stable field;
Step 4: propagate the scribbles through the sequence: for each scribble, compute a set of windows {W_i} covering its boundary such that adjacent windows overlap; for each window W_i compute the average flow vector $\bar{v}_i$ inside it, used as the motion vector at that point, so that the scribble point $p_i$ is propagated to $p_i + \bar{v}_i$ on the next frame;
Step 5: compute the confidence of the seed regions during propagation: for each pixel i of a scribble, its confidence is defined as the probability of its RGB color under the corresponding scribble's distribution on the previous frame, i.e. Pr(i); if Pr(i) < 0.2, the pixel is considered unreliable and no longer suited as a seed of that layer; the scribble is corrected using the confidences by optimizing the following energy function:
[formula 1]
$$E(l) = \sum_i R_i(l_i) + \lambda \sum_{\langle p,q \rangle \in N} V_{\langle p,q \rangle}(l_p, l_q)$$
where λ is a weight factor controlling the size of the trusted region, set to 0.3; R_i(l_i) is the color probability defined by the Gaussian mixture model:
$$R_i(l_i) = -\ln\bigl(\Pr(C_i \mid l_i)\bigr)$$
$$\Pr(C_i \mid l_i) = \frac{1}{2\pi\,\lvert \Sigma_j \rvert^{1/2}} \exp\!\Bigl(-\tfrac{1}{2}\,(\xi - \mu_j)^{T}\, \Sigma_j^{-1}\, (\xi - \mu_j)\Bigr)$$
V_{⟨p,q⟩} defines the smoothness of neighboring pixels, so that the corrected scribble stays compact and relatively whole:
$$V_{\langle p,q \rangle}(l_p, l_q) = \frac{1}{1 + \lVert C_p - C_q \rVert}\, \lvert l_p - l_q \rvert$$
where C is a pixel's color and l_p the layer index of pixel p; formula [1] is optimized efficiently with the graph-cut algorithm; scribbles whose confidence falls below 30% are split in two, keeping only the high-confidence part as the seed region of that layer, thereby obtaining seed regions that satisfy the layering requirements;
Step 6: layer each frame according to its scribbled seed regions: the layering partitions each frame into mutually disjoint planar regions, each having color and motion close to those of a seed region; color proximity is described by the Gaussian-mixture color probability of formula [1], and motion similarity by the color difference of the flow-displaced corresponding pixels, with the motion difference defined as:
$$M_i(l_i) = \arctan\!\bigl(\lVert I_t(i) - I_{t+1}(i') \rVert^2 - \tau\bigr) + \frac{\pi}{2}$$
where τ is a constant, set to 60; to increase the consistency of the layering between adjacent frames, a temporal-consistency energy $T_i(l_i)$ is also defined (its formula appears only as an image in the source);
Considering all the factors above, the layering of each frame is obtained by optimizing the following energy function:
[formula 2]
$$E(l) = \sum_i \bigl(R_i(l_i) + M_i(l_i) + T_i(l_i)\bigr) + \lambda \sum_{\langle p,q \rangle \in N} V_{\langle p,q \rangle}(l_p, l_q)$$
where the weight factor λ is 0.3; the energy measures the cost of assigning each pixel to a given layer, and minimizing this energy function yields the layered video, each frame being expressed as a combination of different layer regions;
Step 7: for the foreground layers of the video, first arrange brush strokes on each foreground layer of the key frame with an anisotropic brush model, generating the stylized painted foreground layers;
Step 8: so that strokes are placed as consistently as possible across the layers of all frames, propagate the strokes from the key frame to the remaining frames frame by frame; for a propagation as smooth as possible, the strokes are transferred by the following transform, defined with thin-plate-spline functions:
[formula 3]
$$T(x,y) = \bigl(f_1(x,y),\, f_2(x,y)\bigr)$$
$$f(x,y) = c_0 + c_1 x + c_2 y + \sum_{i=1}^{n} w_i\, \Psi\bigl(\lVert (x,y) - (x_i, y_i) \rVert\bigr)$$
where $\Psi(r) = r^2 \log r^2$ is the kernel function; the thin-plate-spline coefficients are obtained by solving the following linear system:
$$\begin{bmatrix} K & P \\ P^{T} & 0 \end{bmatrix} \begin{bmatrix} w \\ c \end{bmatrix} = \begin{bmatrix} p^{k+1} \\ 0 \end{bmatrix}$$
where $K_{ij} = \Psi(\lVert (x_i,y_i) - (x_j,y_j) \rVert)$, row i of P is $(1, x_i, y_i)$, and $p^{k+1}$ are the corresponding feature points on the next frame; propagating the strokes of the foreground layers generates the stylized painting of all foregrounds of the sequence;
Step 9: stitch all background layers of the sequence into one panorama under a common coordinate system: to obtain an accurate panorama reconstruction, the corresponding feature points between the frames of the sequence must be computed within each layer; let the feature points on layer l of frame k be $p_l^k$ and their correspondences on frame k+1 be $p_l^{k+1}$; then the optimal transform $H_k$ minimizes the post-transform feature-point error, giving an accurate panorama reconstruction, and is found by the following optimization:
$$H_k = \arg\min_{T} \sum_{l} \bigl\lVert p_l^k - T \cdot p_l^{k+1} \bigr\rVert^2$$
after the transform of each frame's background layer is solved, all background layers can be stitched under one coordinate system to generate the panorama;
Step 10: on the stitched background panorama, arrange strokes with the anisotropic brush model to generate the stylized painted background; then map the corresponding part of the painted panorama back to each frame through the inverse of the stitching transform, obtaining a background with painting style on every frame;
Step 11: for each frame of the sequence, fuse the painted layers in back-to-front order to generate the final stylized video: layer r of frame i is blended with a fusion coefficient computed from the area ratio of each layer (the blending formula appears only as an image in the source).
CN2010102500634A 2010-08-10 2010-08-10 Drawing rendering method based on video sub-layer Expired - Fee Related CN101930614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102500634A CN101930614B (en) 2010-08-10 2010-08-10 Drawing rendering method based on video sub-layer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010102500634A CN101930614B (en) 2010-08-10 2010-08-10 Drawing rendering method based on video sub-layer

Publications (2)

Publication Number Publication Date
CN101930614A true CN101930614A (en) 2010-12-29
CN101930614B CN101930614B (en) 2012-11-28

Family

ID=43369771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102500634A Expired - Fee Related CN101930614B (en) 2010-08-10 2010-08-10 Drawing rendering method based on video sub-layer

Country Status (1)

Country Link
CN (1) CN101930614B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050285875A1 (en) * 2004-06-28 2005-12-29 Microsoft Corporation Interactive viewpoint video system and process
CN101573732A (en) * 2006-12-29 2009-11-04 英特尔公司 Using supplementary information of bounding boxes in multi-layer video composition
US20100169783A1 (en) * 2008-12-30 2010-07-01 Apple, Inc. Framework for Slideshow Object

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yuri Boykov et al., "Fast Approximate Energy Minimization via Graph Cuts", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, Nov. 2001 (full text; cited against claims 1-2) *
Zhu Xiaoyan (朱小燕), "A Survey of Layering Techniques for Image Sequence Video", Journal of Luoyang University (洛阳大学学报), vol. 17, no. 4, Dec. 2002 (full text; cited against claims 1-2) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542593A (en) * 2011-09-30 2012-07-04 中山大学 Interactive video stylized rendering method based on video interpretation
WO2013106984A1 (en) * 2012-01-16 2013-07-25 Google Inc. Learning painting styles for painterly rendering
US9449253B2 (en) 2012-01-16 2016-09-20 Google Inc. Learning painting styles for painterly rendering
CN102609958A (en) * 2012-01-19 2012-07-25 北京三星通信技术研究有限公司 Method and device for extracting video objects
CN105913483A (en) * 2016-03-31 2016-08-31 百度在线网络技术(北京)有限公司 Method and device for generating three-dimensional crossing road model
CN106504306A (en) * 2016-09-14 2017-03-15 厦门幻世网络科技有限公司 A kind of animation fragment joining method, method for sending information and device
CN106504306B (en) * 2016-09-14 2019-09-24 厦门黑镜科技有限公司 A kind of animation segment joining method, method for sending information and device
CN110019865A (en) * 2017-09-08 2019-07-16 北京京东尚科信息技术有限公司 Mass picture processing method, device, electronic equipment and storage medium
WO2019047628A1 (en) * 2017-09-08 2019-03-14 北京京东尚科信息技术有限公司 Massive picture processing method, device, electronic apparatus and storage medium
CN110019865B (en) * 2017-09-08 2021-01-26 北京京东尚科信息技术有限公司 Mass image processing method and device, electronic equipment and storage medium
US11395010B2 (en) 2017-09-08 2022-07-19 Beijing Jingdong Shangke Information Technology Co., Ltd. Massive picture processing method converting decimal element in matrices into binary element
CN109191539A (en) * 2018-07-20 2019-01-11 广东数相智能科技有限公司 Oil painting generation method, device and computer readable storage medium based on image
CN111951345A (en) * 2020-08-10 2020-11-17 杭州趣维科技有限公司 GPU-based real-time image video oil painting stylization method
CN111951345B (en) * 2020-08-10 2024-03-26 杭州小影创新科技股份有限公司 GPU-based real-time image video oil painting stylization method
CN112055255A (en) * 2020-09-15 2020-12-08 深圳创维-Rgb电子有限公司 Shooting image quality optimization method and device, smart television and readable storage medium
CN112055255B (en) * 2020-09-15 2022-07-05 深圳创维-Rgb电子有限公司 Shooting image quality optimization method and device, smart television and readable storage medium
CN112492375A (en) * 2021-01-18 2021-03-12 新东方教育科技集团有限公司 Video processing method, storage medium, electronic device and video live broadcast system
CN115250374A (en) * 2022-07-08 2022-10-28 北京有竹居网络技术有限公司 Method, device and equipment for displaying panoramic image and storage medium

Also Published As

Publication number Publication date
CN101930614B (en) 2012-11-28

Similar Documents

Publication Publication Date Title
CN101930614B (en) Drawing rendering method based on video sub-layer
CN100448271C (en) Video editing method based on panorama sketch split joint
CN104715451B (en) A kind of image seamless fusion method unanimously optimized based on color and transparency
US20180025749A1 (en) Automatic generation of semantic-based cinemagraphs
Xiao et al. Joint affinity propagation for multiple view segmentation
CN100539698C (en) The video of interactive time-space unanimity is scratched drawing method in a kind of Digital Video Processing
CN105956995B (en) A kind of face appearance edit methods based on real-time video eigen decomposition
CN102831584B (en) Data-driven object image restoring system and method
CN103279961A (en) Video segmentation method based on depth recovery and motion estimation
CN108509917A (en) Video scene dividing method and device based on shot cluster correlation analysis
CN102005061A (en) Method for reusing cartoons based on layering/hole-filling
CN106204461A (en) Compound regularized image denoising method in conjunction with non local priori
Shi et al. Deep line art video colorization with a few references
CN111414860A (en) Real-time portrait tracking and segmenting method
Xie et al. Seamless video composition using optimized mean-value cloning
Zhao et al. Cartoon image processing: a survey
Yue et al. Semi-supervised monocular depth estimation based on semantic supervision
CN103049929A (en) Multi-camera dynamic scene 3D (three-dimensional) rebuilding method based on joint optimization
CN115100223A (en) High-resolution video virtual character keying method based on deep space-time learning
Ueno et al. Continuous and Gradual Style Changes of Graphic Designs with Generative Model
CN115272146B (en) Stylized image generation method, system, device and medium
Li et al. Exploiting multi-direction features in MRF-based image inpainting approaches
Ye et al. Hybrid scheme of image’s regional colorization using mask r-cnn and Poisson editing
Cao et al. Automatic motion-guided video stylization and personalization
Jin et al. Automatic and real-time green screen keying

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121128

Termination date: 20190810