CN102542593A - Interactive video stylized rendering method based on video interpretation - Google Patents


Info

Publication number
CN102542593A
CN102542593A CN201110302054XA
Authority
CN
China
Prior art keywords
region
video
writing
style
paintbrush
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110302054XA
Other languages
Chinese (zh)
Inventor
刘树郁
张新楠
江波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Sun Yat Sen University filed Critical National Sun Yat Sen University
Priority to CN201110302054XA priority Critical patent/CN102542593A/en
Publication of CN102542593A publication Critical patent/CN102542593A/en


Abstract

The invention relates to an interactive video stylized rendering method based on video interpretation, which uses an interactive video semantic segmentation module and a video stylization module. The segmentation method of the interactive video semantic segmentation module comprises the following steps: (1) interactive segmentation and automatic identification of key-frame images; (2) dense feature point matching between key frames; and (3) region-competition segmentation. The stylization method of the video stylization module comprises the following steps: (4) non-photorealistic rendering of the key frames based on semantic parsing; (5) a brush-stroke propagation method for the sequence frames; and (6) a damping brush system for jitter suppression. The interactive video stylized rendering method based on video interpretation disclosed by the invention has the advantages of a short production cycle, low cost, and suitability for batch production.

Description

An interactive video stylized rendering method based on video interpretation
Technical field
The present invention is an interactive video stylized rendering method based on video interpretation, and belongs to the technical field of interactive video stylized rendering based on video interpretation. 
Background technology
With the large-scale popularization of computers, digital cameras and DVs, people's demands for producing audio-visual entertainment keep rising, and the home digital entertainment field has flourished accordingly. More and more people try their hand as amateur "directors", keen to shoot and edit all kinds of ordinary realistic videos. In recent years, various stylized videos have gradually been accepted as a fashion element, especially in animation and online game production. For example, the hand-drawn oil-painting short The Old Man and the Sea and the ink-wash video Little Tadpole Looks for His Mother both attracted wide attention, and the former won a series of awards including the Academy Award for animated short film. Video stylized rendering requires not only professional skill but also large investments of manpower and money: traditional video stylization achieves rendering by drawing frame by frame. Although every frame of a work produced in this fashion can be drawn under manual control, continuous playback suffers from noticeable jitter because inter-frame consistency is lacking, and such methods have long production cycles and high costs, which is unfavorable for batch production. The above-mentioned oil-painting short The Old Man and the Sea, for example, lasts only 22 minutes, yet took nearly three years to produce. 
The content of the invention
In view of the above problems, the object of the present invention is to provide an interactive video stylized rendering method based on video interpretation that has a short production cycle and low cost and is beneficial to batch production. 
The technical scheme of the present invention is as follows: the interactive video stylized rendering method based on video interpretation comprises an interactive video semantic segmentation module and a video stylization module, and the segmentation method of the interactive video semantic segmentation module comprises the following steps: 
1) interactive segmentation and automatic identification of key-frame images; 
2) dense feature point matching between key frames; 
3) a region-competition segmentation algorithm. 
The stylization method of the video stylization module comprises the following steps: 
4) non-photorealistic rendering of key frames based on semantic parsing; 
5) a brush-stroke propagation method for sequence frames; 
6) a damping brush system for jitter suppression. 
The two modules are used in turn when stylizing a video: semantic segmentation is first carried out on the video with the interactive semantic segmentation module, and the video stylization module then performs stylized rendering on the segmented video. The interactive segmentation and automatic identification method for key-frame images in step 1) above is as follows: 
The segmented semantic regions are divided into 12 classes according to their material properties: sky/cloud, mountain/land, rock/building, leaf/grove, hair/fur, flower/fruit, skin/leather, trunk/branch, abstract background, wood/plastic, water and clothes; 
In practice, three principal features (texture, color distribution and position information) are used for training and recognition. Given a region image X, the conditional probability of its class labelling c is defined as: 

log P(c | X, θ) = Σ_i [Ψ_i(c_i, X; θ_Ψ) + π(c_i, X; θ_π) + λ(c_i, X; θ_λ)] - log Z(θ, X)    (*)

where the terms are, in order, the texture potential function, the color potential function, the position potential function and the normalization term. 
The texture potential function is defined as Ψ_i(c_i, X; θ_Ψ) = log P(c_i | X, i), where P(c_i | X, i) is a normalized distribution given by a boosted classifier; 
The color potential function is defined as π(c_i, X; θ_π) = log Σ_k θ_π(c_i, k) P(k | x_i). The color model is represented with Gaussian Mixture Models (GMMs) in the CIELab color space; for a pixel color x_i in the given image, its conditional probability is: 

P(x_i | k) = N(x_i; μ_k, Σ_k)

where μ_k and Σ_k are the mean and covariance of the k-th color cluster; 
The position potential function is defined as λ(c_i, X; θ_λ) = log θ_λ(c_i, i). Compared with the two potential functions above, the position potential is relatively weak; under this definition, the class label of an image pixel is related only to its absolute position in the image; 
Training is carried out over the 12 material classes. The probability of each class is computed for every pixel of an image region with formula (*); the per-pixel classes over the whole region are then tallied, and the class of each region is decided by voting. During stylized rendering, the choice of paintbrush is determined by the material identified for the object region, laying the foundation for automatic rendering. 
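The per-region voting step described above can be sketched as follows. The `pixel_probs` input stands in for the per-pixel class probabilities that the trained classifier would supply (the source uses a boosted classifier); the function name and data layout are illustrative assumptions, while the class list mirrors the 12 material classes of the invention.

```python
# Sketch of per-region class voting: each pixel's probability vector is
# reduced to its argmax class, and the region takes the majority class.
from collections import Counter

MATERIAL_CLASSES = [
    "sky/cloud", "mountain/land", "rock/building", "leaf/grove",
    "hair/fur", "flower/fruit", "skin/leather", "trunk/branch",
    "abstract background", "wood/plastic", "water", "clothes",
]

def vote_region_class(pixel_probs):
    """pixel_probs: one 12-element class probability vector per pixel."""
    votes = Counter(max(range(len(p)), key=p.__getitem__) for p in pixel_probs)
    winner, _ = votes.most_common(1)[0]
    return MATERIAL_CLASSES[winner]
```

Because the decision is a vote over all pixels of the region, a minority of misclassified pixels does not change the region's material label.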
The dense feature point matching method between key frames in step 2) above is as follows: 
After the semantic information on the key frames has been obtained, sketch (line drawing) features, texture features and color features are combined into mixed image template features, providing a rich feature set and representation for the image matching problem; 
11) The sketch feature is represented by Gabor bases: 

F_sk(I_i) = ||<I_i, G_cos,x,θ>||² + ||<I_i, G_sin,x,θ>||²

where G_sin,x,θ and G_cos,x,θ denote the sine and cosine Gabor bases with orientation θ at position x. The feature probability distribution is expressed as: 

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^sk)) exp{λ_i^sk h_sk[F_sk(I_i)]}

where λ_i^sk represents the parameter θ_i, h_sk is a sigmoid function, and Z(λ_i^sk) is the normalization constraint. The model therefore favors image blocks whose edge responses are stronger than the background distribution; 
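As an illustration of the Gabor-based sketch response F_sk, the toy below projects a patch onto a sine and a cosine Gabor base at one position and orientation and sums the squared magnitudes. The Gabor construction here (frequency, envelope width) is a textbook form chosen for the sketch, not the patent's exact bases.

```python
# Toy Gabor sketch response: F_sk = |<I, G_cos>|^2 + |<I, G_sin>|^2 for one
# orientation theta; stripes aligned with theta give a strong response.
import math

def gabor_base(size, theta, freq=0.25, sigma=2.0, phase="cos"):
    c = size // 2
    base = []
    for y in range(size):
        row = []
        for x in range(size):
            # rotate coordinates so the carrier varies along orientation theta
            xr = (x - c) * math.cos(theta) + (y - c) * math.sin(theta)
            yr = -(x - c) * math.sin(theta) + (y - c) * math.cos(theta)
            env = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            carrier = math.cos if phase == "cos" else math.sin
            row.append(env * carrier(2 * math.pi * freq * xr))
        base.append(row)
    return base

def line_drawing_response(patch, theta):
    n = len(patch)
    gc = gabor_base(n, theta, phase="cos")
    gs = gabor_base(n, theta, phase="sin")
    dc = sum(patch[y][x] * gc[y][x] for y in range(n) for x in range(n))
    ds = sum(patch[y][x] * gs[y][x] for y in range(n) for x in range(n))
    return dc * dc + ds * ds
```

Summing the squared sine and cosine projections makes the response invariant to the phase of the edge, which is why the pair of bases is used.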
12) Texture features are modeled with a simplified histogram of oriented gradients (HOG), whose 6 feature dimensions represent different gradient directions. h_j^txt denotes the j-th direction of the HOG, F_txt(I_i) denotes the descriptor of the i-th feature I_i, and its average over all positive samples is also recorded. The probability model of the feature is expressed as: 

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^txt)) exp{λ_i^txt Σ_j h_j^txt[F_txt(I_i)]}

where λ_i^txt represents the parameter θ_i. The model can be seen to favor sets of feature image blocks whose responses are relatively concentrated; 
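The simplified 6-direction HOG above can be sketched as a magnitude-weighted 6-bin orientation histogram of finite-difference gradients. Only the 6-bin count comes from the text; the binning of [0, π) and the normalization are assumptions.

```python
# Minimal 6-bin HOG texture descriptor: gradient orientations in [0, pi)
# are binned into 6 ranges, weighted by gradient magnitude, then normalized.
import math

def hog6(patch):
    bins = [0.0] * 6
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi   # orientation, sign-folded
            b = min(5, int(ang / (math.pi / 6)))
            bins[b] += mag
    total = sum(bins)
    return [v / total for v in bins] if total else bins
```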
13) The color feature uses simple pixel intensity as its descriptor, with F_x^fl(I_i) the filter response at position x. Pixel brightness values are quantized into statistical intervals, so the model reduces to: 

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^fl)) exp{Σ_j λ_i^fl h_xj^fl[F_xj^fl(I_i)]}
Local feature combinations with strong discriminative power are obtained by combining small, similar image features. The image is first over-segmented into small image blocks, from which statistical sketch, texture and color features are extracted. To obtain feature combinations effectively, an iterative region-growing and model-learning algorithm is used: the feature model is continually updated and the feature combination region is grown iteratively, finally yielding local feature combinations with strong discriminative power; 
On the basis of this representation, the matching problem of moving targets in the time and space domains is modeled as hierarchical graph matching on a graph representation: the extracted mixed image template features serve as graph nodes, a graph structure is built between frames, and the edge relations between nodes are defined based on the similarity between features, their spatial positions, and the object class to which the features belong; 
Let I_s and I_t denote the source image and the target image, and U, V the mixed template feature sets in I_s and I_t respectively. Each feature point u ∈ U carries two labels: a layer label l(u) ∈ {1, 2, ..., K} and a matching candidate label. Together with the candidate set C of the features in the source image with the highest matching degrees, these form the vertex set of the graph structure, and the edge set is built as E = E+ ∪ E-. Negative edges connect mutually exclusive candidates, between which a "repulsive force" is defined; positive edges connect spatially adjacent and non-exclusive candidate feature points, for which a measure of the tightness of their cooperation is defined in terms of the spatial distance between v_i and v_j; 
The graph structures G_S and G_T of the source and target images are divided into K+1 layers, where K is the number of objects in the source image. Taking G_S as an example, the division is expressed as Π = {g_0, g_1, ..., g_K}, where g_k is a subgraph of G_S whose vertex set is denoted U_k; similarly, the vertex set of the corresponding subgraph of G_T is denoted V_k. The matching relation between G_S and G_T is then expressed as Ψ = {ψ_k}, and the matchings between subgraphs are assumed to be mutually independent, so that the overall matching probability factorizes over the subgraph pairs; 
The similarity measure between a matched subgraph pair (g_k, g_k') is defined with geometric transformation and appearance measures and denoted φ_k. In summary, the solution of the graph structure matching problem is configured as: 

W = (K, Π = {g_0, g_1, ..., g_K}, Ψ = {ψ_k}, Φ = {φ_k})
Under the Bayesian framework, the graph structure matching problem is described as maximization of the posterior probability: 

W* = arg max p(W | G_S, G_T) = arg max p(W) p(G_S, G_T | W)
The above formula is solved by the Markov chain Monte Carlo (MCMC) method; for computational efficiency, the solver jumps efficiently within the solution space and converges rapidly to the globally optimal solution, thereby achieving inter-frame feature point matching. 
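A toy Metropolis-style sampler conveys the flavor of the MCMC search described above: a simple posterior scores an assignment of source features to target candidates, and random reassignment proposals are accepted with probability min(1, p_new/p_old). The patent's sampler operates over layered graph partitions with efficient jump moves; this is only a minimal stand-in with an invented similarity table.

```python
# Metropolis-style search over feature assignments: propose a random
# reassignment, accept uphill moves always and downhill moves with
# probability exp(score_new - score_old); track the best state seen.
import math, random

def score(assign, sim):
    # log of a toy posterior: product of per-feature similarities
    return sum(math.log(sim[i][assign[i]]) for i in range(len(assign)))

def mcmc_match(sim, steps=2000, seed=0):
    rng = random.Random(seed)
    n, m = len(sim), len(sim[0])
    assign = [rng.randrange(m) for _ in range(n)]
    cur_s = score(assign, sim)
    best, best_s = list(assign), cur_s
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(m)
        old = assign[i]
        assign[i] = j
        new_s = score(assign, sim)
        if new_s >= cur_s or rng.random() < math.exp(new_s - cur_s):
            cur_s = new_s
            if cur_s > best_s:
                best, best_s = list(assign), cur_s
        else:
            assign[i] = old
    return best
```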
The region-competition segmentation method in step 3) above is as follows: 
On the basis of stable inter-frame matching relations, the advantages of the region competition mechanism in video segmentation are exploited. Using the hierarchical graph structure image matching algorithm, the matching relation between the features of the previous frame and the current frame is determined, so that the semantic information of the previous frame is propagated into the current frame; the region-competition segmentation algorithm then divides the current frame into multiple semantic regions according to the feature information of each matched region; 
Given an image I, the corresponding image segmentation solution is defined as: 

W = {(R_1, R_2, ..., R_N), (θ_1, θ_2, ..., θ_N), (I_1, I_2, ..., I_N)}

where R_i denotes a segmented region of uniform characteristics, the regions covering the image without overlap; θ_i denotes the parameters of the feature probability distribution model of region R_i, and I_i denotes the label of region R_i; 
According to the matching relation of the features in the previous and current frames, the number N of segmentation regions can be determined. Let S = {S_1, S_2, ..., S_N} be the set of small feature regions corresponding to the regions. For each region R_i, the initial model parameters θ_i are estimated from the small region S_i occupied by its features, giving the initial posterior probability P(θ_i | I(x, y)). Following the MDL principle, maximizing the posterior probability is converted into minimizing an energy function: 

-log P(W | I) = E[Γ, {θ_i}] = Σ_{i=1..N} -∬_{R_i} log P(θ_i | I(x, y)) dx dy
where Γ denotes the set of boundary contours of the regions R_i. The parameters {θ_i} and Γ are estimated stage by stage in an iterative manner, alternating between the two stages and continually decreasing the energy function, so that the final segmentation of the entire image is progressively inferred; 
During region competition, each region continually updates its feature probability distribution model while fighting for the ownership of pixels according to the steepest-descent principle, updating its boundary contour so that each region keeps expanding its scope, finally yielding the image segmentation result of the current frame; 
The specific iterative steps are as follows. In the first stage, Γ is fixed and {θ_i} is estimated from the current region segmentation state; the maximum likelihood estimate of θ_i under the current state is taken as its optimal solution θ_i*, which minimizes the cost of describing each region, so the energy function becomes: 

θ_i* = arg max_{θ_i} { ∬_{R_i} log P(θ_i | I(x, y)) dx dy },  ∀ i ∈ [1, N]
In the second stage, with {θ_i} known, steepest descent is performed on Γ. To reach the minimum of the energy function quickly, the steepest-descent equation of motion is solved for the boundaries Γ of all regions. For any point v on the boundary contour Γ: 

dv/dt = -δE(Γ, {θ_i}) / δv = Σ_{k ∈ Q(v)} log P(θ_k | I(v)) · n_k(v)

where n_k(v) is the normal direction vector of Γ_k at the point v; which region the point v belongs to depends on how well v fits the feature probability distribution model of each region; 
To determine the membership relation between each pixel and the regions, the image segmentation algorithm based on the competition mechanism proceeds as follows: 
In the initialization phase, the initial parameters of each class model are estimated from the matched feature image blocks, the boundary points of all feature image blocks are added to a pending queue, and the posterior probabilities of all boundary points belonging to each class are computed; 
In the loop iteration phase, the boundary point i with the current steepest energy descent is selected from the pending queue, and all boundaries containing boundary point i are updated; then, under the current segmentation state, the model parameters of each region are recomputed by maximum likelihood estimation, and with the newly obtained feature distribution models of the regions, the posterior probabilities of all boundary points belonging to each class are recomputed; 
In this way, the boundary point with the current steepest energy descent is repeatedly selected from the pending queue to update the corresponding boundary, while the feature distribution probability models of the regions are updated in good time according to the current segmentation state; the regions restrict one another and compete for the ownership of image areas until the energy function converges, so that the image is divided into multiple regions. 
In step 4) of the stylization method of the video stylization module, video stylization is based on the interactive video semantic segmentation module, and the choice of paintbrush is determined solely by the material corresponding to the identified object region; 
All the above paintbrushes are based on a large number of typical brush strokes drawn on paper by professional artists, which are then scanned and parameterized to establish a brush-stroke library. Each image region is drawn by first underpainting with a big brush, then gradually decreasing brush size and opacity to depict the details of the object finely. Drawing follows an edge-first, then-interior strategy: each image layer is drawn starting from the edges, with the brush first aligned along the edges of the line drawing and then aligned according to the flow field; 
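The coarse-to-fine pass described above (underpaint with a big brush, then shrink brush size and opacity for the detail passes) might be scheduled as below; the halving schedule and the numeric floors are illustrative assumptions, not values from the source.

```python
# Illustrative coarse-to-fine schedule: each pass halves the brush size and
# reduces opacity, ending when the brush reaches a minimum detail size.
def brush_schedule(start_size=64, start_alpha=1.0, min_size=4):
    passes = []
    size, alpha = start_size, start_alpha
    while size >= min_size:
        passes.append((size, round(alpha, 3)))
        size //= 2
        alpha *= 0.8
    return passes
```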
In video rendering, brush strokes are propagated with the thin-plate spline interpolation technique to guarantee the temporal stability of the paintbrushes. In addition, during propagation, stroke deletion and addition mechanisms are designed by computing the area of the stroke region, and a simulated damping spring system is used to reduce the "jitter" effect of the rendering result. 
The key-frame non-photorealistic rendering method based on semantic parsing in step 4) of the stylization method of the video stylization module is as follows: 
How to design brush-stroke models of different artistic styles is one of the focuses of video stylization: works of different artistic expression differ from one another in their brushwork. The basic drawing strategy in video stylization is to select suitable brush strokes for drawing based on image content. The brush-stroke library is built from a large number of typical strokes drawn on paper by professional artists, which are scanned and parameterized. A paintbrush B_n to be drawn contains the following information: the class information I_n of the brush, the placement area range Λ_n, the color mapping C_n, the opacity α_n, the height field H_n and the control points {P_ni}, that is: 

B_n = {I_n, Λ_n, C_n, α_n, H_n, {P_ni}}
When designing the brush-stroke model, not only low-level information such as stroke shape and texture is considered, but its high-level semantic information is also incorporated, so that during rendering each interpreted region of the image or video has suitable "pens" to rely on. When choosing a stroke, the class of the interpreted region is used as a keyword to quickly select a batch of strokes of the same class from the stroke library, from which one stroke is then chosen at random. 
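A sketch of the brush record B_n = {I_n, Λ_n, C_n, α_n, H_n, {P_ni}} and the category-keyed random pick described above; the field contents and the `pick_brush` helper are stand-ins, since a real library would hold scanned, parameterized stroke data.

```python
# Toy brush record mirroring B_n's fields, plus category-keyed random choice.
import random
from dataclasses import dataclass, field

@dataclass
class Brush:
    category: str                 # I_n: class information
    region: tuple                 # Lambda_n: placement area range (x, y, w, h)
    color_map: str                # C_n: color mapping
    alpha: float                  # alpha_n: opacity
    height_field: list = field(default_factory=list)    # H_n
    control_points: list = field(default_factory=list)  # {P_ni}

def pick_brush(library, category, rng=None):
    """Randomly pick one brush whose class matches the interpreted region."""
    rng = rng or random.Random(0)
    candidates = [b for b in library if b.category == category]
    return rng.choice(candidates) if candidates else None
```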
" alignment " principle in being drawn for simulation oil painting, uses for reference original simple model theory, in each region RiIt is interior, calculate its original brief figure SKiExpression.Brief figure is made up of the conspicuousness primitive of a group echo body surface feature, such as the spot on clothes, lines, gauffer;In render process, different paintbrush will be covered on these primitives to produce desired artistic effect;Interpret region Ri, Ri∈ΛiIt is divided into the line drawing part for describing line drawing
Figure BDA0000095342860000091
And for describing the non-line drawing part with identical structural region
Figure BDA0000095342860000092
The direction field Θ_i of R_i is defined as: 

Θ_i = {θ(x, y) | θ(x, y) ∈ [0, π), ∀(x, y) ∈ Λ_i}

where the initial values of the direction field θ_i are the gradient directions of the sketch part; the directions are then propagated to the non-sketch region with a diffusion equation; 
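The diffusion of the direction field into the non-sketch region can be sketched as a discrete Laplace/heat iteration in which known orientations stay fixed and unknown pixels repeatedly take the mean of their 4-neighbours. Real orientations live in [0, π) and need angle-aware averaging, which this scalar toy omits.

```python
# Scalar diffusion sketch: seed values (known=True) are Dirichlet sources;
# every other cell is repeatedly replaced by the mean of its 4-neighbours.
def diffuse_field(theta, known, iters=200):
    h, w = len(theta), len(theta[0])
    for _ in range(iters):
        nxt = [row[:] for row in theta]
        for y in range(h):
            for x in range(w):
                if known[y][x]:
                    continue
                nbrs = [theta[yy][xx]
                        for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= yy < h and 0 <= xx < w]
                nxt[y][x] = sum(nbrs) / len(nbrs)
        theta = nxt
    return theta
```

With a single seed value the field relaxes to that value everywhere, mirroring how sketch-edge directions spread across the homogeneous interior.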
The rendering process for a key frame is a continuous process of choosing and placing brush strokes. Taking an interpreted region R_i as an example, its non-sketch part is rendered first and its sketch part afterwards, which ensures that where rendered regions overlap, the strokes of the sketch part lie on the upper layer. In the non-sketch part, an arbitrary unrendered pixel region is chosen; taking the center of that region as the starting point, diffusion proceeds along the direction field to both sides, generating a flow-pattern region. With the axis of this region as the reference line, the chosen paintbrush is transformed into the flow-pattern region so that the stroke axis is aligned with the region axis; the rendering of the sketch part of a region is similar. 
The brush-stroke propagation method for sequence frames in step 5) of the stylization method of the video stylization module is as follows: 
The rendering of non-key frames is obtained by "propagating" the rendering result of the key frames, based on the space-time correspondence of the interpreted regions. During propagation, as the interpreted regions change more and more, brush strokes may gradually leak outside a region, and blank areas to be rendered may appear over time. Stroke addition and deletion mechanisms must therefore be considered while propagating the stroke map; otherwise the rendering result exhibits jitter. The propagation, addition and deletion of brush strokes work as follows: 
(a) Stroke propagation: let R_i(t) denote an interpreted region of key frame t of the video and R_i(t+1) its corresponding region at time t+1, with image regions Λ_i(t) and Λ_i(t+1) respectively. Let P_ij(t) and P_ij(t+1) denote the dense matching points of Λ_i(t) and Λ_i(t+1) in the time domain (computed during video interpretation). Assume that R_i(t+1) can be obtained by a non-rigid transformation of R_i(t). When propagating strokes, the matching points P_ij(t) on Λ_i(t) should map to the matching points P_ij(t+1) of the new image region Λ_i(t+1) in frame t+1. Based on these considerations, the thin-plate spline (TPS) interpolation model is chosen: it maps the key points P_ij(t) in Λ_i(t) to the matching points P_ij(t+1) in Λ_i(t+1), and for the remaining, non-key pixels in Λ_i(t), TPS deforms the pixel grid of Λ_i(t) elastically (non-rigidly) by minimizing an energy function. 
(b) Stroke deletion: after a paintbrush has been propagated through the video, when occlusion occurs, or when a stroke has been propagated for too many frames, the region corresponding to the paintbrush can become smaller and smaller; such paintbrushes are therefore removed once their corresponding region area falls below a given threshold. Likewise, a propagated paintbrush is deleted when it falls outside the boundary of its corresponding region. 
(c) Stroke addition: when a new semantic region appears, or an existing semantic region keeps growing (e.g. clothes unfolding), new paintbrushes must be added to cover the newly emerged areas. To fill small gaps between paintbrushes, it suffices to simply adjust the size and position of the adjacent paintbrushes. If an area not covered by any paintbrush grows beyond a given threshold, the system automatically creates a new paintbrush to cover it. Even so, a gap is not painted the moment it first appears: a comparatively high threshold is set, and rendering of the newly emerged areas is postponed until they have grown sufficiently large. A general brush-placement algorithm then fills the gaps that have reached the threshold, and finally these new paintbrushes are propagated and transformed backwards to fill the void areas that appeared earlier but were not rendered. This filling process avoids frequently transforming paintbrushes backwards, and small scrappy paintbrushes can be merged into larger ones, reducing flicker and other artificial visual artifacts. Similarly, since new paintbrushes are added at the bottom layer, they are drawn beneath the existing paintbrushes, which further reduces visual flicker. 
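The TPS propagation of (a) can be sketched with a generic thin-plate spline interpolation: solve a small linear system so that the mapping reproduces the control-point correspondences exactly, then evaluate it at any pixel. This is the standard TPS formulation (basis U(r) = r² log r², a constant multiple of the usual r² log r that the weights absorb), not the patent's full propagation pipeline.

```python
# Thin-plate spline interpolation: fit weights w and affine part a so that
# f(x, y) = a0 + a1*x + a2*y + sum_i w_i * U(|P_i - (x, y)|) interpolates
# the control points exactly; one fit per output coordinate.
import math

def _U(r2):
    return r2 * math.log(r2) if r2 > 0 else 0.0   # U(0) = 0 by convention

def _solve(A, b):
    # dense Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tps_fit(src, dst_coord):
    """Fit one output coordinate of a TPS mapping 2-D points src -> dst_coord."""
    n = len(src)
    A = [[0.0] * (n + 3) for _ in range(n + 3)]
    b = [0.0] * (n + 3)
    for i, (xi, yi) in enumerate(src):
        for j, (xj, yj) in enumerate(src):
            A[i][j] = _U((xi - xj) ** 2 + (yi - yj) ** 2)
        A[i][n], A[i][n + 1], A[i][n + 2] = 1.0, xi, yi
        A[n][i], A[n + 1][i], A[n + 2][i] = 1.0, xi, yi
        b[i] = dst_coord[i]
    return _solve(A, b)

def tps_eval(src, params, x, y):
    n = len(src)
    v = params[n] + params[n + 1] * x + params[n + 2] * y
    for i, (xi, yi) in enumerate(src):
        v += params[i] * _U((x - xi) ** 2 + (y - yi) ** 2)
    return v
```

For purely affine correspondences (e.g. a translation) the warp weights come out zero, so non-key pixels move rigidly; genuinely non-rigid correspondences bend the pixel grid smoothly, which is the elastic deformation the text describes.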
The damping brush system for jitter suppression in step 6) of the stylization method of the video stylization module is as follows: 
The final step of stylized video rendering is the stabilization operation: paintbrushes adjacent in the time and space domains are connected with springs to simulate a damping system, and by minimizing the energy of this system the jitter-removal effect is achieved; 
For the i-th paintbrush at time t, A_i,t = (x_i,t, y_i,t, s_i,t) denotes its geometric attributes of center coordinates and size, with its initial value denoted A_i,t^0. The energy function of the damping brush system is defined as: 

E = E_data + λ_1 E_smooth1 + λ_2 E_smooth2
where λ_1 and λ_2 are weights, λ_1 = 2.8 and λ_2 = 1.1; 
The first term constrains the paintbrush positions not to deviate too far from their initial positions: 

E_data = Σ_{i,t} (A_i,t - A_i,t^0)²
The second term is a smoothness constraint on paintbrush i in the time domain: 

E_smooth1 = Σ_{i,t} (A_i,t+1 - 2A_i,t + A_i,t-1)²
The third term imposes smoothness constraints on adjacent paintbrushes in both the time and space domains. Let N(i, t) denote the set of paintbrushes adjacent to the i-th paintbrush at time t; for any adjacent paintbrush j ∈ N(i, t), the relative distance and size differences between them are written ΔA_i,j,t = A_i,t - A_j,t, and the smoothness term is defined as: 

E_smooth2 = Σ_{i,j,t} (ΔA_i,j,t+1 - 2ΔA_i,j,t + ΔA_i,j,t-1)²
The energy minimization problem is solved with the Levenberg-Marquardt algorithm. 
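For intuition, the damping energy above can be minimized for a single scalar brush attribute over time with plain gradient descent (the source uses Levenberg-Marquardt; gradient descent is a deliberate stand-in, and the spatial term E_smooth2 is omitted for brevity). The data term keeps each frame near its initial value; the smoothness term penalizes temporal acceleration.

```python
# Minimize E = sum_t (a_t - a0_t)^2 + lam * sum_t (a_{t+1} - 2a_t + a_{t-1})^2
# by gradient descent; each second difference d2 contributes (+2, -4, +2)*lam*d2
# to the gradients of the three samples it touches.
def smooth_track(a0, lam=2.8, iters=2000, step=0.01):
    a = list(a0)
    n = len(a)
    for _ in range(iters):
        grad = [2.0 * (a[t] - a0[t]) for t in range(n)]
        for t in range(1, n - 1):
            d2 = a[t + 1] - 2.0 * a[t] + a[t - 1]
            grad[t - 1] += 2.0 * lam * d2
            grad[t] -= 4.0 * lam * d2
            grad[t + 1] += 2.0 * lam * d2
        a = [a[t] - step * grad[t] for t in range(n)]
    return a
```

On a deliberately jittery track the minimizer flattens the oscillation toward its mean while the data term keeps the overall level, which is exactly the trade-off the spring energy encodes.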
As above, λ_1 = 2.8 and λ_2 = 1.1. 
By studying the segmentation, recognition and space-time correspondence of video, the present invention explores a semantics-driven video stylized rendering technique that achieves the expressive effects art requires. Starting from the semantic analysis of the input video, a key-frame-based interactive mode provides sufficient prior information for video segmentation while minimizing the burden on users; feature point correspondences are then established between frames, and the interactive information on the key frames is propagated to subsequent frames with the region-competition algorithm, so that user semantic information can fully guide accurate video segmentation. Different brush-stroke libraries are created for different styles. During rendering, the key frames are first rendered according to the semantic information; then, with the space-time relations of the semantic regions as constraints, the strokes of the key frames are propagated into the sequence frames by spatial transformation, effectively suppressing the "jitter" effect of the rendering result. The invention further proposes a system scheme convenient for interactive user creation, improving the applicability of this work. The present invention can be widely applied in industries such as advertising, education and entertainment, and has an important application background. 
Embodiment
Embodiment: 
The interactive video stylized rendering method of the present invention based on video interpretation comprises an interactive video semantic segmentation module and a video stylization module, and the segmentation method of the interactive video semantic segmentation module comprises the following steps: 
1) interactive segmentation and automatic identification of key-frame images; 
2) dense feature point matching between key frames; 
3) a region-competition segmentation algorithm. 
The stylization method of the video stylization module comprises the following steps: 
4) non-photorealistic rendering of key frames based on semantic parsing; 
5) a brush-stroke propagation method for sequence frames; 
6) a damping brush system for jitter suppression. 
The two modules are used in turn when stylizing a video: semantic segmentation is first carried out on the video with the interactive semantic segmentation module, and the video stylization module then performs stylized rendering on the segmented video. The interactive segmentation and automatic identification method for key-frame images in step 1) of the interactive video semantic segmentation module is as follows: 
In the present invention, the relatively mature recognition technique TextonBoost and the interactive segmentation method GraphCut are combined to perform interactive semantic segmentation and recognition of key-frame images, obtaining the object regions in the image together with their mutual layering and occlusion relations. The system divides the segmented semantic regions into 12 classes according to their material properties, including sky, water, land, rock, hair, skin, clothes, etc., as shown in Table 1. 
Table 1: the 12 material classes of semantic regions
Mountain          Water            Rock/building         Leaf/grove
Skin/hide         Hair/fur         Flower/fruit          Sky/cloud
Clothes           Trunk/branch     Abstract background   Wood/plastic
In practice, the invention uses three principal features for training and recognition: texture, color distribution, and position information. Given a region image X, the conditional probability of its class c is defined as:

log P(c | X, θ) = Σ_i [ Ψ_i(c_i, X; θ_Ψ) + π(c_i, X; θ_π) + λ(c_i, X; θ_λ) ] − log Z(θ, X)

The terms on the right are the texture potential, the color potential, the position potential, and the normalization term, respectively.
The texture potential is defined as Ψ_i(c_i, X; θ_Ψ) = log P(c_i | X, i), where P(c_i | X, i) is the normalized distribution given by the Boost classifier.
The color potential is defined as π(c_i, X; θ_π) = log Σ_k θ_π(c_i, k) P(k | x_i). Here the invention represents the color model with Gaussian Mixture Models (GMMs) in the CIELab color space; for a pixel color x_i in the given image, the conditional probability is P(x_i | k) = N(x_i; μ_k, Σ_k), where μ_k and Σ_k are the mean and covariance of the k-th color cluster.
The position potential is defined as λ(c_i, X; θ_λ) = log θ_λ(c_i, i). Compared with the two potentials above, the position potential is relatively weak; under this definition, the class label of an image pixel depends only on its absolute position in the image.
It is trained using the method for 12 class materials, all pixels in the probability for each classification for giving each pixel in an image-region, last statistical regions is then calculated using above formula, the classification in each region is determined by the way of ballot.During stylized rendering, the material that the selection of paintbrush is identified by object area is determined, to realize that automatic render lays the foundation. 
2) Dense feature point matching between key frames
After the semantic information on the key frames is obtained, the invention needs an inter-frame matching algorithm to propagate that semantic information effectively to the sequence frames.
The invention first proposes a mixed image-template feature combining line drawing, texture, and color, providing a rich feature set and representation for the image matching problem.
(a) The line-drawing feature is represented with Gabor bases: F^sk(I_i) = ||⟨I_i, G_cos,x,θ⟩||² + ||⟨I_i, G_sin,x,θ⟩||², where G_sin,x,θ and G_cos,x,θ are the sine and cosine Gabor bases at position x with orientation θ. Its feature probability distribution is expressed as:

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^sk)) exp{ λ_i^sk h^sk[F^sk(I_i)] }

where λ_i^sk represents the parameter θ_i, h^sk is a sigmoid function, and Z(λ_i^sk) is the normalization constraint. The model thus encourages edges with responses stronger than the background distribution.
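The Gabor energy in (a) can be sketched as below. The kernel parameters (size, frequency, bandwidth) are illustrative assumptions — the patent does not specify them — and the toy comparison only shows the qualitative behavior the text describes: an oriented edge scores far higher than a flat background patch.

```python
import numpy as np

def gabor_pair(size=15, theta=0.0, freq=0.2, sigma=3.0):
    """Sine and cosine Gabor bases at orientation theta (parameter values are
    illustrative assumptions)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return (envelope * np.cos(2 * np.pi * freq * xr),
            envelope * np.sin(2 * np.pi * freq * xr))

def sketch_response(patch, theta):
    """F_sk(I) = ||<I, G_cos>||^2 + ||<I, G_sin>||^2 for one patch/orientation."""
    g_cos, g_sin = gabor_pair(size=patch.shape[0], theta=theta)
    return np.sum(patch * g_cos) ** 2 + np.sum(patch * g_sin) ** 2

# An oriented step edge responds far more strongly than a flat patch,
# which is what lets the model favor true edges over the background.
edge = np.tile((np.arange(15) > 7).astype(float), (15, 1))
flat = np.full((15, 15), 0.5)
r_edge, r_flat = sketch_response(edge, 0.0), sketch_response(flat, 0.0)
print(r_edge > r_flat)
```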
(b) The texture feature is modeled with a simplified gradient orientation histogram (HOG), whose 6 feature dimensions represent different gradient directions. F_j^txt denotes the j-th direction of the HOG, F^txt(I_i) denotes the corresponding descriptor of the i-th feature I_i, and h_j^txt is built from the average of F^txt(I_i) over all positive samples. The invention expresses the probability model of the feature as:

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^txt)) exp{ λ_i^txt Σ_j h_j^txt[F^txt(I_i)] }

where λ_i^txt is the parameter θ_i. The model evidently encourages sets of feature image blocks with concentrated responses.
(c) The color feature uses simple pixel brightness as its descriptor, with F_xj^fl(I_i) a filter at position x. The invention quantizes pixel brightness values into statistical bins, and the model then reduces to:

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^fl)) exp{ Σ_j λ_i^fl h_xj^fl[F_xj^fl(I_i)] }
By combining these small image features, the invention obtains local feature combinations with strong discriminative power. The image is first over-segmented to obtain a number of small image blocks, from which the statistical features describing line drawing, texture, and color are extracted. To obtain feature combinations effectively, an iterative region-growing and model-learning algorithm is used: by continually updating the feature model and iteratively growing the feature combination regions, local feature combinations with strong discriminative power are finally obtained.
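The simplified 6-bin HOG texture descriptor of (b) above can be sketched as follows; the bin layout and the magnitude-weighted normalization are assumptions made here for illustration.

```python
import numpy as np

def hog6(patch):
    """Simplified 6-bin gradient-orientation histogram: each of the 6 feature
    dimensions accumulates gradient magnitude for one orientation range."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)             # orientation in [0, pi)
    bins = np.minimum((ang / (np.pi / 6)).astype(int), 5)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=6)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A patch with a pure horizontal gradient puts all its mass in one bin,
# i.e. the descriptor response is "concentrated" as the text puts it.
patch = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
h = hog6(patch)
print(h.round(3))
```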
On the basis of this representation, the invention models the matching of a moving target in the time and space domains as a hierarchical graph matching problem. The extracted mixed image-template features serve as graph nodes, a graph structure is built between frames, and the edge relations between graph nodes are defined based on the similarity between features, their spatial positions, and the object classes the features belong to.
Let I_s and I_t denote the source and target images, and U, V the sets of mixed template features in I_s and I_t respectively. Each feature point u ∈ U carries two labels: a level label l(u) ∈ {1, 2, ..., K} and a matching candidate label. The vertex set of the graph is built from the candidate set C of high-matching-degree candidates for each feature point in the source image, and the edge set is built as E = E⁺ ∪ E⁻. Negative edges connect mutually exclusive candidates, for which a "repulsive force" is defined; positive edges connect spatially adjacent, non-exclusive candidate feature points, with w(v_i, v_j) denoting how tightly they cooperate and d(v_i, v_j) the spatial distance between v_i and v_j.
The graph structures G_S and G_T of the source and target images are partitioned into K+1 layers, where K is the number of objects in the source image. Taking G_S as an example, the partition is expressed as ∏ = {g_0, g_1, ..., g_K}, where g_k is a subgraph of G_S whose vertex set is denoted U_k; similarly, the corresponding vertex set of G_T is denoted V_k. The matching relation between G_S and G_T is then expressed as Ψ = {Φ_k}. Assuming the matchings between subgraphs are mutually independent, the invention defines a similarity measure between each matched subgraph pair (g_k, g_k'), denoted Φ_k, with geometric transformation and appearance measures. In summary, the solution of the graph matching problem is configured as:

W = (K, ∏ = {g_0, g_1, ..., g_K}, Ψ = {Φ_k})

Under the Bayesian framework, the invention describes the graph matching problem as maximizing a posterior probability:

W* = arg max P(W | G_S, G_T) = arg max P(W) P(G_S, G_T | W)
The invention solves the above formula with the Markov chain Monte Carlo (MCMC) method. For efficient computation, the invention explores a cluster sampling strategy that jumps efficiently through the solution space and converges rapidly to the globally optimal solution, achieving inter-frame feature point matching.
3) Region-competition segmentation algorithm
On the basis of stable inter-frame matching, the invention exploits the advantage of the region-competition mechanism in video segmentation and proposes a region-competition propagation algorithm based on frame matching. With the hierarchical graph-structure image matching algorithm, the invention determines the matching relation between the features of the previous frame and the current frame and propagates the semantic information of the previous frame into the current frame; the region-competition segmentation algorithm then divides the current frame into multiple semantic regions according to the feature information of each matched region.
Given an image I, the corresponding image segmentation solution is defined as:

W = {(R_1, R_2, ..., R_N), (θ_1, θ_2, ..., θ_N), (l_1, l_2, ..., l_N)}

where R_i denotes a segmented region of uniform characteristics, θ_i the parameters of the feature probability model of region R_i, and l_i the label of region R_i.
The number of segmented regions N is determined from the matching relations of features in the preceding and current frames. Let S = {S_1, S_2, ..., S_N} be the set of small feature regions corresponding to the regions; for each region R_i, the initial model parameters θ_i are estimated from the small region S_i occupied by its features, giving the initial posterior probability P(θ_i | I(x, y)). According to the MDL principle, maximizing the posterior probability is converted into minimizing an energy function:

−log P(W | I) = E[Γ, {θ_i}] = Σ_{i=1}^{N} −∬_{R_i} log P(θ_i | I(x, y)) dx dy

where Γ = ∪_i Γ_i and Γ_i denotes the boundary contour of region R_i. The invention estimates the parameters {θ_i} and Γ in stages in an iterative manner, alternating between the two stages and decreasing the energy function at each stage, so that the final segmentation of the whole image is progressively inferred.
During region competition, each region continually updates its feature probability model while fighting for the ownership of pixels according to the steepest-descent principle and updating its boundary contour; the extent of each region grows continually, finally yielding the image segmentation result of the current frame.
The specific iterative steps are as follows. In the first stage, Γ is fixed and {θ_i} is estimated from the current segmentation state; the maximum likelihood estimate of θ_i under the current state is taken as its optimal solution θ_i*, minimizing the cost of describing each region, so the energy function becomes:

θ_i* = arg max_{θ_i} ∬_{R_i} log P(θ_i | I(x, y)) dx dy,  ∀ i ∈ [1, N]
In the second stage, with {θ_i} known, steepest descent is performed on Γ. To reach the minimum of the energy function quickly, the invention solves the steepest-descent equation of motion for the boundaries Γ of all regions. For any point v⃗ on the boundary contour Γ:

dv⃗/dt = −δE(Γ, {θ_i}) / δv⃗ = Σ_{k ∈ Q(v⃗)} log P(θ_k | I(v⃗)) · n⃗_k(v⃗)

where Q(v⃗) is the set of regions whose boundaries meet at v⃗, and n⃗_k(v⃗) is the direction vector of the boundary Γ_k at the point v⃗. Which region the point v⃗ belongs to depends on how well v⃗ fits the feature probability models of the regions.
To determine the membership relation between each pixel and the regions, the invention proposes an image segmentation algorithm based on the competition mechanism that completes the segmentation rapidly. The process of this algorithm is described as follows:
In the initialization phase, the initial parameters of each class model are estimated from the matched feature image blocks, the boundary points of all feature image blocks are added to a pending queue, and the posterior probabilities of all boundary points belonging to each class are computed.
In the loop iteration phase, the boundary point i with the current steepest energy descent is selected from the pending queue, and all boundaries containing boundary point i are updated; then, under the current segmentation state, the model parameters of each region are re-estimated by maximum likelihood; with the newly obtained feature distribution models of the regions, the posterior probabilities of all boundary points belonging to each class are recomputed.
In this way, the boundary point with the steepest current energy descent is repeatedly selected from the pending queue to update the corresponding boundary, while the feature distribution probability models of the regions are updated in good time according to the current segmentation state. The regions constrain one another and compete for the ownership of image areas until the energy function converges, so that the image is divided into multiple regions.
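The competition loop above can be illustrated with a 1-D toy version. This is a simplifying sketch, not the patented algorithm: the patent operates on 2-D image regions with the richer feature models described earlier, whereas here each region is a 1-D Gaussian over signal values, the seeds stand in for the matched feature blocks, and all names are chosen for illustration.

```python
import heapq
import numpy as np

def region_compete_1d(signal, seeds, n_iter=100):
    """Two seed regions alternately re-estimate Gaussian models (stage 1) and
    claim the adjacent unowned pixel they explain best (stage 2), until every
    pixel is owned."""
    owner = np.full(len(signal), -1)
    for r, idx in enumerate(seeds):
        owner[idx] = r

    def loglik(x, vals):
        mu, sd = np.mean(vals), np.std(vals) + 1e-3
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

    for _ in range(n_iter):
        # Stage 1: re-estimate each region's model from its current pixels.
        models = [signal[owner == r] for r in range(len(seeds))]
        # Stage 2: queue boundary candidates, pop the best (steepest descent).
        heap = []
        for i in np.where(owner == -1)[0]:
            for j in (i - 1, i + 1):
                if 0 <= j < len(signal) and owner[j] >= 0:
                    r = owner[j]
                    heapq.heappush(heap, (-loglik(signal[i], models[r]), i, r))
        if not heap:
            break
        _, i, r = heapq.heappop(heap)
        owner[i] = r
    return owner

sig = np.array([0.1, 0.0, 0.2, 0.1, 0.9, 1.0, 0.8, 1.1])
result = region_compete_1d(sig, seeds=[[0, 1], [6, 7]])
print(result)
```

Both regions grow toward the step in the signal and meet there, mirroring how the full algorithm lets regions compete for pixel ownership until the energy converges.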
2. Video stylization module
Video stylization builds on the interactive video semantic segmentation module. The selection of brushes is determined solely by the material of the identified object region. The brushes of the system are all based on a large number of typical strokes drawn on paper by professional artists, which are then scanned and parameterized to build the stroke library. Each image region is painted by first laying a ground with a large brush, then gradually decreasing the brush size and opacity to depict the details of objects finely. During painting, an edges-first-then-interior strategy is used: each image layer is painted starting from the edges, with brushes first aligned along the edges of the line drawing and then aligned according to the flow field. In video rendering, to guarantee the temporal stability of the brushes, the invention propagates strokes with thin-plate spline interpolation. In addition, during propagation, stroke deletion and addition mechanisms are designed based on the areas of the stroke regions, and a simulated damped spring system reduces the "jitter" of the rendering result.
4) Key-frame non-photorealistic rendering technique based on semantic parsing
How to design stroke models for different artistic styles is one of the focal concerns of video stylization; works of different artistic expression differ from one another in their strokes. The basic painting strategy of the invention is to select suitable strokes based on the image content; the stroke library is built from a large number of typical strokes drawn on paper by professional artists, which are then scanned and parameterized. A brush B_n to be painted contains the following information: the class label l_n of the brush, the placement region Λ_n, the color mapping C_n, the transparency α_n, the height field H_n, and the control points {P_ni}; that is:

B_n = {l_n, Λ_n, C_n, α_n, H_n, {P_ni}}
When designing the stroke model, the invention considers not only low-level information such as stroke shape and texture but also high-level semantic information, so that during rendering every interpreted region of the image/video has strokes to rely on. This is a key difference between the invention's rendering algorithm and previous stroke-based rendering algorithms. When choosing a stroke, the class of the interpreted region serves as a keyword, so a batch of strokes of the same class can be selected from the stroke library simply and quickly; one stroke is then chosen from them at random.
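The brush record and the keyword lookup can be sketched as follows. The field types, the bounding-box representation of Λ_n, and the class-label strings are assumptions made for illustration — the patent only names the attributes of B_n.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Brush:
    """Sketch of the brush record B_n = {l_n, Lambda_n, C_n, alpha_n, H_n, {P_ni}}."""
    label: str                 # class/material label l_n
    region: tuple              # placement region extent Lambda_n (here: a bounding box)
    color_map: str             # color mapping C_n (placeholder)
    alpha: float               # transparency alpha_n
    height_field: list = field(default_factory=list)    # H_n
    control_points: list = field(default_factory=list)  # {P_ni}

def pick_stroke(library, region_class, rng=random):
    """Select all strokes whose label matches the interpreted region's class,
    then choose one of them at random, as the text describes."""
    candidates = [b for b in library if b.label == region_class]
    return rng.choice(candidates) if candidates else None

library = [Brush("sky/cloud", (0, 0, 8, 8), "cool", 0.6),
           Brush("water", (0, 0, 4, 4), "blue", 0.8),
           Brush("sky/cloud", (0, 0, 16, 16), "warm", 0.5)]
chosen = pick_stroke(library, "sky/cloud", rng=random.Random(1))
print(chosen.label)
```

Keying the library by region class is what makes the "pen to rely on" lookup a simple filter rather than a search over the whole stroke library.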
To simulate the "alignment" principle of oil painting, the invention borrows primal sketch theory: within each region R_i, it computes a primal sketch representation SK_i. The sketch consists of a group of salient primitives marking surface features, such as the spots, lines, and folds on clothes. During rendering, different brushes are laid over these primitives to produce the desired artistic effect. An interpreted region R_i with image domain Λ_i is divided into a line-drawing part Λ_i^sk, which describes the line drawing, and a non-line-drawing part Λ_i^nsk, which describes areas of homogeneous structure. The direction field Θ_i of R_i is defined as:

Θ_i = { θ(x, y) | θ(x, y) ∈ [0, π), ∀(x, y) ∈ Λ_i }

The initial values of the direction field are the gradient directions of the line drawing Λ_i^sk; the directions are then propagated to the non-line-drawing region Λ_i^nsk with a diffusion equation.
Rendering a key frame is a continuous process of choosing and placing strokes. Taking an interpreted region R_i as an example, the invention first renders its non-line-drawing part and then its line-drawing part, to ensure that where rendered regions overlap, the strokes of the line-drawing part lie on the upper layer. In the non-line-drawing part, an unrendered pixel region is chosen; taking the center of that region as the starting point, it is spread along the direction field to both sides, generating a flow-pattern region. With the axis of that region as the reference line, the chosen brush is transformed into the flow-pattern region so that the stroke axis aligns with the region axis. The rendering of the line-drawing part of a region is similar.
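The direction-field diffusion step can be sketched with a simple explicit iteration; this stands in for the diffusion equation in the text and is an assumption about its discretization. Angles in [0, π) are averaged via doubled-angle vectors so that 0 and π count as the same direction.

```python
import numpy as np

def diffuse_orientation(theta, known, n_iter=200):
    """Propagate orientations from 'known' (line-drawing) pixels into the rest
    of the region by iterating a discrete 4-neighbor diffusion step."""
    vx, vy = np.cos(2 * theta), np.sin(2 * theta)
    vx[~known] = 0.0
    vy[~known] = 0.0
    for _ in range(n_iter):
        for v in (vx, vy):
            avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                   np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
            v[~known] = avg[~known]        # line-drawing pixels stay fixed
    return np.mod(np.arctan2(vy, vx) / 2.0, np.pi)

# Line-drawing pixels on the left column all have orientation pi/4;
# diffusion fills the rest of the region with the same direction.
H, W = 6, 6
theta = np.zeros((H, W))
known = np.zeros((H, W), dtype=bool)
theta[:, 0] = np.pi / 4
known[:, 0] = True
out = diffuse_orientation(theta, known)
print(np.allclose(out, np.pi / 4))
```

A flow-pattern region for a stroke can then be grown from a seed pixel by following this field to both sides, with the stroke axis aligned to the region axis.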
5) Brush-stroke propagation algorithm for sequence frames
In the invention, the rendering of non-key frames is obtained by "propagating" the rendering results of the key frames; the basis of propagation is the spatio-temporal correspondence of the interpreted regions. During propagation, as the interpreted regions change more and more, strokes may gradually leak outside their regions, and unrendered gaps may appear in the time domain. Therefore, the propagation of the stroke map must also consider stroke addition and deletion mechanisms; otherwise, the rendering result exhibits jitter. The stroke propagation, deletion, and addition mechanisms are described below.
(d) Stroke propagation: let R_i(t) denote an interpreted region of the key frame at time t of the video, and R_i(t+1) its corresponding region at time t+1, with image regions Λ_i(t) and Λ_i(t+1) respectively. Let P_ij(t), P_ij(t+1) denote the dense matching points between Λ_i(t) and Λ_i(t+1) in the time domain (computed during video interpretation). Assume R_i(t+1) can be obtained by a non-rigid transformation of R_i(t). When propagating strokes, the invention wants each match point P_ij(t) on Λ_i(t) to be mapped to the match point P_ij(t+1) of the new image region Λ_i(t+1) in frame t+1. Based on these considerations, the invention chooses the thin-plate spline interpolation model (Thin-plate Spline, TPS): it maps the key points P_ij(t) of Λ_i(t) to the match points P_ij(t+1) of Λ_i(t+1), and for the remaining non-key pixels of Λ_i(t), TPS minimizes an energy function so that the pixel grid of Λ_i(t) undergoes an elastic (non-rigid) deformation.
(e) Stroke deletion: after strokes have been propagated through the video, occlusions or too many propagation frames can make the region corresponding to a stroke smaller and smaller; the invention therefore removes strokes whose corresponding region area falls below a given threshold. Likewise, a propagated stroke is deleted when it falls outside the boundary of its corresponding region.
(f) Stroke addition: when a new semantic region appears, or an existing semantic region keeps growing (such as expanding clothes), new strokes must be added to cover the newly appearing areas. To fill gaps between strokes, the invention first simply adjusts the size and position of neighboring strokes. If a region not covered by strokes grows beyond a given threshold, the system automatically creates new strokes to cover it. Nevertheless, a gap is not painted immediately when it first appears: a comparatively high threshold is set, and the rendering of newly appearing regions is postponed until they grow sufficiently large. The invention then fills the gaps that exceed the threshold with the general stroke-placement algorithm, and finally propagates and transforms these new strokes backwards to fill the gap areas that appeared earlier but were not rendered. This filling process avoids frequently transforming strokes backwards, and can merge small fragmentary strokes into larger ones, reducing flicker and other artificially caused visual artifacts. Similarly, since the invention adds new strokes at the bottom layer, they are painted beneath the existing strokes, which further reduces visual flicker.
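The TPS warp in (d) can be sketched directly from its standard formulation; this is a minimal illustration under simplifying assumptions (no regularization, a pure-translation toy motion), not the patented implementation.

```python
import numpy as np

def tps_warp(src, dst, query):
    """Fit a 2-D thin-plate spline mapping the matched key points src -> dst,
    then warp arbitrary stroke points 'query' with it."""
    def U(r):                                  # TPS kernel U(r) = r^2 log r
        return np.where(r > 0, r**2 * np.log(r + 1e-12), 0.0)
    n = len(src)
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = U(d)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    coef = np.linalg.solve(A, b)               # kernel weights + affine part
    dq = np.linalg.norm(query[:, None, :] - src[None, :, :], axis=-1)
    return U(dq) @ coef[:n] + np.hstack([np.ones((len(query), 1)), query]) @ coef[n:]

# Match points of a pure translation: every stroke point shifts by (+1, +2).
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src + np.array([1.0, 2.0])
strokes = np.array([[0.5, 0.5], [0.2, 0.8]])
res = tps_warp(src, dst, strokes)
print(res)
```

Because the TPS bending energy vanishes for affine motion, a pure translation is reproduced exactly; for genuinely non-rigid region changes the same fit bends the stroke grid elastically, as the text describes.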
6) Damping brush system for stabilization
The final step of stylized video rendering is stabilization. The invention connects brushes adjacent in the time and space domains with springs to simulate a damping system; by minimizing the energy of this system, the jitter-removal effect is achieved.
For the i-th brush at time t, the invention uses A_{i,t} = (x_{i,t}, y_{i,t}, s_{i,t}) to represent its geometric attributes of center coordinates and size, with initial value denoted A⁰_{i,t}. The energy function of the damping brush system is defined as:

E = E_data + λ_1 E_smooth1 + λ_2 E_smooth2

where λ_1 and λ_2 are weights, set in the experiments to λ_1 = 2.8 and λ_2 = 1.1. The first term constrains brush positions not to deviate too far from their initial positions:

E_data = Σ_{i,t} (A_{i,t} − A⁰_{i,t})²
The second term is a smoothness constraint on brush i in the time domain:

E_smooth1 = Σ_{i,t} (A_{i,t+1} − 2A_{i,t} + A_{i,t−1})²
The third term imposes a smoothness constraint on brushes adjacent in both the time and space domains. Let N(i, t) denote the set of brushes adjacent to the i-th brush at time t; for any adjacent brush j ∈ N(i, t), the relative distance and size differences between them are denoted ΔA_{i,j,t} = A_{i,t} − A_{j,t}, and the smoothness term is defined as:

E_smooth2 = Σ_{i,j,t} (ΔA_{i,j,t+1} − 2ΔA_{i,j,t} + ΔA_{i,j,t−1})²
The energy minimization problem is solved with the Levenberg-Marquardt algorithm.
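The stabilization energy can be illustrated with a 1-D toy: one brush attribute tracked over time, with only E_data and the temporal term E_smooth1 (the spatial-neighbor term E_smooth2 is omitted for brevity). Since this reduced energy is quadratic, a linear least-squares solve stands in here for the Levenberg-Marquardt solver the patent names; all names are illustrative.

```python
import numpy as np

def stabilize(traj0, lam1=2.8):
    """Minimize E_data + lam1 * E_smooth1 for one brush attribute over time by
    stacking the residuals [x - x0 ; sqrt(lam1) * D2 x] and solving in the
    least-squares sense."""
    T = len(traj0)
    D2 = np.zeros((T - 2, T))                  # second-difference operator
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    Amat = np.vstack([np.eye(T), np.sqrt(lam1) * D2])
    b = np.concatenate([traj0, np.zeros(T - 2)])
    return np.linalg.lstsq(Amat, b, rcond=None)[0]

# A jittery trajectory is pulled toward a smooth one near its mean,
# while E_data keeps it from drifting away from the initial positions.
jittery = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
smooth = stabilize(jittery)
print(np.abs(np.diff(smooth, 2)).max() < np.abs(np.diff(jittery, 2)).max())
```

The full system couples all brushes through the spatial term and so is solved jointly; the same stacked-residual form is what a Levenberg-Marquardt solver would iterate on.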

Claims (9)

1. An interactive video stylized rendering method based on video interpretation, characterized in that it comprises an interactive video semantic segmentation module and a video stylization module.
The segmentation method of the interactive video semantic segmentation module comprises the following steps:
1) interactive segmentation and automatic identification of key frame images;
2) dense feature point matching between key frames;
3) region-competition segmentation;
The stylizing method of the video stylization module comprises the following steps:
1) key-frame non-photorealistic rendering based on semantic parsing;
2) brush-stroke propagation for sequence frames;
3) processing with a damping brush system for stabilization.
Stylizing a video uses the interactive video semantic segmentation module and the video stylization module in sequence: semantic segmentation is first performed on the video with the interactive video semantic segmentation module, and stylized rendering is then performed on the segmented video with the video stylization module.
2. The interactive video stylized rendering method based on video interpretation according to claim 1, characterized in that the interactive segmentation and automatic identification of key frame images in the above step is as follows:
the segmented semantic regions are divided into 12 classes according to their material properties: sky/cloud, mountain/land, rock/building, leaf/grove, hair/fur, flower/fruit, skin/hide, trunk/branch, abstract background, wood/plastic, water, and clothes;
in practice, the three principal features texture, color distribution, and position information are used for training and recognition; given a region image X, the conditional probability of its class c is defined as:

log P(c | X, θ) = Σ_i [ Ψ_i(c_i, X; θ_Ψ) + π(c_i, X; θ_π) + λ(c_i, X; θ_λ) ] − log Z(θ, X)   (formula 1)

the terms on the right are the texture potential, the color potential, the position potential, and the normalization term, respectively;
the texture potential is defined as Ψ_i(c_i, X; θ_Ψ) = log P(c_i | X, i), where P(c_i | X, i) is the normalized distribution given by the Boost classifier;
the color potential is defined as π(c_i, X; θ_π) = log Σ_k θ_π(c_i, k) P(k | x_i); the color model is represented with Gaussian Mixture Models (GMMs) in the CIELab color space, and for a pixel color x_i in the given image the conditional probability is P(x_i | k) = N(x_i; μ_k, Σ_k), where μ_k and Σ_k are the mean and covariance of the k-th color cluster;
the position potential is defined as λ(c_i, X; θ_λ) = log θ_λ(c_i, i); compared with the two potentials above, the position potential is relatively weak, and under this definition the class label of an image pixel depends only on its absolute position in the image;
training uses the 12 material classes; the probability of each class is computed with formula 1 for each pixel in an image region, all pixels of the region are then tallied, and the class of each region is decided by voting; during stylized rendering, the selection of brushes is determined by the material identified for each object region, laying the foundation for automatic rendering.
3. The interactive video stylized rendering method based on video interpretation according to claim 1, characterized in that the dense feature point matching between key frames in step 2) is as follows:
after the semantic information on the key frames is obtained, a mixed image-template feature combining line drawing, texture, and color provides a rich feature set and representation for the image matching problem;
11) the line-drawing feature is represented with Gabor bases: F^sk(I_i) = ||⟨I_i, G_cos,x,θ⟩||² + ||⟨I_i, G_sin,x,θ⟩||², where G_sin,x,θ and G_cos,x,θ are the sine and cosine Gabor bases at position x with orientation θ; its feature probability distribution is expressed as:

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^sk)) exp{ λ_i^sk h^sk[F^sk(I_i)] }

where λ_i^sk represents the parameter θ_i, h^sk is a sigmoid function, and Z(λ_i^sk) is the normalization constraint; the model thus encourages edges with responses stronger than the background distribution;
12) the texture feature is modeled with a simplified gradient orientation histogram (HOG), whose 6 feature dimensions represent different gradient directions; F_j^txt denotes the j-th direction of the HOG, F^txt(I_i) denotes the corresponding descriptor of the i-th feature, and h_j^txt is built from the average of F^txt(I_i) over all positive samples; the probability model of the feature is expressed as:

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^txt)) exp{ λ_i^txt Σ_j h_j^txt[F^txt(I_i)] }

where λ_i^txt is the parameter θ_i; the model evidently encourages sets of feature image blocks with concentrated responses;
13) the color feature uses simple pixel brightness as its descriptor, with F_xj^fl(I_i) a filter at position x; pixel brightness values are quantized into statistical bins, and the model then reduces to:

P(I_i | B_i; θ_i) / q(I_i) = (1 / Z(λ_i^fl)) exp{ Σ_j λ_i^fl h_xj^fl[F_xj^fl(I_i)] }

by combining these small image features, local feature combinations with strong discriminative power are obtained: the image is first over-segmented to obtain a number of small image blocks, the statistical features describing line drawing, texture, and color are extracted from them, and an iterative region-growing and model-learning algorithm continually updates the feature model and iteratively grows the feature combination regions, finally yielding local feature combinations with strong discriminative power;
on the basis of this representation, the matching of a moving target in the time and space domains is modeled as a hierarchical graph matching problem: the extracted mixed image-template features serve as graph nodes, a graph structure is built between frames, and the edge relations between graph nodes can be defined based on the similarity between features, their spatial positions, and the object classes the features belong to;
let I_s and I_t denote the source and target images, and U, V the sets of mixed template features in I_s and I_t respectively; each feature point u ∈ U carries two labels: a level label l(u) ∈ {1, 2, ..., K} and a matching candidate label; the vertex set of the graph is built from the candidate set C of high-matching-degree candidates for each feature point in the source image, and the edge set is built as E = E⁺ ∪ E⁻; negative edges connect mutually exclusive candidates, for which a "repulsive force" is defined; positive edges connect spatially adjacent, non-exclusive candidate feature points, with w(v_i, v_j) denoting how tightly they cooperate and d(v_i, v_j) the spatial distance between v_i and v_j;
the graph structures G_S and G_T of the source and target images are partitioned into K+1 layers, where K is the number of objects in the source image; taking G_S as an example, the partition is expressed as ∏ = {g_0, g_1, ..., g_K}, where g_k is a subgraph of G_S whose vertex set is denoted U_k; similarly, the corresponding vertex set of G_T is denoted V_k; the matching relation between G_S and G_T is then expressed as Ψ = {Φ_k}; assuming the matchings between subgraphs are mutually independent, a similarity measure between each matched subgraph pair (g_k, g_k'), denoted Φ_k, is defined with geometric transformation and appearance measures; in summary, the solution of the graph matching problem is configured as:

W = (K, ∏ = {g_0, g_1, ..., g_K}, Ψ = {Φ_k})

under the Bayesian framework, the graph matching problem is described as maximizing a posterior probability:

W* = arg max P(W | G_S, G_T) = arg max P(W) P(G_S, G_T | W)

the above formula is solved with the Markov chain Monte Carlo (MCMC) method; for efficient computation, the solution space is traversed with efficient jumps, converging rapidly to the globally optimal solution and achieving inter-frame feature point matching.
4. The interactive video stylized rendering method based on video interpretation according to claim 1, characterized in that the region-competition segmentation method of the above step 3) is as follows:

On the basis of a stable inter-frame matching, the method exploits the advantage of the region-competition mechanism in video segmentation. Using the layered-graph-structure image matching algorithm, the matching relation between the features of the previous frame and the current frame is determined, so the semantic information of the previous frame is propagated into the current frame; the region-competition segmentation algorithm then partitions the current frame into multiple semantic regions according to the feature information of each matched region.

Given an image I, the corresponding segmentation solution is defined as:

W = {(R1, R2, …, RN), (θ1, θ2, …, θN), (I1, I2, …, IN)}

where Ri denotes a segmented region of homogeneous features [set relation given as a formula image in the original], θi denotes the parameters of the feature probability distribution model of region Ri, and Ii denotes the label of region Ri.

From the matching relation of features between the two frames, the number of regions N can be determined. Let S = {S1, S2, …, SN} be the set of small feature regions corresponding to each region. For each region Ri, the initial model parameters θi are estimated from the small region Si occupied by its features, giving the initial posterior probability P(θi | I(x, y)). By the MDL (minimum description length) principle, maximizing the posterior is converted into minimizing an energy function [formula images in original], where Γi denotes the boundary contour of region Ri. The invention estimates the parameters {θi} and Γ in stages by iteration, alternating between the two stages and reducing the energy function in each, so that the final segmentation of the whole image is progressively inferred and learned.

During region competition, each region continually updates its feature probability distribution model while contending for pixel ownership according to the steepest-descent principle and updating its own boundary contour, so that each region keeps expanding, finally giving the segmentation result of the current frame.

The specific iteration is as follows. In the first stage, Γ is fixed; {θi} is estimated from the current segmentation state, and the maximum-likelihood estimate of θi under the current state is taken as its optimal solution, minimizing the cost of describing each region, so the energy function reduces to [formula image in original].

In the second stage, {θi} is known and steepest descent is applied to Γ; to find the minimum of the energy quickly, the invention solves the steepest-descent equation of motion for the boundary Γ of all regions. For any point on the boundary contour Γ [equation of motion given as formula images in the original], the region to which the point belongs depends on how well the point fits the description of each region's feature probability distribution model.

To determine the membership between each pixel and the regions, the competition-based image segmentation algorithm proceeds as follows:

In the initialization phase, the initial parameters of each class model are estimated from the matched feature image blocks; the boundary points of all feature image blocks are added to a pending queue, and the posterior probability of every boundary point belonging to each class is computed.

In the loop-iteration phase, the boundary point i with the steepest current energy descent is selected from the pending queue, and all boundaries containing point i are updated; then, under the current segmentation state, the model parameters of each region are re-estimated by maximum likelihood, and with the newly obtained feature distribution models the posterior probabilities of all boundary points are recomputed.

In this way, the boundary point of steepest energy descent is repeatedly selected from the pending queue to update the corresponding boundary, while the feature distribution model of each region is updated in due time according to the current segmentation state; the regions restrain one another and simultaneously compete for the ownership of image regions until the energy function converges, thereby partitioning the image into multiple regions.
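The two-stage iteration above can be sketched on a one-dimensional toy signal: stage one re-estimates each region's Gaussian parameters by maximum likelihood, stage two moves the shared boundary one pixel in the direction of steepest descent of the description-cost energy. The signal and the Gaussian region model are illustrative assumptions:

```python
import math

# Two regions compete for pixels of a 1-D signal across a single boundary.
signal = [0.1, 0.0, 0.2, 0.1, 0.9, 1.0, 1.1, 0.8, 1.0]

def params(seg):
    # Stage 1: maximum-likelihood (mean, variance) of a region, variance clamped.
    m = sum(seg) / len(seg)
    v = max(sum((x - m) ** 2 for x in seg) / len(seg), 1e-4)
    return m, v

def cost(seg):
    # Negative log-likelihood coding cost of describing the region with its model.
    m, v = params(seg)
    return sum(0.5 * math.log(2 * math.pi * v) + (x - m) ** 2 / (2 * v)
               for x in seg)

def region_competition(sig, b=2):        # b = boundary index, deliberately wrong
    while True:
        e = cost(sig[:b]) + cost(sig[b:])
        moves = [c for c in (b - 1, b + 1) if 1 <= c <= len(sig) - 1]
        c = min(moves, key=lambda c: cost(sig[:c]) + cost(sig[c:]))
        if cost(sig[:c]) + cost(sig[c:]) >= e:
            return b                     # stage 2 converged: energy no longer falls
        b = c                            # steepest-descent boundary update

print(region_competition(signal))        # boundary settles between the two levels
```

The real algorithm maintains a priority queue of boundary points over many regions; the single moving boundary here is the smallest instance of the same alternation.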
5. The interactive video stylized rendering method based on video interpretation according to claim 1, characterized in that in step 4) of the stylization method of the above video stylization module (2), video stylization is based on the interactive video semantic segmentation module, and the choice of paintbrush is determined solely by the material corresponding to the identified object region.

The above paintbrushes are all built from a large number of typical strokes drawn on paper by professional artists, then scanned and parameterized, and finally assembled into a stroke library. Each image region is painted by first priming with a large brush and then gradually reducing the brush size and opacity to depict the details of the object. During painting, an edges-first, interior-second strategy is used: each image layer is drawn starting from the edges, with the brush first aligned along the edges of the sketch according to the flow field.

In video rendering, to guarantee the temporal stability of the paintbrushes, strokes are propagated using thin-plate spline interpolation. In addition, during propagation, stroke deletion and addition mechanisms are devised based on the area of each stroke's region, and a damped spring system is simulated to reduce the "jitter" of the rendering result.
6. The interactive video stylized rendering method based on video interpretation according to claim 1, characterized in that the key-frame non-photorealistic rendering method based on semantic parsing in step 4) of the stylization method of the above video stylization module (2) is as follows:

How to design stroke models for different artistic styles is one of the focal concerns of video stylization: works of different artistic expression differ from one another in their strokes. The basic painting strategy in video stylization is to select suitable strokes based on image content. The stroke library is built from a large number of typical strokes drawn on paper by professional artists, which are then scanned and parameterized. A paintbrush to be drawn contains the following information: the brush category, the placement-area extent, the color mapping, the opacity, the height field, and the control points; the resulting brush tuple is given as a formula image in the original.

When designing the stroke model, low-level information such as stroke shape and texture is considered together with high-level semantic information, so that during rendering every interpreted region of the image or video has a "stroke" to rely on. When choosing a stroke, the category of the interpreted region serves as the keyword: a batch of strokes of the same category is quickly selected from the stroke library, and one of them is then chosen at random.
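The category-keyword stroke selection just described can be sketched as a filter over a small library followed by a random pick; the library entries and field names below are invented for illustration:

```python
import random

# Hypothetical stroke library: scanned strokes indexed by the semantic
# category (interpreted-region class) they suit; ids are invented.
stroke_library = [
    {"id": "oil_flat_01",  "category": "sky"},
    {"id": "oil_flat_02",  "category": "sky"},
    {"id": "dry_brush_01", "category": "cloth"},
    {"id": "dry_brush_02", "category": "cloth"},
    {"id": "round_tip_01", "category": "skin"},
]

def pick_stroke(region_category, rng=random.Random(0)):
    # Keyword query: gather every stroke of the matching category, then
    # choose one of the batch uniformly at random, as the claim describes.
    batch = [s for s in stroke_library if s["category"] == region_category]
    if not batch:
        raise KeyError(f"no strokes for category {region_category!r}")
    return rng.choice(batch)

print(pick_stroke("cloth")["category"])   # always a cloth-category stroke
```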
" alignment " principle in being drawn for simulation oil painting, uses for reference original simple model theory, in each region
Figure RE-563762DEST_PATH_IMAGE056
It is interior, calculate its original brief figure
Figure RE-617168DEST_PATH_IMAGE087
Expression, brief figure is made up of the conspicuousness primitive of a group echo body surface feature, such as the spot on clothes, lines, gauffer;In render process, different paintbrush will be covered on these primitives to produce desired artistic effect;Interpret region
Figure RE-DEST_PATH_IMAGE088
It is divided into the line drawing part for describing line drawing
Figure RE-394631DEST_PATH_IMAGE089
And for describing the non-line drawing part with identical structural region
Figure RE-DEST_PATH_IMAGE090
Figure RE-875291DEST_PATH_IMAGE056
The field of direction
Figure RE-354683DEST_PATH_IMAGE091
It is defined as:
Figure RE-211781DEST_PATH_IMAGE093
The wherein field of direction
Figure RE-906067DEST_PATH_IMAGE091
Initial value is line drawing
Figure RE-557628DEST_PATH_IMAGE089
Gradient direction, direction is then traveled to non-line drawing region using diffusion equation
Figure RE-540628DEST_PATH_IMAGE090
Render process to key frame is the continuous process chosen style of writing and put style of writing;To interpret region
Figure RE-935837DEST_PATH_IMAGE056
Exemplified by, its non-line drawing part is rendered first
Figure RE-484630DEST_PATH_IMAGE090
, then render line drawing part
Figure RE-272806DEST_PATH_IMAGE089
;This is in order to ensure when the region rendered overlaps, the style of writing of line drawing part can be in upper strata;In non-line drawing part, optional one pixel region that is not rendered, using the center in the region as originating point, spreads along the field of direction to both sides, generates a flow pattern regions;The line on the basis of the axis in the region, the paintbrush chosen is transformed in the flow pattern regions, style of writing axis is alignd with region axis;It is similar to rendering for region line drawing part.
7. The interactive video stylized rendering method based on video interpretation according to claim 1, characterized in that the stroke propagation method for sequence frames in step 5) of the stylization method of the above video stylization module (2) is as follows:

The rendering of non-key frames is obtained by "propagating" the rendering result of the key frames, the basis of propagation being the spatio-temporal correspondence of the interpreted regions. During propagation, as the interpreted region changes more and more, strokes may gradually leak outside the region, and unrendered gaps may appear in the time domain; therefore, when propagating the stroke map, stroke addition and deletion mechanisms must be considered at the same time, otherwise jitter appears in the rendering result. The stroke propagation, addition, and deletion mechanisms are as follows [the notation below stands in for formula images in the original]:

Stroke propagation: let Rt denote some interpreted region of the key frame of the video at time t, and Rt+1 its corresponding region at time t+1; their image regions are denoted Ωt and Ωt+1 respectively, and {pi} and {qi} denote the dense matching points between Ωt and Ωt+1 in the time domain (computed during video interpretation). Assuming Rt+1 can be obtained from Rt by a non-rigid transformation, the invention requires that, when strokes are propagated, the match points pi on Ωt be mapped to the match points qi in the new image region Ωt+1 of frame t+1. Based on the above considerations, the invention adopts the thin-plate spline interpolation model (Thin-Plate Spline, TPS), which maps the key points pi of Ωt to the match points qi and, for the remaining non-key pixels in Ωt, makes the pixel grid of Ωt undergo an elastic (non-rigid) deformation by minimizing an energy function.

Stroke deletion: after strokes have been propagated through the video, when occlusion occurs or a stroke has been propagated over too many frames, the region corresponding to some strokes becomes smaller and smaller; the invention therefore rejects these strokes once their corresponding region area falls below a given threshold, and likewise deletes a propagated stroke when it falls outside the boundary of its corresponding region.

Stroke addition: when a new semantic region appears, or an existing semantic region grows larger and larger (for example, clothing spreading out), the invention must add new strokes to cover the emerging areas. To fill small gaps between strokes, the invention simply adjusts the size and position of neighboring strokes; if a region not covered by strokes keeps growing and exceeds a given threshold, the system automatically creates new strokes to cover it. Nevertheless, the invention does not paint a gap the moment it first appears: a comparatively high threshold is set, and rendering of emerging regions is postponed until they have grown sufficiently large. Gaps that reach the threshold are then filled by a generic stroke-placement algorithm, and these new strokes are finally propagated and transformed backwards to fill gap areas that appeared earlier but were left unrendered. This filling process avoids frequent backward transformation of strokes, and at the same time re-links scattered small strokes into larger ones, reducing flicker and other artificially caused undesirable visual effects. Likewise, because the invention adds new strokes at the bottom layer, they are drawn beneath the existing strokes, which further reduces visual flicker.
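A minimal thin-plate-spline fit of the kind used for stroke propagation can be sketched as follows; the four match points and the pure translation between frames are toy assumptions, not data from the patent:

```python
import numpy as np

def tps_fit(p, q):
    # Fit a 2-D TPS mapping with f(p_i) == q_i exactly; non-key pixels
    # deform elastically, as in the propagation step described above.
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = len(p)
    def U(r):                      # TPS radial basis r^2 log r^2
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r == 0, 0.0, r * r * np.log(r * r))
    K = U(np.linalg.norm(p[:, None] - p[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), p])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([q, np.zeros((3, 2))])
    coef = np.linalg.solve(A, b)   # bending weights + affine part
    w, a = coef[:n], coef[n:]
    def f(x):
        x = np.asarray(x, float)
        r = np.linalg.norm(x[:, None] - p[None, :], axis=2)
        return a[0] + x @ a[1:] + U(r) @ w
    return f

# Four match points translated by (1, 2): the TPS reduces to that translation.
p = [[0, 0], [1, 0], [0, 1], [1, 1]]
q = [[1, 2], [2, 2], [1, 3], [2, 3]]
warp = tps_fit(p, q)
print(warp(np.array([[0.5, 0.5]])).round(3))   # → [[1.5 2.5]]
```

For a genuinely non-rigid frame-to-frame motion the bending weights become nonzero and the stroke's pixel grid is warped smoothly between the match points.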
8. The interactive video stylized rendering method based on video interpretation according to claim 1, characterized in that the damped brush system for de-jittering in step 6) of the stylization method of the above video stylization module (2) is as follows:

The final step of stylized rendering of a video is a stabilization operation: paintbrushes adjacent in the time domain and the spatial domain are connected with springs to simulate a damping system, and by minimizing the energy of this system the de-jittering effect is achieved.

For the i-th paintbrush at time t, the invention represents its geometric attributes of center coordinates and size by a state whose initial value is recorded [the original gives these symbols as images]. The energy function of the damped brush system is defined with two weights [formula images in original]. The first term of the formula constrains the paintbrush position not to deviate too far from its initial position [formula image in original]; the second term is a smoothness constraint on paintbrush i in the time domain [formula image in original]; the third term applies smoothness constraints to adjacent paintbrushes in both the time domain and the spatial domain. Denoting the set of paintbrushes adjacent to the i-th paintbrush at time t, for any pair of adjacent paintbrushes the relative distance difference and size difference between them are recorded, and the smoothness term is defined accordingly [formula image in original].

The energy minimization problem is solved by the Levenberg–Marquardt algorithm.
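The damped-brush energy minimization can be sketched with a small hand-rolled Levenberg–Marquardt loop on a one-dimensional stand-in for a brush track; the weight, the jittery track, and the residual design are illustrative assumptions, not the patent's energy terms:

```python
import numpy as np

# One brush coordinate per frame: a data term ties it to its noisy initial
# value (the spring to the initial position) and a smoothness term couples
# neighbouring frames (the temporal spring). lam1 weights smoothness.
rng = np.random.default_rng(0)
x_hat = np.linspace(0.0, 1.0, 9) + rng.normal(0, 0.1, 9)   # jittery track

def residuals(x, lam1=3.0):
    return np.concatenate([x - x_hat, np.sqrt(lam1) * np.diff(x)])

def jacobian(x, lam1=3.0):
    n = len(x)
    return np.vstack([np.eye(n),
                      np.sqrt(lam1) * (np.eye(n, k=1) - np.eye(n))[:-1]])

def levenberg_marquardt(x0, mu=1e-3, iters=50):
    x = x0.copy()
    for _ in range(iters):
        r, J = residuals(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -J.T @ r)
        if np.sum(residuals(x + step) ** 2) < np.sum(r ** 2):
            x, mu = x + step, mu / 3      # accept: trust the model more
        else:
            mu *= 3                       # reject: damp more heavily
    return x

x = levenberg_marquardt(x_hat)
print(np.sum(residuals(x) ** 2) < np.sum(residuals(x_hat) ** 2))   # True
```

The patent's full system adds spatial-neighbour springs and size attributes, giving a larger but equally sparse least-squares problem of the same shape.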
9. The interactive video stylized rendering method based on video interpretation according to claim 8, characterized in that the above-mentioned weights are given by [formula image in original].
CN201110302054XA 2011-09-30 2011-09-30 Interactive video stylized rendering method based on video interpretation Pending CN102542593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110302054XA CN102542593A (en) 2011-09-30 2011-09-30 Interactive video stylized rendering method based on video interpretation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110302054XA CN102542593A (en) 2011-09-30 2011-09-30 Interactive video stylized rendering method based on video interpretation

Publications (1)

Publication Number Publication Date
CN102542593A true CN102542593A (en) 2012-07-04

Family

ID=46349405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110302054XA Pending CN102542593A (en) 2011-09-30 2011-09-30 Interactive video stylized rendering method based on video interpretation

Country Status (1)

Country Link
CN (1) CN102542593A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807198A (en) * 2010-01-08 2010-08-18 中国科学院软件研究所 Video abstraction generating method based on sketch
CN101853517A (en) * 2010-05-26 2010-10-06 西安交通大学 Real image oil painting automatic generation method based on stroke limit and texture
CN101930614A (en) * 2010-08-10 2010-12-29 西安交通大学 Drawing rendering method based on video sub-layer


Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063876A (en) * 2014-01-10 2014-09-24 北京理工大学 Interactive image segmentation method
CN104063876B (en) * 2014-01-10 2017-02-01 北京理工大学 Interactive image segmentation method
CN103927372A (en) * 2014-04-24 2014-07-16 厦门美图之家科技有限公司 Image processing method based on user semanteme
CN104346789A (en) * 2014-08-19 2015-02-11 浙江工业大学 Fast artistic style study method supporting diverse images
CN104346789B (en) * 2014-08-19 2017-02-22 浙江工业大学 Fast artistic style study method supporting diverse images
CN106296567A (en) * 2015-05-25 2017-01-04 北京大学 The conversion method of a kind of multi-level image style based on rarefaction representation and device
CN106296567B (en) * 2015-05-25 2019-05-07 北京大学 A kind of conversion method and device of the multi-level image style based on rarefaction representation
CN104867183A (en) * 2015-06-11 2015-08-26 华中科技大学 Three-dimensional point cloud reconstruction method based on region growing
CN105719327B (en) * 2016-02-29 2018-09-07 北京中邮云天科技有限公司 A kind of artistic style image processing method
CN105719327A (en) * 2016-02-29 2016-06-29 北京中邮云天科技有限公司 Art stylization image processing method
CN105825531A (en) * 2016-03-17 2016-08-03 广州多益网络股份有限公司 Method and device for dyeing game object
CN105825531B (en) * 2016-03-17 2018-08-21 广州多益网络股份有限公司 A kind of colouring method and device of game object
CN106485223B (en) * 2016-10-12 2019-07-12 南京大学 The automatic identifying method of rock particles in a kind of sandstone microsection
CN106485223A (en) * 2016-10-12 2017-03-08 南京大学 The automatic identifying method of rock particles in a kind of sandstone microsection
CN107277615A (en) * 2017-06-30 2017-10-20 北京奇虎科技有限公司 Live stylized processing method, device, computing device and storage medium
CN107277615B (en) * 2017-06-30 2020-06-23 北京奇虎科技有限公司 Live broadcast stylization processing method and device, computing device and storage medium
CN110738715B (en) * 2018-07-19 2021-07-09 北京大学 Automatic migration method of dynamic text special effect based on sample
CN110738715A (en) * 2018-07-19 2020-01-31 北京大学 automatic migration method of dynamic text special effect based on sample
CN109816663A (en) * 2018-10-15 2019-05-28 华为技术有限公司 A kind of image processing method, device and equipment
CN109741413A (en) * 2018-12-29 2019-05-10 北京金山安全软件有限公司 Rendering method and device for semitransparent objects in scene and electronic equipment
CN109741413B (en) * 2018-12-29 2023-09-19 超级魔方(北京)科技有限公司 Rendering method and device of semitransparent objects in scene and electronic equipment
CN111722896A (en) * 2019-03-21 2020-09-29 华为技术有限公司 Animation playing method, device, terminal and computer readable storage medium
CN111722896B (en) * 2019-03-21 2021-09-21 华为技术有限公司 Animation playing method, device, terminal and computer readable storage medium
CN110288625A (en) * 2019-07-04 2019-09-27 北京字节跳动网络技术有限公司 Method and apparatus for handling image
CN110446066B (en) * 2019-08-28 2021-11-19 北京百度网讯科技有限公司 Method and apparatus for generating video
CN110446066A (en) * 2019-08-28 2019-11-12 北京百度网讯科技有限公司 Method and apparatus for generating video
CN113128498A (en) * 2019-12-30 2021-07-16 财团法人工业技术研究院 Cross-domain picture comparison method and system
CN112017179A (en) * 2020-09-09 2020-12-01 杭州时光坐标影视传媒股份有限公司 Method, system, electronic device and storage medium for evaluating visual effect grade of picture
CN113256484A (en) * 2021-05-17 2021-08-13 百果园技术(新加坡)有限公司 Method and device for stylizing image
CN113256484B (en) * 2021-05-17 2023-12-05 百果园技术(新加坡)有限公司 Method and device for performing stylization processing on image
CN116761018A (en) * 2023-08-18 2023-09-15 湖南马栏山视频先进技术研究院有限公司 Real-time rendering system based on cloud platform
CN116761018B (en) * 2023-08-18 2023-10-17 湖南马栏山视频先进技术研究院有限公司 Real-time rendering system based on cloud platform

Similar Documents

Publication Publication Date Title
CN102542593A (en) Interactive video stylized rendering method based on video interpretation
Wu et al. A survey of image synthesis and editing with generative adversarial networks
Hartmann et al. Streetgan: Towards road network synthesis with generative adversarial networks
CN102831638B (en) Three-dimensional human body multi-gesture modeling method by adopting free-hand sketches
CN110222722A (en) Interactive image stylization processing method, calculates equipment and storage medium at system
CN105374007A (en) Generation method and generation device of pencil drawing fusing skeleton strokes and textural features
CN109448015A (en) Image based on notable figure fusion cooperates with dividing method
CN103578107B (en) A kind of interactive image segmentation method
Ren et al. Two-stage sketch colorization with color parsing
CN109118588B (en) Automatic color LOD model generation method based on block decomposition
Fan et al. Structure completion for facade layouts.
CN110189397A (en) A kind of image processing method and device, computer equipment and storage medium
Penhouët et al. Automated deep photo style transfer
CN105678835A (en) Modeling, drawing and rendering method for digital three-dimensional freehand Chinese brush landscape painting
Tong et al. Sketch generation with drawing process guided by vector flow and grayscale
Du et al. 3D building fabrication with geometry and texture coordination via hybrid GAN
CN108062758B A crowd generation simulation method and system based on an image segmentation algorithm
CN102867290B (en) Texture optimization-based non-homogeneous image synthesis method
He Application of local color simulation method of landscape painting based on deep learning generative adversarial networks
Du Application of CAD aided intelligent technology in landscape design
CN112884893A (en) Cross-view-angle image generation method based on asymmetric convolutional network and attention mechanism
Wang et al. Singrav: Learning a generative radiance volume from a single natural scene
CN104091318B (en) A kind of synthetic method of Chinese Sign Language video transition frame
Zhang et al. Procedural modeling of rivers from single image toward natural scene production
Jia et al. Facial expression synthesis based on motion patterns learned from face database

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120704