CN104599305A - Two-dimension and three-dimension combined animation generation method - Google Patents

Two-dimension and three-dimension combined animation generation method

Info

Publication number
CN104599305A
Authority
CN
China
Prior art keywords
dimensional
model
character
viewpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410805149.7A
Other languages
Chinese (zh)
Other versions
CN104599305B (en)
Inventor
耿卫东 (Geng Weidong)
金秉文 (Jin Bingwen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201410805149.7A priority Critical patent/CN104599305B/en
Publication of CN104599305A publication Critical patent/CN104599305A/en
Application granted granted Critical
Publication of CN104599305B publication Critical patent/CN104599305B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation

Abstract

The invention discloses a two-dimension and three-dimension combined animation generation method. The method comprises: inputting multi-view two-dimensional character line drawings and generating a 2.5D model; inputting a character 3D model, establishing the association between the character 3D model and the 2.5D model at the model, stroke and point levels in turn, computing the registration information corresponding to the 2.5D model and to the character 3D model respectively, and forming a hybrid model from the 2.5D model, the character 3D model and their respective registration information; and inputting skeleton motion data and viewpoint change data, whereby the hybrid model together with the input skeleton motion data and viewpoint change data drives the character 3D model and the 2.5D model to deform, generating the animation. The method fuses two-dimensional and three-dimensional animation within the animation production process, so that existing two-dimensional and three-dimensional material is used effectively and animation with vivid and rich pictures is produced.

Description

A two-dimension and three-dimension combined animation generation method
Technical field
The present invention relates to an animation generation method, and in particular to a two-dimension and three-dimension combined animation generation method.
Background art
Two-dimensional (2D) animation is a relatively traditional form of animation in which animators draw every frame by hand. It has great strengths in depicting exaggerated character actions, character expressions and spectacular impact effects, but the labor cost required to produce it is comparatively high.
With the development of computer hardware, three-dimensional (3D) animation has risen. It mainly uses 3D software to build 3D models in a computer, simulates changes of object material, lighting and so on, and obtains the finished animation by rendering with computer graphics techniques. It greatly reduces the labor cost of animation production and shortens the production cycle at the same level of quality, but its artistic look and expressive appeal are inferior to 2D animation.
In recent years, many animated works no longer use 2D or 3D technology alone but combine the strengths of both, for example building the background with 3D techniques while drawing the foreground characters in 2D. This production approach can draw on the respective strengths and advantages of 2D and 3D animation techniques, making up for the lack of spatial realism in 2D animation while retaining its artistic expressiveness. However, in this kind of production the 2D and 3D parts are still made separately, and each separately made part still suffers from the respective shortcomings of 2D or 3D animation. What is lacking is a method that truly fuses the advantages of 2D and 3D animation while suppressing their respective shortcomings.
Summary of the invention
In order to combine the advantages of the 3D and 2D animation techniques and complement their respective deficiencies, the object of the present invention is to propose a two-dimension and three-dimension combined animation generation method that naturally and effectively fuses 3D and 2D, merging 2D and 3D animation within the animation production process, so that existing 2D and 3D material is used effectively and animation with more vivid and rich pictures is produced; the result is better than animation obtained using 2D or 3D technology alone.
The object of the invention is achieved through the following technical solution, which comprises the following steps:
1) inputting multi-view 2D character line drawings and generating a 2.5D model;
2) inputting a character 3D model, establishing the association between the character 3D model and the 2.5D model at the model, stroke and point levels in turn, computing the registration information corresponding to the 2.5D model and to the character 3D model respectively, and forming a hybrid model from the 2.5D model, the character 3D model and their respective registration information;
3) inputting skeleton motion data and viewpoint change data, whereby the hybrid model together with the input skeleton motion data and viewpoint change data drives the character 3D model and the 2.5D model to deform, generating the animation.
In step 1), the 2.5D model is generated from the input multi-view 2D character line drawings using a 2.5D cartoon modeling method. The 2.5D model is constructed from the character line drawings under multiple key viewpoints; a viewpoint whose corresponding character line drawing is used to build the 2.5D model is called a key viewpoint v_j. The key viewpoints include at least the following three: the front view looking at the center of the character's bounding box, the side view looking at the center of the character's bounding box, and the 45° side view looking at the center of the character's bounding box. A key viewpoint v_j is a two-dimensional vector whose two components are the horizontal rotation angle of the viewing direction relative to the frontal viewing direction and the vertical pitch angle, with j the index of the key viewpoint; the two-dimensional space spanned by the horizontal rotation angle and the vertical pitch angle is the viewpoint orientation space. The character line drawings under all key viewpoints contain the same number of 2D strokes, a 2D stroke being denoted s_{i,j}; strokes with the same stroke index i in different character line drawings correspond to one another, each 2D stroke is represented as a set of 2D sample points, and corresponding strokes in different character line drawings have the same number of 2D sample points n_i.
Strokes with the same stroke index in different character line drawings have the same meaning and correspond to one another; for example, if a 2D stroke in one character line drawing has the same physical meaning as a 2D stroke in another character line drawing, e.g. both represent the outline of the nose, then the two strokes correspond and have the same number of sample points n_i.
The 2.5D cartoon modeling method is the 2.5D model method described in the paper "2.5D cartoon models" published by Rivers and co-workers in 2010: A. Rivers, T. Igarashi, and F. Durand, "2.5D cartoon models," ACM TOG, vol. 29, no. 4, p. 59, 2010.
The 2.5D model is constructed from the character line drawings under at least three key viewpoints.
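To make the data involved concrete, the following minimal Python sketch (an illustration only, not part of the claimed method; all class and field names are hypothetical) shows one way to organize a 2.5D model: key viewpoints as (horizontal rotation, vertical pitch) vectors, each carrying one line drawing whose stroke sample counts are aligned with the other key viewpoints.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point2D = Tuple[float, float]

@dataclass
class Stroke2D:
    index: int               # stroke index i, shared across all key viewpoints
    samples: List[Point2D]   # the n_i two-dimensional sample points of the stroke

@dataclass
class KeyViewpoint:
    yaw: float               # horizontal rotation angle relative to the frontal view, in degrees
    pitch: float             # vertical pitch angle, in degrees
    strokes: List[Stroke2D]  # the character line drawing under this key viewpoint

@dataclass
class Model25D:
    viewpoints: List[KeyViewpoint]

    def validate(self) -> None:
        """Corresponding strokes (same index) must have the same sample count n_i in every view."""
        counts = [[len(s.samples) for s in v.strokes] for v in self.viewpoints]
        assert all(c == counts[0] for c in counts), "sample counts differ between key viewpoints"

# The embodiment uses three key viewpoints: front (0, 0), side (90, 0) and 45° side (45, 0).
```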
The detailed process of step 2) is:
2.1) aligning the orientation of the input character 3D model with the orientation of the 2.5D model, which establishes the model-level association between the 2.5D model and the character 3D model;
2.2) with the character 3D model and the 2.5D model aligned in orientation, for each 2D stroke s_{i,j} in each character line drawing, where i is the stroke index and j is the key viewpoint index, drawing on the character 3D model a 3D stroke r_{i,j} corresponding to the 2D stroke s_{i,j}; 3D strokes with the same stroke index under different viewpoints correspond to one another, which establishes the stroke-level association between the 2.5D model and the character 3D model;
Specifically, a key viewpoint v_j of the 2.5D model is chosen, and for each 2D stroke s_{i,j} under this viewpoint a stroke is drawn manually on the screen and projected onto the input character 3D model along the current viewing direction, yielding the 3D stroke r_{i,j} on the character 3D model corresponding to s_{i,j};
2.3) taking the corner points or inflection points on each 2D stroke s_{i,j} of the 2.5D model as 2D key points and the points on each 3D stroke r_{i,j} corresponding to the 2D key points as 3D key points, and resampling the 2D strokes and 3D strokes according to the 2D and 3D key points so that the number of 2D sample points between two adjacent 2D key points equals the number of 3D sample points between the two corresponding adjacent 3D key points; the 2D sample points of each 2D stroke and the 3D sample points of its 3D stroke then correspond one by one, which establishes the point-level association between the 2.5D model and the character 3D model;
2.4) for each 3D sample point q_k on a 3D stroke r_{i,j}, finding the triangular face u of the character 3D model on which it lies and computing its coordinates (λ0, λ1, λ2) in the barycentric coordinate system of face u; the three-dimensional registration information of each 3D sample point q_k is then (λ0, λ1, λ2, u), where λ1, λ2 and λ0 are obtained from the following three formulas (a code sketch is given after step 2.6):
λ1 = [ (y2 − y3)(x − x3) + (x3 − x2)(y − y3) ] / [ (y2 − y3)(x1 − x3) + (x3 − x2)(y1 − y3) ]
λ2 = [ (y3 − y1)(x − x3) + (x1 − x3)(y − y3) ] / [ (y2 − y3)(x1 − x3) + (x3 − x2)(y1 − y3) ]
λ0 = 1 − λ1 − λ2
where λ0, λ1 and λ2 are the first, second and third barycentric coefficients, (x, y) are the planar coordinates of the sample point q_k, and (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) are the three-dimensional coordinates of the three vertices of the triangular face u;
2.5) for each 2D sample point p_k on a 2D stroke s_{i,j}, projecting its corresponding 3D sample point q_k on the character 3D model onto the 2D plane of the current key viewpoint v_j using the projection function f_proj(q_k, v_j), and then computing the two-dimensional registration information B_k of the 2D sample point p_k, where k is the index of the sample point;
Here f_proj is a projection function whose parameters are a 3D point and a viewpoint; it projects the 3D point onto the 2D plane determined by the viewpoint and returns its 2D coordinates;
2.6) forming the hybrid model from the 2.5D model, the character 3D model and their respective three-dimensional and two-dimensional registration information.
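The barycentric coefficients of step 2.4 can be computed as in the following minimal sketch. The patent gives the planar form of the formulas, using only x and y; for a genuinely three-dimensional face one would typically first project the sample point into the plane of the triangle, which is an additional assumption not stated in the text.

```python
def barycentric_xy(q, v1, v2, v3):
    """Barycentric coefficients (lambda0, lambda1, lambda2) of point q with respect to the
    triangle (v1, v2, v3), using the planar formulas of step 2.4: only the x and y
    components of the vertices are used."""
    x, y = q[0], q[1]
    x1, y1 = v1[0], v1[1]
    x2, y2 = v2[0], v2[1]
    x3, y3 = v3[0], v3[1]
    d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    lam1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / d
    lam2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / d
    lam0 = 1.0 - lam1 - lam2
    return lam0, lam1, lam2

# Example: the centroid of a triangle has coefficients (1/3, 1/3, 1/3).
print(barycentric_xy((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0)))
```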
The two-dimensional registration information B_k is computed as follows (a code sketch is given below):
B_k is a set composed of two-dimensional registration parameters, and its dimension equals the number of 2D sample points n_i of the 2D stroke s_{i,j}; b_{k,l}, an element of B_k, is its l-th two-dimensional registration parameter, where l is the index of the corresponding 3D sample point; each b_{k,l} is computed from the following formula:
b_{k,l} = |p_k − f_proj(q_l, v_j)|^(−2) / Σ_{l=1..n_i} |p_k − f_proj(q_l, v_j)|^(−2)
where q_l denotes a 3D sample point different from the 3D sample point q_k;
The resulting two-dimensional registration information is B_k = { b_{k,l} | l = 1, 2, ..., n_i }.
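A minimal sketch of the weight computation: b_{k,l} is the inverse squared distance between the 2D sample point p_k and the projection of the l-th 3D sample point, normalized over all sample points of the stroke (the epsilon guard against a zero distance is an implementation detail not stated in the patent).

```python
def registration_weights(p_k, projected_q):
    """Two-dimensional registration weights B_k = {b_{k,l}} for one 2D sample point p_k:
    normalized inverse squared distances to the projections f_proj(q_l, v_j) of the
    3D sample points of the corresponding 3D stroke (step 2.5)."""
    eps = 1e-12
    inv_sq = [1.0 / max(eps, (p_k[0] - qx) ** 2 + (p_k[1] - qy) ** 2)
              for qx, qy in projected_q]
    total = sum(inv_sq)
    return [w / total for w in inv_sq]

# Example: p_k lies closest to the second projected sample point, so that weight dominates.
print(registration_weights((1.0, 0.0), [(0.0, 0.0), (1.1, 0.0), (3.0, 0.0)]))
```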
In step 3), the viewpoint change data must satisfy the following constraint: in the viewpoint orientation space, the viewpoint coordinates of every animation frame must lie on the line segment between the coordinates of two key viewpoints, or inside the two-dimensional convex hull formed by the coordinates of all key viewpoints (a minimal check is sketched below). The skeleton motion data define the character skeleton pose in every frame of the animation, and the viewpoint change data define the coordinates of the viewpoint in the viewpoint orientation space in every frame.
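The following sketch checks the simpler of the two admissibility conditions, namely that a frame's viewpoint lies on the segment between some pair of key viewpoints; the general convex-hull test is omitted (a point-in-convex-polygon test or, for instance, scipy.spatial.ConvexHull could be used). Function names are illustrative only.

```python
def on_segment(p, a, b, tol=1e-9):
    """True if point p lies on the segment from a to b in viewpoint orientation space."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if abs(cross) > tol:
        return False
    dot = (p[0] - a[0]) * (b[0] - a[0]) + (p[1] - a[1]) * (b[1] - a[1])
    return 0.0 <= dot <= (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2

def viewpoint_allowed(v, key_viewpoints):
    """Frame viewpoint v is admissible if it lies on the segment between some pair of key viewpoints."""
    pts = list(key_viewpoints)
    return any(on_segment(v, pts[i], pts[j])
               for i in range(len(pts)) for j in range(i + 1, len(pts)))

# With the embodiment's key viewpoints (0,0), (45,0), (90,0), the viewpoint (30, 0) is admissible.
print(viewpoint_allowed((30.0, 0.0), [(0.0, 0.0), (45.0, 0.0), (90.0, 0.0)]))
```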
The concrete animation generation process of step 3) is:
3.1) for each frame F of the animation, deforming the character 3D model by the skeletal subspace deformation method using the skeleton motion data;
Then, according to the three-dimensional registration information, updating the positions of the 3D sample points q_l of all 3D strokes with the following formula to obtain the deformed 3D strokes:
q_l' = λ0·V0(u) + λ1·V1(u) + λ2·V2(u)
where q_l' is the new 3D sample point of the deformed 3D stroke, and V0(u), V1(u) and V2(u) are the three-dimensional coordinates of the three vertices of the triangular face u on which the sample point lies;
3.2) according to the new positions of the 3D sample points on all deformed 3D strokes and the two-dimensional registration information obtained in step 2), updating each 2D sample point p_k of each 2D stroke s_{i,j} of the character line drawings of the 2.5D model under the key viewpoints with the following formula to obtain the deformed 2D strokes s_{i,j}':
p_k' = p_k + Σ_{l=1..n_i} b_{k,l} · ( f_proj(q_l', v_j) − f_proj(q_l, v_j) )
where p_k' is the new 2D sample point of the deformed 2D stroke s_{i,j}' (a code sketch of steps 3.1 and 3.2 is given after step 3.5);
3.3) using the adjacent key viewpoint lookup method of the 2.5D cartoon modeling method to find all key viewpoints adjacent to the current viewpoint v_x, which together form the adjacent key viewpoint set;
Then, from the deformed 2D strokes under all key viewpoints of the adjacent key viewpoint set, the key-viewpoint stroke interpolation method of the 2.5D cartoon modeling method is used to generate the 2D strokes under the current viewpoint v_x; the generated 2D stroke under the current viewpoint v_x is denoted s_{i,x}';
3.4) projecting each deformed 3D stroke under each key viewpoint of the adjacent key viewpoint set onto the 2D plane of the current viewpoint v_x to obtain the projections of the 3D strokes, and first performing a first fusion that merges those projections that have the same stroke index;
The projection of each 3D stroke obtained from the first fusion is then merged, in a second fusion, with the 2D stroke of the same stroke index generated under the current viewpoint v_x in step 3.3). Both fusions are weighted sums: the weighted sum of two 2D strokes is realized as a weighted sum of the coordinates of their corresponding 2D sample points, which yields the 2D stroke after the two fusions;
Here the fusion weight parameter ω represents the weight of the 2D stroke in the fusion and can be adjusted independently for each stroke as required; r_{i,y}' denotes the deformed 3D stroke under key viewpoint v_y, v_y denotes a key viewpoint in the adjacent key viewpoint set, and t_y denotes the weight of v_y, which can be computed with the viewpoint weighting method of the 2.5D cartoon modeling method;
3.5) adjusting the fusion weight parameter ω; the 2D strokes of the character line drawings in every frame of the animation then yield the fused 2D animation.
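A minimal sketch of steps 3.1 and 3.2: each 3D stroke sample point is re-evaluated from its barycentric registration on the deformed mesh, and each 2D sample point is moved by the registration-weighted displacement of the projected 3D sample points. The concrete camera model inside f_proj (a yaw/pitch rotation followed by an orthographic projection) is an assumption; the patent only requires a function that maps a 3D point to the 2D plane of a viewpoint.

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Vec2 = Tuple[float, float]

def f_proj(q: Vec3, v: Vec2) -> Vec2:
    """Assumed projection: rotate the point by the viewpoint's yaw/pitch (degrees) and drop
    the depth axis (orthographic).  The actual camera model is not specified in the patent."""
    yaw, pitch = math.radians(v[0]), math.radians(v[1])
    x, y, z = q
    x, z = x * math.cos(yaw) - z * math.sin(yaw), x * math.sin(yaw) + z * math.cos(yaw)
    y, z = y * math.cos(pitch) - z * math.sin(pitch), y * math.sin(pitch) + z * math.cos(pitch)
    return (x, y)

def deform_3d_samples(registrations, deformed_vertices) -> List[Vec3]:
    """Step 3.1: re-evaluate each 3D sample point from its registration (lam0, lam1, lam2, face),
    where face holds the vertex indices of the triangular face u: q' = lam0*V0 + lam1*V1 + lam2*V2."""
    out = []
    for lam0, lam1, lam2, face in registrations:
        v0, v1, v2 = (deformed_vertices[idx] for idx in face)
        out.append(tuple(lam0 * a + lam1 * b + lam2 * c for a, b, c in zip(v0, v1, v2)))
    return out

def deform_2d_stroke(p, B, q_old, q_new, v_j) -> List[Vec2]:
    """Step 3.2: move every 2D sample point by the registration-weighted displacement of the
    projected 3D sample points: p_k' = p_k + sum_l b_{k,l} (f_proj(q_l', v_j) - f_proj(q_l, v_j))."""
    old_proj = [f_proj(q, v_j) for q in q_old]
    new_proj = [f_proj(q, v_j) for q in q_new]
    deformed = []
    for p_k, weights in zip(p, B):
        dx = sum(b * (n[0] - o[0]) for b, n, o in zip(weights, new_proj, old_proj))
        dy = sum(b * (n[1] - o[1]) for b, n, o in zip(weights, new_proj, old_proj))
        deformed.append((p_k[0] + dx, p_k[1] + dy))
    return deformed
```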
The beneficial effects of the invention are:
The method of the invention fuses 2D and 3D animation, makes effective use of existing 2D and 3D resources in the animation production process, and generates animation with more vivid and rich pictures. By adjusting the weights, the invention controls the resulting animation effect.
Compared with animation produced from 2D or 3D material alone, the animation generated by the method of the invention is more flexible and varied.
Brief description of the drawings
Fig. 1 shows the multi-view 2D character line drawings input in the embodiment.
Fig. 2 shows a hand-drawn 3D stroke and the corresponding stroke on the 2.5D model in the embodiment.
Fig. 3 shows 5 skeleton pose frames extracted from the walking skeleton motion data input in the embodiment.
Fig. 4 shows the walking animation generated from the 3D model in the embodiment.
Fig. 5 shows the walking animation generated from the 2.5D model in the embodiment.
Fig. 6 shows the walking animation obtained in the embodiment by fusing the animation generated from the 3D model with the animation generated from the 2.5D model.
Embodiment
The invention is further described below with reference to the drawings and the embodiment.
The embodiment of the present invention is as follows:
1) Multi-view 2D character line drawings are input and the 2.5D model is generated.
The multi-view 2D character line drawings shown in Fig. 1 are input, and the 2.5D cartoon modeling method is used to generate the 2.5D model; the 2.5D model is constructed from the character line drawings under multiple key viewpoints, in this embodiment from the character line drawings under three key viewpoints.
The three key viewpoints are: the front view looking at the center of the character's bounding box, the side view looking at the center of the character's bounding box, and the 45° side view looking at the center of the character's bounding box. A key viewpoint v_j is a two-dimensional vector whose two components are the horizontal rotation angle of the viewing direction relative to the frontal viewing direction and the vertical pitch angle, with j the index of the key viewpoint, j = 1, 2, 3; the two-dimensional space spanned by the horizontal rotation angle and the vertical pitch angle is the viewpoint orientation space, so that v_1 = (0, 0), v_2 = (90, 0), v_3 = (45, 0);
The character line drawings under all key viewpoints contain the same number of 2D strokes, a 2D stroke being denoted s_{i,j}; strokes with the same stroke index in different character line drawings correspond to one another, each 2D stroke is represented as a set of 2D sample points, and corresponding strokes in different character line drawings have the same number of 2D sample points n_i.
Strokes with the same stroke index in different character line drawings have the same meaning and correspond to one another; for example, if a 2D stroke in one character line drawing has the same physical meaning as a 2D stroke in another character line drawing, e.g. both represent the outline of the nose, then the two strokes correspond and have the same number of sample points n_i.
The 2.5D cartoon modeling method is the 2.5D model method described in the paper "2.5D cartoon models" published by Rivers and co-workers in 2010: A. Rivers, T. Igarashi, and F. Durand, "2.5D cartoon models," ACM TOG, vol. 29, no. 4, p. 59, 2010.
2) A character 3D model is input, the association between the character 3D model and the 2.5D model is established at the model, stroke and point levels in turn, the registration information corresponding to the 2.5D model and to the character 3D model is computed, and the hybrid model is formed from the 2.5D model, the character 3D model and their respective registration information:
2.1) The orientation of the input character 3D model is aligned with the orientation of the 2.5D model, which establishes the model-level association between the 2.5D model and the character 3D model;
2.2) With the character 3D model and the 2.5D model aligned in orientation, for each 2D stroke s_{i,j} in each character line drawing, where i is the stroke index and j is the key viewpoint index, a 3D stroke r_{i,j} corresponding to the 2D stroke s_{i,j} is drawn on the character 3D model; 3D strokes with the same stroke index under different viewpoints correspond to one another, which establishes the stroke-level association between the 2.5D model and the character 3D model;
Specifically, a key viewpoint v_j of the 2.5D model is chosen, and for each 2D stroke s_{i,j} under this viewpoint a stroke is drawn manually on the screen and projected onto the input character 3D model along the current viewing direction, yielding the 3D stroke r_{i,j} on the character 3D model corresponding to s_{i,j};
As shown in Fig. 2, the left panel shows the character line drawing under key viewpoint v_1, in which the stroke marked in black is a selected 2D stroke s_{1,1}; the right panel shows the part of the 3D model corresponding to the left panel, on which the black stroke is the 3D stroke r_{1,1} drawn on the 3D model;
2.3) The corner points or inflection points on each 2D stroke s_{i,j} of the 2.5D model are chosen as 2D key points; as shown in the left panel of Fig. 2, the 6 circles on the selected 2D stroke s_{1,1} are the chosen key points. The points on each 3D stroke r_{i,j} corresponding to the 2D key points are chosen as 3D key points; as shown in the right panel of Fig. 2, the 6 numbered circles drawn with the darker auxiliary strokes are identical to the key points chosen on s_{1,1} and serve to assist in choosing the corresponding key points on the 3D stroke r_{1,1}, while the 6 circles on the lighter stroke r_{1,1} are the key points chosen on r_{1,1};
The 2D strokes and 3D strokes are then resampled according to the 2D and 3D key points so that the number of 2D sample points between two adjacent 2D key points equals the number of 3D sample points between the two corresponding adjacent 3D key points. In this embodiment the sample count of each span is set to the rounded arc length between the two adjacent 2D key points on s_{i,1}, so that every 2D sample point on s_{i,1} can be matched to a corresponding 3D sample point on r_{i,1}, which establishes the point-level association between the 2.5D model and the character 3D model (a resampling sketch is given at the end of step 2).
2.4) For each 3D sample point q_k on a 3D stroke r_{i,j}, the triangular face u of the character 3D model on which it lies is found, and its coordinates (λ0, λ1, λ2) in the barycentric coordinate system of face u are computed; the three-dimensional registration information of each 3D sample point q_k is (λ0, λ1, λ2, u);
2.5) For each 2D sample point p_k on a 2D stroke s_{i,j}, its corresponding 3D sample point q_k on the character 3D model is projected onto the 2D plane of the current key viewpoint v_j using the projection function f_proj(q_k, v_j), and the two-dimensional registration information B_k of the 2D sample point p_k is then computed, where k is the index of the sample point;
Here f_proj is a projection function whose parameters are a 3D point and a viewpoint; it projects the 3D point onto the 2D plane determined by the viewpoint and returns its 2D coordinates;
2.6) The hybrid model is formed from the 2.5D model, the character 3D model and their respective three-dimensional and two-dimensional registration information.
The two-dimensional registration information B_k is obtained as follows: B_k is a set composed of two-dimensional registration parameters, and its dimension equals the number of 2D sample points n_i of the 2D stroke s_{i,j}; b_{k,l}, an element of B_k, is its l-th two-dimensional registration parameter, where l is the index of the corresponding 3D sample point; the resulting two-dimensional registration information is B_k = { b_{k,l} | l = 1, 2, ..., n_i }.
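A minimal sketch of the resampling used in step 2.3: one span of a stroke between two adjacent key points is resampled into n points spaced uniformly along its arc length; in this embodiment n is the rounded arc length of the corresponding span on s_{i,1}, so the 2D span and the 3D span receive the same number of samples. The function names are illustrative only.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def arc_length(poly: List[Point]) -> float:
    return sum(math.dist(a, b) for a, b in zip(poly, poly[1:]))

def resample(poly: List[Point], n: int) -> List[Point]:
    """Resample a polyline span between two adjacent key points into n points spaced
    uniformly along its arc length."""
    total = arc_length(poly)
    if n < 2 or total == 0.0:
        return [poly[0]] * max(n, 1)
    targets = [total * k / (n - 1) for k in range(n)]
    out, acc, seg = [], 0.0, 0
    for t in targets:
        # advance to the polyline segment that contains arc-length position t
        while seg < len(poly) - 2 and acc + math.dist(poly[seg], poly[seg + 1]) < t:
            acc += math.dist(poly[seg], poly[seg + 1])
            seg += 1
        a, b = poly[seg], poly[seg + 1]
        seg_len = math.dist(a, b)
        u = min(max((t - acc) / seg_len if seg_len > 0.0 else 0.0, 0.0), 1.0)
        out.append((a[0] + u * (b[0] - a[0]), a[1] + u * (b[1] - a[1])))
    return out

# Example: a right-angle span of total length 4 resampled into 5 evenly spaced points.
print(resample([(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)], 5))
```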
3) Skeleton motion data and viewpoint change data are input, and the hybrid model together with the input skeleton motion data and viewpoint change data drives the character 3D model and the 2.5D model to deform, generating the animation.
Skeleton motion data of the character walking and viewpoint change data in which the viewpoint changes slowly from the front to the side are input to drive the 3D model and the 2.5D model to deform and generate the animation; Fig. 3 shows 5 skeleton poses extracted from the input walking skeleton motion data.
In step 3), the viewpoint change data must satisfy the constraint that, in the viewpoint orientation space, the viewpoint coordinates of every animation frame lie on the line segment between the coordinates of two key viewpoints or inside the two-dimensional convex hull formed by the coordinates of all key viewpoints.
3.1) For each frame F of the animation, the character 3D model is deformed by the skeletal subspace deformation method using the skeleton motion data;
The skeletal subspace deformation method used is the one proposed in: Magnenat-Thalmann, Nadia, Richard Laperrière, and Daniel Thalmann, "Joint-dependent local deformations for hand animation and object grasping," in Proceedings on Graphics Interface '88, 1988.
The positions of the 3D sample points q_l of all 3D strokes are updated according to the three-dimensional registration information, q_l' = λ0·V0(u) + λ1·V1(u) + λ2·V2(u), yielding the deformed 3D strokes;
3.2) According to the new positions of the 3D sample points on all deformed 3D strokes and the two-dimensional registration information obtained in step 2), each 2D sample point p_k of each 2D stroke s_{i,j} of the character line drawings of the 2.5D model under the key viewpoints is updated, yielding the deformed 2D strokes s_{i,j}';
3.3) The adjacent key viewpoint lookup method of the 2.5D cartoon modeling method is used to find all key viewpoints adjacent to the current viewpoint v_x, which together form the adjacent key viewpoint set; then, from the deformed 2D strokes under all key viewpoints of the adjacent key viewpoint set, the key-viewpoint stroke interpolation method of the 2.5D cartoon modeling method is used to generate the 2D strokes under the current viewpoint v_x; the generated 2D stroke under the current viewpoint v_x is denoted s_{i,x}';
3.4) Each deformed 3D stroke under each key viewpoint of the adjacent key viewpoint set is projected onto the 2D plane of the current viewpoint v_x to obtain the projections of the 3D strokes, and a first fusion merges those projections that have the same stroke index;
The projection of each 3D stroke obtained from the first fusion is then merged, in a second fusion, with the 2D stroke of the same stroke index generated under the current viewpoint v_x in step 3.3); the weighted sum of two 2D strokes is realized as a weighted sum of the coordinates of their corresponding 2D sample points, yielding the 2D strokes after the two fusions (a fusion sketch is given after step 3.5);
3.5) Finally, the fusion effect is adjusted manually through the fusion weight parameter ω:
When the fusion weight parameter ω of all strokes is 0, the generated animation is based entirely on the 3D model, as shown in Fig. 4; when ω of all strokes is 1, the generated animation is based entirely on the 2.5D model, as shown in Fig. 5; and when ω of a stroke is set to a value between 0 and 1, the generated stroke lies between the 2.5D and 3D results, as shown in Fig. 6. It can be seen that, after the weight parameter is adjusted, the animation obtained by fusing the animation generated from the 2.5D model with the animation generated from the 3D model is of better quality than the animation generated from the 2.5D model or the 3D model alone.
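A minimal sketch of the two-stage fusion of steps 3.4 and 3.5. The exact fusion formula is not reproduced in the text of the patent, so the sketch assumes a convex combination controlled by the fusion weight ω, which matches the stated behaviour (ω = 0 gives the purely 3D-driven stroke, ω = 1 the purely 2.5D stroke); the viewpoint weights t_y are assumed to sum to 1. All names are illustrative.

```python
from typing import List, Tuple

Point = Tuple[float, float]
Stroke = List[Point]  # sample points, aligned across the strokes being fused

def weighted_sum(strokes: List[Stroke], weights: List[float]) -> Stroke:
    """First fusion: merge the projections of a deformed 3D stroke from all adjacent key
    viewpoints v_y with the viewpoint weights t_y (assumed to sum to 1)."""
    merged = [(0.0, 0.0)] * len(strokes[0])
    for stroke, t in zip(strokes, weights):
        merged = [(mx + t * sx, my + t * sy) for (mx, my), (sx, sy) in zip(merged, stroke)]
    return merged

def fuse_stroke(projected_3d: List[Stroke], t: List[float],
                stroke_2d: Stroke, omega: float) -> Stroke:
    """Two-stage fusion for one stroke index: merge the 3D-stroke projections, then blend the
    result with the interpolated 2D stroke s_{i,x}' using the fusion weight omega."""
    merged_3d = weighted_sum(projected_3d, t)
    return [((1.0 - omega) * mx + omega * sx, (1.0 - omega) * my + omega * sy)
            for (mx, my), (sx, sy) in zip(merged_3d, stroke_2d)]
```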
It can thus be seen that the method of the invention generates animation with more vivid and rich pictures; compared with complete or partial animation produced from 2D or 3D material alone, it is more flexible and varied and achieves a significant technical effect.

Claims (7)

1. A two-dimension and three-dimension combined animation generation method, characterized by comprising the following steps:
1) inputting multi-view 2D character line drawings and generating a 2.5D model;
2) inputting a character 3D model, establishing the association between the character 3D model and the 2.5D model at the model, stroke and point levels in turn, computing the registration information corresponding to the 2.5D model and to the character 3D model respectively, and forming a hybrid model from the 2.5D model, the character 3D model and their respective registration information;
3) inputting skeleton motion data and viewpoint change data, whereby the hybrid model together with the input skeleton motion data and viewpoint change data drives the character 3D model and the 2.5D model to deform, generating the animation.
2. The two-dimension and three-dimension combined animation generation method according to claim 1, characterized in that: in step 1), the 2.5D model is generated from the input multi-view 2D character line drawings using a 2.5D cartoon modeling method; the 2.5D model is constructed from the character line drawings under multiple key viewpoints, and the key viewpoints include at least the following three: the front view looking at the center of the character's bounding box, the side view looking at the center of the character's bounding box, and the 45° side view looking at the center of the character's bounding box; a key viewpoint v_j is a two-dimensional vector whose two components are the horizontal rotation angle of the viewing direction relative to the frontal viewing direction and the vertical pitch angle, with j the index of the key viewpoint; the two-dimensional space spanned by the horizontal rotation angle and the vertical pitch angle is the viewpoint orientation space; the character line drawings under all key viewpoints contain the same number of 2D strokes, strokes with the same stroke index in different character line drawings correspond to one another, and corresponding strokes in different character line drawings have the same number of 2D sample points n_i.
3. The two-dimension and three-dimension combined animation generation method according to claim 1 or 2, characterized in that: the 2.5D model is constructed from the character line drawings under at least three key viewpoints.
4. The two-dimension and three-dimension combined animation generation method according to claim 1, characterized in that the detailed process of step 2) is:
2.1) aligning the orientation of the input character 3D model with the orientation of the 2.5D model, which establishes the model-level association between the 2.5D model and the character 3D model;
2.2) with the character 3D model and the 2.5D model aligned in orientation, for each 2D stroke s_{i,j} in each character line drawing, where i is the stroke index and j is the key viewpoint index, drawing on the character 3D model a 3D stroke r_{i,j} corresponding to the 2D stroke s_{i,j}; 3D strokes with the same stroke index under different viewpoints correspond to one another, which establishes the stroke-level association between the 2.5D model and the character 3D model;
2.3) taking the corner points or inflection points on each 2D stroke s_{i,j} of the 2.5D model as 2D key points and the points on each 3D stroke r_{i,j} corresponding to the 2D key points as 3D key points, and resampling the 2D strokes and 3D strokes according to the 2D and 3D key points so that the number of 2D sample points between two adjacent 2D key points equals the number of 3D sample points between the two corresponding adjacent 3D key points; the 2D sample points of each 2D stroke and the 3D sample points of its 3D stroke then correspond one by one, which establishes the point-level association between the 2.5D model and the character 3D model;
2.4) for each 3D sample point q_k on a 3D stroke r_{i,j}, finding the triangular face u of the character 3D model on which it lies and computing its coordinates (λ0, λ1, λ2) in the barycentric coordinate system of face u; the three-dimensional registration information of each 3D sample point q_k is (λ0, λ1, λ2, u), where λ1, λ2 and λ0 are obtained from the following three equations:
λ1 = [ (y2 − y3)(x − x3) + (x3 − x2)(y − y3) ] / [ (y2 − y3)(x1 − x3) + (x3 − x2)(y1 − y3) ]
λ2 = [ (y3 − y1)(x − x3) + (x1 − x3)(y − y3) ] / [ (y2 − y3)(x1 − x3) + (x3 − x2)(y1 − y3) ]
λ0 = 1 − λ1 − λ2
where λ0, λ1 and λ2 are the first, second and third barycentric coefficients, (x, y) are the planar coordinates of the sample point q_k, and (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) are the three-dimensional coordinates of the three vertices of the triangular face u;
2.5) for each 2D sample point p_k on a 2D stroke s_{i,j}, projecting its corresponding 3D sample point q_k on the character 3D model onto the 2D plane of the current key viewpoint v_j using the projection function f_proj(q_k, v_j), and then computing the two-dimensional registration information B_k of the 2D sample point p_k, where k is the index of the sample point;
2.6) forming the hybrid model from the 2.5D model, the character 3D model and their respective three-dimensional and two-dimensional registration information.
5. The two-dimension and three-dimension combined animation generation method according to claim 4, characterized in that the two-dimensional registration information B_k is computed as follows:
B_k is a set composed of two-dimensional registration parameters, and its dimension equals the number of 2D sample points n_i of the 2D stroke s_{i,j}; b_{k,l}, an element of B_k, is its l-th two-dimensional registration parameter, where l is the index of the corresponding 3D sample point; each b_{k,l} is computed from the following formula:
b_{k,l} = |p_k − f_proj(q_l, v_j)|^(−2) / Σ_{l=1..n_i} |p_k − f_proj(q_l, v_j)|^(−2)
where q_l denotes a 3D sample point different from the 3D sample point q_k;
The resulting two-dimensional registration information is B_k = { b_{k,l} | l = 1, 2, ..., n_i }.
6. The two-dimension and three-dimension combined animation generation method according to claim 1, characterized in that: in step 3), the viewpoint change data must satisfy the following constraint: in the viewpoint orientation space, the viewpoint coordinates of every animation frame must lie on the line segment between the coordinates of two key viewpoints, or inside the two-dimensional convex hull formed by the coordinates of all key viewpoints.
7. The two-dimension and three-dimension combined animation generation method according to claim 1, characterized in that the concrete animation generation process of step 3) is:
3.1) for each frame F of the animation, deforming the character 3D model by the skeletal subspace deformation method using the skeleton motion data; then, according to the three-dimensional registration information, updating the positions of the 3D sample points q_l of all 3D strokes with the following formula to obtain the deformed 3D strokes:
q_l' = λ0·V0(u) + λ1·V1(u) + λ2·V2(u)
where q_l' is the new 3D sample point of the deformed 3D stroke, and V0(u), V1(u) and V2(u) are the three-dimensional coordinates of the three vertices of the triangular face u on which the sample point lies;
3.2) according to the new positions of all deformed 3D sample points and the two-dimensional registration information obtained in step 2), updating each 2D sample point p_k of each 2D stroke s_{i,j} of the character line drawings of the 2.5D model under the key viewpoints with the following formula to obtain the deformed 2D strokes s_{i,j}':
p_k' = p_k + Σ_{l=1..n_i} b_{k,l} · ( f_proj(q_l', v_j) − f_proj(q_l, v_j) )
where p_k' is the new 2D sample point of the deformed 2D stroke s_{i,j}';
3.3) using the adjacent key viewpoint lookup method of the 2.5D cartoon modeling method to find all key viewpoints adjacent to the current viewpoint v_x, which together form the adjacent key viewpoint set;
then, from the deformed 2D strokes under all key viewpoints of the adjacent key viewpoint set, using the key-viewpoint stroke interpolation method of the 2.5D cartoon modeling method to generate the 2D strokes under the current viewpoint v_x; the generated 2D stroke under the current viewpoint v_x is denoted s_{i,x}';
3.4) projecting each deformed 3D stroke under each key viewpoint of the adjacent key viewpoint set onto the 2D plane of the current viewpoint v_x to obtain the projections of the 3D strokes, and first performing a first fusion that merges those projections that have the same stroke index;
then merging the projection of each 3D stroke obtained from the first fusion, in a second fusion, with the 2D stroke of the same stroke index generated under the current viewpoint v_x in step 3.3); both fusions are carried out as weighted sums of the coordinates of corresponding 2D sample points, yielding the 2D strokes after the two fusions;
where the fusion weight parameter ω represents the weight of the 2D stroke in the fusion, r_{i,y}' denotes the deformed 3D stroke under key viewpoint v_y, v_y denotes a key viewpoint in the adjacent key viewpoint set, and t_y denotes the weight of v_y;
3.5) adjusting the fusion weight parameter ω; the 2D strokes of the character line drawings in every frame of the animation then yield the fused 2D animation.
CN201410805149.7A 2014-12-22 2014-12-22 Two-dimension and three-dimension combined animation generation method Expired - Fee Related CN104599305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410805149.7A CN104599305B (en) 2014-12-22 2014-12-22 Two-dimension and three-dimension combined animation generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410805149.7A CN104599305B (en) 2014-12-22 2014-12-22 Two-dimension and three-dimension combined animation generation method

Publications (2)

Publication Number Publication Date
CN104599305A true CN104599305A (en) 2015-05-06
CN104599305B CN104599305B (en) 2017-07-14

Family

ID=53125055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410805149.7A Expired - Fee Related CN104599305B (en) 2014-12-22 2014-12-22 Two-dimension and three-dimension combined animation generation method

Country Status (1)

Country Link
CN (1) CN104599305B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106131535A (en) * 2016-07-29 2016-11-16 传线网络科技(上海)有限公司 Video capture method and device, video generation method and device
CN108846884A (en) * 2018-05-29 2018-11-20 电子科技大学 A kind of adaptive weighting setting method of three-dimensional animation algorithm
CN109074670A (en) * 2016-04-28 2018-12-21 株式会社Live2D Program, information processing unit, disturbance degree deriving method, image generating method and recording medium
CN109345616A (en) * 2018-08-30 2019-02-15 腾讯科技(深圳)有限公司 Two dimension rendering map generalization method, equipment and the storage medium of three-dimensional pet
CN111105484A (en) * 2019-12-03 2020-05-05 北京视美精典影业有限公司 Paperless 2D (two-dimensional) string frame optimization method
CN111369654A (en) * 2019-12-03 2020-07-03 北京视美精典影业有限公司 2D CG animation mixing method
CN112017179A (en) * 2020-09-09 2020-12-01 杭州时光坐标影视传媒股份有限公司 Method, system, electronic device and storage medium for evaluating visual effect grade of picture

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10326353A (en) * 1997-05-23 1998-12-08 Matsushita Electric Ind Co Ltd Three-dimensional character animation display device, and three-dimensional motion data transmission system
CN1895168A (en) * 2005-10-26 2007-01-17 浙江大学 Three-dimensional feet data measuring method to sparse grid based on curve subdivision
CN1945632A (en) * 2006-10-19 2007-04-11 浙江大学 Forming and editing method for three dimension martial art actions based on draft driven by data
CN103514619A (en) * 2012-06-27 2014-01-15 甲尚股份有限公司 System and method for performing three-dimensional motion by two-dimensional character

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10326353A (en) * 1997-05-23 1998-12-08 Matsushita Electric Ind Co Ltd Three-dimensional character animation display device, and three-dimensional motion data transmission system
CN1895168A (en) * 2005-10-26 2007-01-17 浙江大学 Three-dimensional feet data measuring method to sparse grid based on curve subdivision
CN1945632A (en) * 2006-10-19 2007-04-11 浙江大学 Forming and editing method for three dimension martial art actions based on draft driven by data
CN103514619A (en) * 2012-06-27 2014-01-15 甲尚股份有限公司 System and method for performing three-dimensional motion by two-dimensional character

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Xiang: "Cartoon character animation generation method fused with hand-drawn style", Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109074670A (en) * 2016-04-28 2018-12-21 株式会社Live2D Program, information processing unit, disturbance degree deriving method, image generating method and recording medium
CN109074670B (en) * 2016-04-28 2023-04-28 株式会社Live2D Information processing apparatus, image generating method, and recording medium
CN106131535A (en) * 2016-07-29 2016-11-16 传线网络科技(上海)有限公司 Video capture method and device, video generation method and device
CN108846884A (en) * 2018-05-29 2018-11-20 电子科技大学 A kind of adaptive weighting setting method of three-dimensional animation algorithm
CN109345616A (en) * 2018-08-30 2019-02-15 腾讯科技(深圳)有限公司 Two dimension rendering map generalization method, equipment and the storage medium of three-dimensional pet
CN111105484A (en) * 2019-12-03 2020-05-05 北京视美精典影业有限公司 Paperless 2D (two-dimensional) string frame optimization method
CN111369654A (en) * 2019-12-03 2020-07-03 北京视美精典影业有限公司 2D CG animation mixing method
CN111105484B (en) * 2019-12-03 2023-08-29 北京视美精典影业有限公司 Paperless 2D serial frame optimization method
CN112017179A (en) * 2020-09-09 2020-12-01 杭州时光坐标影视传媒股份有限公司 Method, system, electronic device and storage medium for evaluating visual effect grade of picture
CN112017179B (en) * 2020-09-09 2021-03-02 杭州时光坐标影视传媒股份有限公司 Method, system, electronic device and storage medium for evaluating visual effect grade of picture

Also Published As

Publication number Publication date
CN104599305B (en) 2017-07-14

Similar Documents

Publication Publication Date Title
CN104599305A (en) Two-dimension and three-dimension combined animation generation method
KR101514327B1 (en) Method and apparatus for generating face avatar
CN102663766B (en) Non-photorealistic based art illustration effect drawing method
CN104123747B (en) Multimode touch-control three-dimensional modeling method and system
KR100727034B1 (en) Method for representing and animating 2d humanoid character in 3d space
CN104103090A (en) Image processing method, customized human body display method and image processing system
CN101916454A (en) Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN104103091B (en) 3D intelligent modeling method and system and 3D model flexible manufacturing system
CN102306386A (en) Method for quickly constructing third dimension tree model from single tree image
CN104063888B (en) A kind of wave spectrum artistic style method for drafting based on feeling of unreality
CN104463954A (en) Three-dimensional image surface detail simulation method and system
CN104778736A (en) Three-dimensional garment animation generation method driven by single video content
CN105096358A (en) Line enhanced simulation method for pyrography artistic effect
CN104091366B (en) Three-dimensional intelligent digitalization generation method and system based on two-dimensional shadow information
CN109308380B (en) Embroidery artistic style simulation method based on non-photorealistic sense
CN105096359A (en) Drawing method of cucurbit pyrography artistic style
Xu Face reconstruction based on multiscale feature fusion and 3d animation design
CN104616287A (en) Mobile terminal for 3D image acquisition and 3D printing and method
Yang et al. Creating a virtual activity for the intangible culture heritage
Li Architectural design virtual simulation based on virtual reality technology
Zhao et al. The application of traditional Chinese painting technique and stroke effect in digital ink painting
CN103198520A (en) Individuation portrait product design method based on control point and control line neighborhood deformation
CN102467747A (en) Building decoration animation three-dimensional (3D) effect processing method
CN100446039C (en) Layering view point correlated model based computer assisted two-dimensional cartoon drawing method by hand
CN103268627B (en) The generation method of the marbleizing glaze texture on a kind of intelligent mobile phone platform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170714

Termination date: 20181222

CF01 Termination of patent right due to non-payment of annual fee