CN1174346C - Method for making 3D human face animation - Google Patents

Method for making 3D human face animation

Info

Publication number
CN1174346C
CN1174346C CNB011135867A CN01113586A
Authority
CN
China
Prior art keywords
motion
human face
reference mark
point
functional areas
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB011135867A
Other languages
Chinese (zh)
Other versions
CN1383102A (en)
Inventor
张青山 (Zhang Qingshan)
陈国良 (Chen Guoliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CNB011135867A priority Critical patent/CN1174346C/en
Publication of CN1383102A publication Critical patent/CN1383102A/en
Application granted granted Critical
Publication of CN1174346C publication Critical patent/CN1174346C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a method for producing three-dimensional facial animation. First, a three-dimensional face mesh model is obtained; the model is then divided into functional areas. Motion control points are designed according to the motion characteristics of each functional area and the interactions between the motions of the areas. A DFFD algorithm computes the influence of the control points on the motion of the model, and the control points drive the model's motion, simulating real facial motion. While preserving the realism of the simulated motion, the method reduces the workload of model design and the computational load of the system, simplifies the control of the simulated motion, and has good real-time performance and scalability.

Description

A method for producing three-dimensional facial animation
The invention belongs to the technical field of computer three-dimensional animation.
Three-dimensional facial animation technology originated in the 1970s. Since then, many researchers have worked in this field, hoping to produce realistic facial animation and to simulate the various expressions and actions of a real face in real time. Because of the complexity of facial physiology and people's sensitivity to facial appearance, however, this goal is difficult to reach. Current work has made some progress, but as practical systems the existing approaches still have shortcomings.
ACM SIGGRAPH '95 Conference Proceedings, pp. 55-62, discloses an animation system based on a hierarchical three-dimensional face model. The design of its model is very complex and requires a large amount of manual work; the computation required to simulate facial motion is also heavy, making real-time performance difficult to achieve; and because the computational cost grows nonlinearly and faster than linearly with the scale of the mesh model (the number of mesh points), the technique scales poorly.
ACM SIGGRAPH '98 Conference Proceedings, pp. 55-66, discloses a system that first tracks, in real time, a large number of marker points placed on a real face in advance, and then computes the motion of the mesh points of a three-dimensional face mesh model as a linear combination of these marker points; pages 75-84 of the same proceedings describe a system that computes mesh-point positions during motion by linear interpolation over a large number of manually marked corresponding feature points. In both systems, collecting the motion data (tracking marker points or marking feature points) consumes a great deal of computation or manual work, and the control of motion is complex; and because of the interpolation methods these systems adopt, a large number of marker or feature points must be set up to obtain higher realism, which is laborious.
ACM Transactions on Mathematical Software, vol. 22, no. 4, pp. 469-483, December 1996, describes the Dirichlet Free-Form Deformation (DFFD) algorithm, which performs well in simulating the deformation of three-dimensional surfaces. At present, this algorithm is used mainly for modeling three-dimensional geometric solids and has not yet been applied to simulating the motion of three-dimensional faces.
The object of the invention is to propose a method for producing three-dimensional facial animation that, while guaranteeing high realism, reduces the computational load of the system and improves its real-time performance; lightens the workload of model design; simplifies the control of facial motion; and improves the scalability of the system.
The method for producing three-dimensional facial animation is characterized in that: first, a three-dimensional face mesh model is obtained; the model is then divided into functional areas; motion control points are designed according to the motion characteristics of each functional area and the interactions between the motions of the areas; the DFFD algorithm is used to compute the influence of the control points on the motion of the mesh model; finally, moving the control points drives the motion of the mesh model, realistically simulating facial motion.
Obtaining the three-dimensional face mesh model means acquiring its data, either by scanning a real face with a three-dimensional laser scanner to generate the model or by reading an existing model from a computer storage device, and then drawing the model on a display with a three-dimensional graphics package.
Dividing the mesh model into functional areas means partitioning the model drawn on the display into several functional areas corresponding to the main moving organs of the face, such as the nose, mouth, eyeballs, cheeks, and forehead. During division, a computer input device, such as a mouse (for picking) or keyboard (for typing), is used to specify the mesh points and facets (the polygons bounded by the edges between mesh points) belonging to each area; each functional area may be further subdivided.
Designing the motion control points according to the motion characteristics of each functional area and the interactions between the motions of the areas means using a computer input device, such as a mouse (picking and dragging) or keyboard, to designate existing mesh points, or to add new three-dimensional points, as motion control points, as follows:
According to the visual understanding of facial motion characteristics, control points governing the motion of a functional area, called active control points, are placed in regions of the area with large motion amplitude or in regions that hardly move. According to the mutual influence of motion between facial organs, i.e., their associated motions, control points governing the associated motion between functional areas, called passive control points, are placed in the corresponding regions of the mesh model; a passive control point serves both as a controlled point of the area that exerts the motion influence and as a control point of the area that receives it.
Using the DFFD algorithm to compute the influence of the control points on the motion of the mesh model means computing the control coefficients of the control points: except for the eyeball and teeth functional areas, whose motion can be simulated as simple rigid motion, the DFFD algorithm computes, for each functional area separately, the control coefficient of each control point with respect to each controlled point. This computation may be done (1) offline: the control coefficients are computed in advance and held constant during the motion of the mesh model; or (2) in real time: the control coefficients are recomputed whenever the spatial positions of the control points and controlled points of a functional area change during the motion of the mesh model.
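The offline computation mode above can be sketched as follows. Computing true DFFD control coefficients requires Sibson (natural-neighbor) coordinates, whose construction is involved; as a hedged stand-in, this sketch uses normalized inverse-distance weights, which share the property relied on here that each controlled point's coefficients sum to one. The function name and point layout are illustrative only, not taken from the patent.

```python
import numpy as np

def control_coefficients(controls, points, eps=1e-9):
    """For one functional area, compute a coefficient matrix W where
    W[i, j] weights control point j's influence on controlled point i.
    True DFFD uses Sibson (natural-neighbor) coordinates; this sketch
    substitutes normalized inverse-distance weights."""
    controls = np.asarray(controls, dtype=float)  # (m, 3)
    points = np.asarray(points, dtype=float)      # (n, 3)
    # pairwise distances between controlled points and control points
    d = np.linalg.norm(points[:, None, :] - controls[None, :, :], axis=2)
    w = 1.0 / (d + eps)                      # closer control points weigh more
    return w / w.sum(axis=1, keepdims=True)  # each row sums to 1

# Offline mode: compute once for the area's rest pose and reuse.
controls = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
points = [[0.25, 0.0, 0.0], [0.75, 0.0, 0.0]]
W = control_coefficients(controls, points)
print(W.shape)  # (2, 2)
```

Under the real-time mode, the same routine would simply be re-run after each change of point positions, which is why the patent reserves it for stronger hardware.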
Driving the motion of the mesh model means first moving the control points; then computing the displacement of each controlled point from the displacements of the control points and their control coefficients for that controlled point; and finally moving the controlled points according to the result, thereby driving the mesh model. Active control points can be moved by hardware or software: by hardware, e.g., dragging a control point with the mouse or typing its displacement on the keyboard; by software, e.g., reading control-point displacements from a disk file, or obtaining facial motion data from third-party software and converting it into control-point displacements according to its rules. The displacement of some active control points can also be obtained by smooth interpolation of the displacements of neighboring active control points. Passive control points move as controlled points of the related active control points, by computation. The concrete computation of a controlled point's displacement is: multiply the displacement of each control point by its control coefficient for that controlled point, as that control point's motion influence on the controlled point; sum all the motion influences acting on the controlled point to obtain its displacement.
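The displacement computation just described (each control point's displacement times its coefficient, summed per controlled point) reduces to a single matrix product. A minimal sketch, with made-up coefficients and point positions:

```python
import numpy as np

def move_controlled_points(points, W, control_displacements):
    """Displace each controlled point by the coefficient-weighted sum of
    the control-point displacements: each control point's displacement
    times its coefficient is its motion influence, and the influences
    acting on a controlled point are summed."""
    points = np.asarray(points, dtype=float)             # (n, 3)
    dc = np.asarray(control_displacements, dtype=float)  # (m, 3)
    return points + W @ dc   # (n, m) @ (m, 3): summed influences per point

# Example: two controlled points, two control points.
W = np.array([[0.75, 0.25],
              [0.25, 0.75]])
points = [[0.25, 0.0, 0.0], [0.75, 0.0, 0.0]]
dc = [[0.0, 1.0, 0.0],   # control 0 moves up by 1
      [0.0, 0.0, 0.0]]   # control 1 stays put
moved = move_controlled_points(points, W, dc)
print(moved[0])  # point 0 receives 0.75 of control 0's displacement
```

Because the coefficient matrix is fixed under the offline mode, each animation frame costs only this one matrix product per functional area.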
Compared with the prior art, the invention has the following advantages:
Because the invention divides the model into functional areas, the system can separately simulate the motion of each relatively independent functional area; because the invention introduces passive control points, the system can simulate the mutual influence of motion between areas; and further subdivision of the functional areas allows finer local simulation of facial motion. Because DFFD is a local deformation technique that performs well in simulating the deformation of continuous surfaces, using DFFD to simulate the motion of the facial skin yields high realism, and under the real-time computation mode the realism can be improved further. Thus, in the realism of the resulting animation, the present invention, with its smaller computational load, is comparable to the prior art with its much larger load.
The computational cost of DFFD grows nonlinearly and faster than linearly with the problem scale, so reducing the scale greatly reduces the cost. Because the invention divides the mesh model into functional areas, the scale of each DFFD computation is reduced, greatly reducing the system's computational load and improving its real-time performance. Moreover, under the offline computation mode, the DFFD computation is performed only once per functional area, in advance, further reducing the load, so the method suits hardware with weak computing power. Compared with the prior art, the invention therefore greatly reduces the computational load and has good real-time performance.
Because the number of functional areas of the mesh model is small, the workload of partitioning is small; during partitioning, the degree of subdivision can also be chosen according to the scale of the mesh model, either to reduce the computational load or to refine the motion simulation, so the partitioning has a certain flexibility. Because facial motion can be simulated well with few control points, the number of control points and the workload of designing them are small; for different models, the relative positions of the control points within the functional areas remain roughly unchanged, giving a certain generality; and control points can be added or deleted according to the desired fineness of the motion details, giving a certain flexibility. Finally, the motion of the model is controlled only by the small number of active control points, and since some active control points hardly move while the motion of others is obtained by smooth interpolation of neighboring active control points, the number of active control points that must be driven directly is reduced still further, so motion control is comparatively simple. Compared with the prior art, the invention thus reduces the workload of model design, simplifies the motion control of the face, and gives model design and motion control a certain generality and flexibility.
For mesh models of different scales, the requirements on the realism and real-time performance of the motion simulation can be met through the choice of functional-area partitioning, control-point design, and computation mode. Compared with the prior art, the invention therefore scales well.
Figure 1 is an example of an existing three-dimensional face mesh model read from a computer storage device.
Figure 2 is an example, for an embodiment of the invention, of the partitioning of the model of Figure 1 into major functional areas and of the control-point design.
Embodiments of the invention are described below with reference to the drawings.
Embodiment 1:
This embodiment is a simple embodiment based on Figure 1.
First, an original three-dimensional face mesh model is obtained, either by scanning a real face with a three-dimensional laser scanner or by reading an existing model from a computer storage device, and the model is drawn on a display with a three-dimensional graphics package, giving Figure 1.
The original mesh model of Figure 1 is then divided into major functional areas, and the control points of each area are designed, through a computer input device, giving Figure 2. The specific design is as follows:
The mesh model drawn on the display is first divided into the following functional areas: left and right eyeballs (1), forehead (including the upper eyelids) (2), nose (3), left cheek (including the left lower eyelid) (4), right cheek (including the right lower eyelid) (5), upper mouth (6), lower mouth (including the chin) (7), upper teeth (8), and lower teeth (9); the remaining part forms a further functional area. For finer simulation of eye motion, the forehead and cheek areas can be further subdivided, separating the eyelids from their areas as independent functional areas.
The control points of each functional area are then designed. In the lower-mouth area, some of the mesh points on the lower-lip lip line, on the chin, and on the boundary with the remainder area are chosen as active control points. In the upper-mouth area, the mesh points at the corners of the mouth are chosen as passive control points, and some of the mesh points on the lip line as active control points. In the left and right cheek areas, some of the mesh points on the lower-eyelid contours and the points near the cheekbones are chosen as active control points, and some of the mesh points on the boundaries with the upper- and lower-mouth areas as passive control points. In the nose area, some of the mesh points on the bridge, tip, and wings of the nose are chosen as active control points, and some of the mesh points on the boundaries with the other areas as passive control points. In the forehead area, some of the mesh points on the upper-eyelid contours, near the point between the eyebrows, and on the brow ridge, together with some of the mesh points on the boundary with the remainder area, are chosen as active control points, and the mesh points at the corners of the eyes as passive control points. For clarity, Figure 2 of this embodiment uses hollow circles, hollow squares, and hollow diamonds for the active control points, hollow triangles for the passive control points, and filled circles for the controlled points.
Because the numbers of functional areas and control points are small, the workload of this model design is small.
The DFFD algorithm is then used to compute, for each functional area separately, the control coefficient of each control point with respect to each controlled point. Because each functional area contains few control points and mesh points, the computational load is small.
Finally, the active control points are driven by hardware or software methods, the displacements of all control points and controlled points are computed, and the points are moved according to the result, simulating facial motion.
Among the active control points, some hardly move, such as the active control points on the boundary between the forehead area and the remainder area, and those on the boundary between the lower-mouth area and the remainder area; these are the points drawn as hollow squares in Figure 2.
The motion of some other active control points can be computed by smooth interpolation of the motion of related control points. For example, the motion of some of the active control points on the lip line of the upper-mouth area can be obtained by Bezier interpolation of the motion of the few control points at the corners and the middle of the mouth; some of the active control points on the lower-lip lip line and on the upper- and lower-eyelid contours are handled similarly. These points are drawn as hollow diamonds in Figure 2.
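The Bezier interpolation mentioned above can be sketched as follows. The control layout (the two mouth-corner displacements as endpoints and two mid-lip displacements as inner control points of a cubic curve) and the parameter values are hypothetical illustrations, not specified by the patent.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1],
    componentwise over 3-D tuples."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Hypothetical layout: the displacements of the left corner, two mid-lip
# control points, and the right corner define the curve; intermediate
# lip-line active control points take their displacements from it.
left = (0.0, 0.0, 0.0)
mid1 = (0.0, 1.0, 0.0)
mid2 = (0.0, 1.0, 0.0)
right = (0.0, 0.0, 0.0)
lip_displacements = [cubic_bezier(left, mid1, mid2, right, t)
                     for t in (0.25, 0.5, 0.75)]
print(lip_displacements[1])  # largest displacement at the lip middle
```

The interpolated points then need no direct driving, which is how the embodiment keeps the number of directly driven active control points small.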
When simulating facial motion, the two classes of active control points above need not be considered; the system need only drive the remaining active control points, drawn as hollow circles in Figure 2, to control the motion of the mesh model, so motion control is comparatively simple.
If the control points are driven by mouse dragging, the concrete operation is: drag the active control points to be driven to the desired positions with the mouse; compute the displacements of the related mesh points from the displacements of these active control points; and move those mesh points, so that the face model changes from one shape to another, producing the effect of facial motion.
If the active control points are driven by reading a disk file, the concrete operation is: write the displacements of the active control points to be driven into a disk file in advance, with the displacements at each instant written as one frame of data; read the frames from the file continuously, moving the corresponding active control points according to each frame's data to drive the face model; processing frame after frame drives continuous motion of the model, producing the effect of facial animation.
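The frame-file driving scheme above can be sketched as follows. The file format (one frame per line, each line a whitespace-separated flat list of dx dy dz triples, one triple per driven active control point) is an assumption for illustration, since the patent does not specify one.

```python
def read_frames(path):
    """Read control-point displacement frames from a text file.
    Assumed format: one frame per line; each line holds dx dy dz
    triples, one triple per driven active control point."""
    frames = []
    with open(path) as f:
        for line in f:
            vals = [float(v) for v in line.split()]
            frames.append([tuple(vals[i:i + 3])
                           for i in range(0, len(vals), 3)])
    return frames

def play(path, apply_displacements):
    """Drive the model frame by frame; apply_displacements stands for
    whatever routine moves the active control points and recomputes
    the controlled points of the mesh."""
    for frame in read_frames(path):
        apply_displacements(frame)

# Example: write and read back two frames of two control points each.
with open("frames.txt", "w") as f:
    f.write("0 0 0  0 1 0\n")
    f.write("0 0.5 0  0 0.5 0\n")
frames = read_frames("frames.txt")
print(len(frames), frames[0][1])
```

In a real player, `play` would pace the frames at the animation rate; timing is omitted here to keep the sketch minimal.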
The active control points can also be driven by the Microsoft Text-to-Speech Engine. The concrete operation is: feed a passage of text to the engine, which produces its pronunciation along with the mouth-shape parameters corresponding to each phoneme; receive these parameters continuously from the engine, convert them according to their interpretation rules into displacements of the mouth active control points of the face model, and move the corresponding points to drive the motion of the mouth; processing the parameters continuously drives continuous motion of the model, producing the effect of a talking face.
Compared with the prior art, the facial animation obtained in this embodiment shows that, with the realism of facial motion preserved, the small numbers of functional areas and control points yielded by the method greatly reduce the workload of model design; because each functional area contains few mesh points and control points, the computational load is also much lower; and because few active control points must be driven directly, the control of model motion is simpler than in the prior art, greatly reducing the cost of three-dimensional facial animation and making it easier to put into practice.

Claims (4)

1. A method for producing three-dimensional facial animation, characterized in that: first, the data of a three-dimensional face mesh model are acquired and the model is drawn on a display with a three-dimensional graphics package; the mesh model is then divided into several functional areas corresponding to the main moving organs of the face; in regions of a functional area with large motion amplitude, or in regions that hardly move, control points governing the motion of that area, called active control points, are designed; according to the mutual influence of motion between facial organs, i.e., their associated motions, control points governing the associated motion between functional areas, called passive control points, are designed in the corresponding regions of the mesh model, each serving both as a controlled point of the area that exerts the motion influence and as a control point of the area that receives it; except for the eyeball and teeth functional areas, whose motion can be simulated as simple rigid motion, the DFFD algorithm computes, for each functional area separately, the control coefficient of each control point with respect to each controlled point; finally, moving the control points drives the motion of the mesh model, realistically simulating facial motion: that is, the control points are moved first, the displacement of each controlled point is computed from the displacements of the control points and their control coefficients for that controlled point, and the controlled points are moved according to the result, driving the mesh model; the concrete computation of a controlled point's displacement is: multiply the displacement of each control point by its control coefficient for the controlled point, as that control point's motion influence on the controlled point; sum all the motion influences acting on the controlled point to obtain its displacement.
2. The method for producing three-dimensional facial animation according to claim 1, characterized in that, in the division of the mesh model into functional areas, some of the functional areas into which the face model is divided are further subdivided.
3. The method for producing three-dimensional facial animation according to claim 1, characterized in that the DFFD computation of the control coefficients of the control points with respect to the controlled points in each functional area is performed offline: the control coefficients are computed in advance and held constant during the motion of the mesh model.
4. The method for producing three-dimensional facial animation according to claim 1, characterized in that the DFFD computation of the control coefficients of the control points with respect to the controlled points in each functional area is performed in real time: the control coefficients are recomputed whenever the spatial positions of the control points and controlled points of a functional area change during the motion of the mesh model.
CNB011135867A 2001-04-25 2001-04-25 Method for making 3D human face animation Expired - Fee Related CN1174346C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB011135867A CN1174346C (en) 2001-04-25 2001-04-25 Method for making 3D human face animation


Publications (2)

Publication Number Publication Date
CN1383102A CN1383102A (en) 2002-12-04
CN1174346C true CN1174346C (en) 2004-11-03

Family

ID=4660302

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB011135867A Expired - Fee Related CN1174346C (en) 2001-04-25 2001-04-25 Method for making 3D human face animation

Country Status (1)

Country Link
CN (1) CN1174346C (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1313979C (en) * 2002-05-03 2007-05-02 三星电子株式会社 Apparatus and method for generating 3-D cartoon
US7333112B2 (en) * 2003-05-14 2008-02-19 Pixar Rig baking
US7224356B2 (en) * 2004-06-08 2007-05-29 Microsoft Corporation Stretch-driven mesh parameterization using spectral analysis
CN100399360C (en) * 2006-08-22 2008-07-02 中国科学院计算技术研究所 Lattice simplified restrain method of three-dimensional human model
CN101324961B (en) * 2008-07-25 2011-07-13 上海久游网络科技有限公司 Human face portion three-dimensional picture pasting method in computer virtual world
CN101789135B (en) * 2009-01-23 2012-05-30 广州市设计院 Portrait presenting method based on historical photograph/picture
CN102075693A (en) * 2009-11-25 2011-05-25 新奥特(北京)视频技术有限公司 Subtitle editing system and plug-in
CN104077798B (en) * 2014-07-01 2017-05-03 中国科学技术大学 High-reality-sense animation synthesis method for deformable object
US10586368B2 (en) * 2017-10-26 2020-03-10 Snap Inc. Joint audio-video facial animation system
CN110689604B (en) * 2019-05-10 2023-03-10 腾讯科技(深圳)有限公司 Personalized face model display method, device, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10761721B2 (en) 2013-02-23 2020-09-01 Qualcomm Incorporated Systems and methods for interactive image caricaturing by an electronic device
US11526272B2 (en) 2013-02-23 2022-12-13 Qualcomm Incorporated Systems and methods for interactive image caricaturing by an electronic device

Also Published As

Publication number Publication date
CN1383102A (en) 2002-12-04

Similar Documents

Publication Publication Date Title
Choe et al. Performance‐driven muscle‐based facial animation
Sifakis et al. Simulating speech with a physics-based facial muscle model
Waters A muscle model for animating three-dimensional facial expression
US7068277B2 (en) System and method for animating a digital facial model
CN1174346C (en) Method for making 3D human face animation
Turner et al. The elastic surface layer model for animated character construction
US7872654B2 (en) Animating hair using pose controllers
CN103208133A (en) Method for adjusting face plumpness in image
US7983882B1 (en) Joint wrinkle and muscle movement simulating software
Wilhelms Animals with anatomy
King et al. A 3D parametric tongue model for animated speech
CN110443872B (en) Expression synthesis method with dynamic texture details
Wang et al. Langwidere: A new facial animation system
Guenter A system for simulating human facial expression
King A facial model and animation techniques for animated speech
Sera et al. Physics-based muscle model for mouth shape control
Wang Langwidere: a hierarchical spline based facial animation system with simulated muscles.
Thalmann et al. Human modeling and animation
Çetinaslan Position manipulation techniques for facial animation
Ma et al. Animating visible speech and facial expressions
Waters The computer synthesis of expressive three-dimensional facial character animation.
Park et al. A feature‐based approach to facial expression cloning
JPH0676044A (en) Device for reproducing and displaying expression code and processing and generating emotion and expression
KR100366210B1 (en) Human Head/Face Modeller Generation Method
Xu et al. Virtual hairy brush for digital painting and calligraphy

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee