CN102087750A - Method for manufacturing cartoon special effect - Google Patents

Method for manufacturing cartoon special effect

Info

Publication number
CN102087750A
CN102087750A
Authority
CN
China
Prior art keywords
module
making
face
point
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010102000440A
Other languages
Chinese (zh)
Inventor
陶胜
余燕
谭兆红
谢军
石猛
袁志勇
蒋立山
韩昱斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hongmeng Information Science & Technology Co Ltd Hunan
Original Assignee
Hongmeng Information Science & Technology Co Ltd Hunan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hongmeng Information Science & Technology Co Ltd Hunan filed Critical Hongmeng Information Science & Technology Co Ltd Hunan
Priority to CN2010102000440A priority Critical patent/CN102087750A/en
Publication of CN102087750A publication Critical patent/CN102087750A/en
Pending legal-status Critical Current

Abstract

The invention relates to a method for producing cartoon special effects, which mainly comprises a method for building a real-time three-dimensional water-surface rendering system, a cartoon ink-and-wash rendering method, a method for producing fluid smoke and flame, the programming of a human-face generator, the module division of a water-ripple generation plug-in, a method for extracting head-model features, and the like. The invention eliminates a great deal of repetitive, tedious work, is simple and convenient to operate and user-friendly, greatly saves labor, materials and cost, and frees animation producers from dull model design so that they can concentrate on higher-level work, thereby greatly enhancing the functionality of the original platform.

Description

Method for producing animation special effects
Technical field
The present invention relates to the field of animation production, and in particular to a method for producing animation special effects.
Background technology
At present, animation special effects are a core technology of the animation and game industries and determine the artistic appeal of creative industries such as digital entertainment and wireless 3G digital entertainment. Few domestic enterprises and research institutes study technologies related to animation special effects, which has seriously hindered the development of the creative industry. In addition, animation production involves a large amount of tedious work; it is difficult, time-consuming and labor-intensive, forcing animators into much repetitive work and causing them to lose their creative train of thought.
Summary of the invention
The technical problem solved by the present invention is to provide a method for producing animation special effects, so as to overcome the shortcomings of the background art described above.
The technical problem is solved by the following technical solution:
A method for producing animation special effects mainly comprises a method for building a real-time three-dimensional water-surface rendering system, a cartoon ink-and-wash rendering method, a method for producing fluid smoke and flame, the programming of a human-face generator, the module division of a water-ripple generation plug-in, a head-model feature extraction method, and the like. It is implemented as follows:
(1) Method for building the real-time three-dimensional water-surface rendering system:
Two two-dimensional arrays corresponding to the picture, buf1[PoolHeight][PoolWidth] and buf2[PoolHeight][PoolWidth], are defined in the program to store the amplitude of each point at the previous and current moments, where PoolHeight is the number of pixel rows and PoolWidth the number of pixel columns of the picture. In the initial state the "pool" surface is a plane and the amplitude of every point is 0 (buf1 and buf2 are initialized to 0).
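The patent only defines the two amplitude buffers; the propagation rule itself is not given. As a hedged sketch under that assumption, the classic two-buffer water-ripple update commonly used with exactly this buffer layout looks like:

```python
# Hypothetical sketch of the two-buffer ripple update suggested by the
# buf1/buf2 arrays above; the propagation rule and damping factor are
# assumptions, not taken from the source.
POOL_HEIGHT, POOL_WIDTH = 64, 64
DAMPING = 0.97  # assumed energy-loss factor per frame

def step(buf1, buf2):
    """Advance the water surface one frame.
    buf1 holds the previous amplitudes, buf2 the current ones; the new
    amplitude is written into buf1, then the two buffers swap roles."""
    for i in range(1, POOL_HEIGHT - 1):
        for j in range(1, POOL_WIDTH - 1):
            neighbours = (buf2[i-1][j] + buf2[i+1][j] +
                          buf2[i][j-1] + buf2[i][j+1])
            buf1[i][j] = neighbours / 2.0 - buf1[i][j]
            buf1[i][j] *= DAMPING
    return buf2, buf1  # swapped: new "previous" and new "current"

# Initial state: flat pool, amplitude 0 everywhere (as the text specifies).
buf1 = [[0.0] * POOL_WIDTH for _ in range(POOL_HEIGHT)]
buf2 = [[0.0] * POOL_WIDTH for _ in range(POOL_HEIGHT)]
buf2[32][32] = 100.0          # a disturbance, e.g. a raindrop
buf1, buf2 = step(buf1, buf2)
```

After one step the disturbance spreads to the four neighbouring cells while the centre returns toward zero, which is what gives the expanding-ring look.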
(2) Cartoon ink-and-wash rendering method:
Edge-lining method: triangles close to a silhouette edge are rendered black, and the original quadrilateral object model is converted into triangles, giving the object a heavy outline effect. To decide which triangles need to be colored, the direction of the camera (or of the light source) is dotted with each triangle's transformed normal vector. The result of the dot product indicates how close a triangle is to a silhouette edge, since such an edge must be the common edge of a front-facing triangle and a back-facing triangle. The smaller the absolute value of the dot product of a triangle's normal and the view direction, the closer the triangle is to a silhouette edge; a negative dot product means the triangle faces away from the viewer.
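The silhouette test above can be sketched in a few lines; the threshold value and the vector helpers are illustrative assumptions, since the source gives only the dot-product criterion:

```python
# Minimal sketch of the silhouette test described above; the threshold
# and the example vectors are assumptions for illustration.
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def classify(normal, view_dir, threshold=0.2):
    """Return 'edge' if the triangle is near a silhouette,
    'back' if it faces away from the viewer, else 'front'."""
    d = dot(normal, view_dir)
    if abs(d) < threshold:   # nearly perpendicular to the view: near an edge
        return 'edge'
    return 'back' if d < 0 else 'front'

print(classify((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # facing the camera
print(classify((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # perpendicular: near edge
```

Triangles classified as 'edge' would be the ones rendered black by the edge-lining pass.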
Cartoon coloring method: in a normal-direction scene buffer, each pixel stores three values, namely the normalized components (nx, ny, nz) of the normal at the three-dimensional point corresponding to that pixel; this buffer is obtained by rendering the scene again from the same camera angle. In this second pass, the colors and materials of all objects are replaced by a pure white diffuse material (ambient light = black, diffuse light = white, no specular light) and all external light sources are removed. The object materials are then modified again: the diffuse color is copied into the ambient color, the diffuse color is set to black, and the original specular color is kept.
(3) Method for producing fluid smoke and flame:
The physical model is simplified, and the stable-fluids method and semi-Lagrangian method of Stam are used to approximately solve the Navier-Stokes equations.
External force term: the projection performed when solving the Poisson equation affects the swirling character of the fluid, so the final external force term is:
[Equation image not reproduced in the source.]
where f is the external force at the initial moment, such as wind. This reduces the extra computation introduced by the external force and the vortical flow, so the external force term becomes:
[Equation image not reproduced in the source.]
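The two equation images above are not reproduced in the source, so their exact form is unknown. As a hedged reconstruction only: when a swirl-preserving force is paired with Stam's stable-fluids solver, the conventional choice is the standard vorticity-confinement force, with f0 the initial external force (e.g. wind):

```latex
% Assumed reconstruction (the patent's equation images are unavailable):
% the standard vorticity-confinement force commonly used with Stam's solver.
% u: velocity field, h: grid spacing, \varepsilon: confinement strength.
\boldsymbol{\omega} = \nabla \times \mathbf{u}, \qquad
\mathbf{N} = \frac{\nabla \lvert \boldsymbol{\omega} \rvert}
                  {\lVert \nabla \lvert \boldsymbol{\omega} \rvert \rVert}, \qquad
\mathbf{f}_{\mathrm{conf}} = \varepsilon \, h \,
  (\mathbf{N} \times \boldsymbol{\omega}), \qquad
\mathbf{f} = \mathbf{f}_{0} + \mathbf{f}_{\mathrm{conf}}
```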
(4) Programming of the human-face generator:
This mainly comprises two parts: reading in the 3ds file, and displaying and controlling the 3ds object. The 3ds file-reading program works as follows: after a model has been built in 3DS MAX, it is saved as a .3DS file in triangular-mesh form; the program defines a data structure corresponding to the model, reads the file data, and then draws it as in 3DS MAX; finally the geometric data are exported to a file and read back from it.
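The .3DS format is a sequence of chunks, each with a 2-byte ID and a 4-byte length; reading the triangular mesh means walking these chunks and decoding the vertex list. The following is an illustrative reader, not the patent's actual program; the chunk ID 0x4110 is from the publicly known 3DS format:

```python
# Hedged sketch of reading 3DS triangular-mesh data: walk the chunk stream
# and decode a vertex-list (0x4110) chunk. The synthetic buffer stands in
# for a real .3DS file so the sketch is self-contained.
import struct

CHUNK_VERTEX_LIST = 0x4110  # list of (x, y, z) 32-bit floats

def read_vertex_list(payload):
    """Parse a 0x4110 payload: a 2-byte count, then count * 3 floats."""
    (count,) = struct.unpack_from('<H', payload, 0)
    verts = []
    for k in range(count):
        verts.append(struct.unpack_from('<3f', payload, 2 + 12 * k))
    return verts

def walk_chunks(buf):
    """Yield (chunk_id, payload) for every top-level chunk in buf;
    each chunk header is a little-endian u16 id plus u32 total length."""
    pos = 0
    while pos + 6 <= len(buf):
        cid, length = struct.unpack_from('<HI', buf, pos)
        yield cid, buf[pos + 6 : pos + length]
        pos += length

# Build a tiny synthetic vertex-list chunk: one triangle (3 vertices).
payload = struct.pack('<H', 3) + struct.pack('<9f', 0, 0, 0, 1, 0, 0, 0, 1, 0)
chunk = struct.pack('<HI', CHUNK_VERTEX_LIST, 6 + len(payload)) + payload

for cid, body in walk_chunks(chunk):
    if cid == CHUNK_VERTEX_LIST:
        print(read_vertex_list(body))   # three (x, y, z) tuples
```

A real reader would also handle the enclosing main (0x4D4D), editor (0x3D3D), object (0x4000) and mesh (0x4100) chunks before reaching the vertex list.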
(5) Module division of the water-ripple generation plug-in
Program execution module: this module creates a rain expression that simulates the effect of a randomly flowing water surface. The expression uses the machine time as the random-number seed; since the time changes continuously, good randomness is obtained. The expression first sets the size of the two-dimensional scene plane and the falling frequency, size and number of the virtual raindrops, and then passes these parameters to the fluidTextureName object for later modules to call.
Water-surface moving-texture creation module: this module creates a 3D moving texture and performs a 2D simulation. Using the returned node name, it connects the complementary node attributes, sets the attributes of the liquid surface, and finally returns the liquid-surface class for later modules to call.
Main program module: this module first checks the scene type that has been set; the default type is oceanshape. It first checks the selected object, returns the names of the objects in the scene, and obtains the entity type in the current scene through an if-else structure; it then passes the obtained moving texture to the execution function module as a parameter. The execution function module performs the ripple-effect simulation with the parameters it is called with.
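The three-module structure above can be sketched as follows. This is a Python stand-in (an assumption: the actual plug-in appears to target a Maya-style scripting environment), and names like make_rain_expression and the parameter values are illustrative:

```python
# Structural sketch of the plug-in's module division described above.
# All function names and parameter values are hypothetical illustrations.
import time, random

def make_rain_expression(plane_size, drop_freq, drop_size, drop_count):
    """Execution module: build the rain-expression parameter set,
    seeding the RNG with the machine time as the text describes."""
    random.seed(time.time())
    return {'plane_size': plane_size, 'freq': drop_freq,
            'size': drop_size, 'count': drop_count,
            'drops': [(random.random(), random.random())
                      for _ in range(drop_count)]}

def create_moving_texture(params):
    """Texture module: stand-in for creating the 3D moving texture and
    returning the liquid-surface object for later modules to call."""
    return {'type': 'fluidTexture', 'params': params}

def main_module(scene_type='oceanshape', selected=('waterPlane1',)):
    """Main module: dispatch on the scene type via an if-else structure
    and hand the moving texture to the execution step."""
    if scene_type == 'oceanshape':
        params = make_rain_expression(plane_size=10.0, drop_freq=2.0,
                                      drop_size=0.1, drop_count=50)
    else:
        params = make_rain_expression(plane_size=5.0, drop_freq=1.0,
                                      drop_size=0.05, drop_count=10)
    texture = create_moving_texture(params)
    return selected, texture

names, tex = main_module()
print(tex['type'])   # the stand-in liquid-surface object
```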
(6) Head-model feature extraction method:
Using 3DS MAX, for a typical human face, seven feature points A-G are chosen, where A, B, C and D are the four eye-corner points, E is the nose tip, and F and G are the mouth-corner points. Compared with other feature points, these seven are easier to extract from an image and to measure. By extracting these feature points, the regions of the eyes, nose and mouth are determined, the relevant facial feature parameters are derived, and a database is built.
Beneficial effects:
The present invention not only eliminates a great deal of repetitive, tedious work, but is also very easy to operate and user-friendly. It greatly saves manpower, materials and money, and frees animators from dull model design so that they can concentrate on higher-level work, thereby greatly enhancing the functionality of the original platform.
Description of drawings
Fig. 1 is a schematic diagram of the feature-point selection and search range for head-model feature extraction;
Fig. 2 is a schematic diagram of the feature-point positioning for head-model feature extraction.
Embodiment
To make the technical means, creative features, objectives and effects achieved by the present invention easy to understand, the invention is further described below with reference to the specific figures.
Referring to Fig. 1 (feature-point selection and search range for head-model feature extraction) and Fig. 2 (feature-point positioning for head-model feature extraction), this embodiment implements the head-model feature extraction method.
In this embodiment, feature extraction mainly comprises the following five steps:
(1) Selection of the feature points
As shown in Fig. 1, for a typical human face, seven feature points A-G are chosen: in the figure, A, B, C and D are the four eye-corner points, E is the nose tip, and F and G are the mouth-corner points. Compared with other feature points, these seven are easy to extract from an image and to measure. By extracting these feature points, the regions of the eyes, nose and mouth are determined, the relevant facial feature parameters are derived, and a database is built.
(2) Determining the search range of the feature points
As shown in Fig. 1, the search range is set to a rectangular region whose length is the distance from the top of the face to the chin point and whose width is the width of the face. The top of the face is the highest point where the face meets the hair, i.e. the highest point of the edge; similarly, the chin point is the lowest point of the edge. Because the 3DS MAX head model is symmetric about its middle, in most cases both the highest and the lowest points lie on the line of symmetry (CEM), and the influence of beard, illumination and shadow is small, so the face length can be obtained from the top and chin points. The intersection of the bottom edge line of the nose with the line of symmetry is then detected. Starting from the left end point of this edge line (point E) plus a certain offset, the search proceeds from the inside outward within the range of the head contour; the first boundary point found is a point on the cheek contour line. The width of the face contour at the height of the nose is taken as the face width, which determines the rectangular region.
(3) Locating the feature points
As shown in Figs. 1 and 2, with the bottom edge line of the nose and the left and right cheek edge lines available, the position E of the nose tip and the face width can be obtained. According to the proportions of the human face, the left and right eyes lie at about one third of the face length, and the centers of the two eyes lie at about one quarter of the face width from each side. Searching the contour map at an eye center, the edge segment found is the contour of that eye; the leftmost point D and the rightmost point C of this segment are the left and right corners of the eye, i.e. the outer and inner corners of the left eye. Similarly, the outer corner A and inner corner B of the right eye are obtained. For the left and right mouth corners: since the mouth lies below the nose, the position of the lips is found by searching directly downward from the nose. Likewise, the leftmost point G and the rightmost point F of the lip segment are the left and right mouth corners, which completes the feature-point extraction.
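The proportional rules above (eyes at about one third of the face length, eye centers at about one quarter of the face width from each side) translate into simple arithmetic; the rectangle coordinates below are illustrative assumptions:

```python
# Hedged sketch of the proportional eye-location rules described above.
def eye_search_centers(top, chin, face_width, symmetry_x):
    """Return (left_eye_center, right_eye_center) as (x, y) start points
    for the eye-contour search, given the face rectangle."""
    face_length = chin - top
    eye_y = top + face_length / 3.0          # ~1/3 of face length from the top
    left_x = symmetry_x - face_width / 4.0   # 1/4 of face width left of center
    right_x = symmetry_x + face_width / 4.0  # 1/4 of face width right of center
    return (left_x, eye_y), (right_x, eye_y)

left, right = eye_search_centers(top=0.0, chin=300.0,
                                 face_width=200.0, symmetry_x=100.0)
print(left, right)   # (50.0, 100.0) (150.0, 100.0)
```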
(4) Feature extraction for the eyes and mouth
On the basis of the feature points obtained above, a rectangular coordinate system is set up on the face contour map. The eyes are determined by the four points A, B, C and D; for each eye the inner- and outer-corner coordinates E_in(i, j) and E_out(m, n) are obtained, and the distance between the inner and outer corner points is the eye width, L_eye = |E_in(i, j) - E_out(m, n)|. The eye width obtained from the corner points becomes one data item in the database. Treating the mouth similarly, for the acquired mouth feature points F and G the coordinates of the left and right mouth corners, M_left(i, j) and M_right(m, n), are obtained, and the distance L_mouth = |M_left(i, j) - M_right(m, n)| becomes another data item in the database.
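The two distance features can be computed directly; the point coordinates below are illustrative, and the |.| notation is taken to mean Euclidean distance:

```python
# Small sketch of the eye-width and mouth-width features described above.
import math

def width(p_in, p_out):
    """Euclidean distance between two corner points (i, j) and (m, n)."""
    return math.hypot(p_in[0] - p_out[0], p_in[1] - p_out[1])

L_eye = width((60, 100), (90, 100))     # inner and outer eye corners
L_mouth = width((70, 220), (130, 220))  # left and right mouth corners
print(L_eye, L_mouth)                   # 30.0 60.0
```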
(5) Feature extraction of the lower-jaw contour line
In a frontal face image, the lower-jaw contour is a relatively stable shape feature; in particular, the lower part of the face is affected very little by expression, and the jaw contour carries most of the face-shape information. Faces are divided into round, pointed and square types, and jaw-shape templates are built accordingly: pointed chin, round chin and flat chin. The points on the jaw contour obtained in advance are matched against the templates, and the contour is classified according to the matching result. Tests show that this classification works well and can effectively improve the speed and recognition rate of the database.
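The template-matching step above can be sketched as a nearest-template classifier. The templates, the sample contour, and the matching score (mean squared difference of sampled contour heights) are all assumed illustrations; the source does not specify the matching metric:

```python
# Hedged sketch of classifying a jaw contour against shape templates.
def match_score(contour, template):
    """Mean squared difference between equally sampled contour heights."""
    return sum((c - t) ** 2 for c, t in zip(contour, template)) / len(template)

def classify_jaw(contour, templates):
    """Return the name of the best-matching jaw-shape template."""
    return min(templates, key=lambda name: match_score(contour, templates[name]))

# y-offsets of 5 sampled jaw points relative to the chin (illustrative)
templates = {
    'pointed': [8, 5, 0, 5, 8],
    'round':   [4, 2, 0, 2, 4],
    'flat':    [1, 1, 0, 1, 1],
}
print(classify_jaw([5, 2, 0, 2, 5], templates))   # closest to 'round'
```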
The above shows and describes the basic principle, principal features and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention. Various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention, which is defined by the appended claims and their equivalents.

Claims (1)

1. A method for producing animation special effects, mainly comprising a method for building a real-time three-dimensional water-surface rendering system, a cartoon ink-and-wash rendering method, a method for producing fluid smoke and flame, the programming of a human-face generator, the module division of a water-ripple generation plug-in, and a head-model feature extraction method, characterized in that it is implemented as follows:
(1) Method for building the real-time three-dimensional water-surface rendering system:
Two two-dimensional arrays corresponding to the picture, buf1[PoolHeight][PoolWidth] and buf2[PoolHeight][PoolWidth], are defined in the program to store the amplitude of each point at the previous and current moments, where PoolHeight is the number of pixel rows and PoolWidth the number of pixel columns of the picture; in the initial state the "pool" surface is a plane and the amplitude of every point is 0 (buf1 and buf2 are initialized to 0);
(2) Cartoon ink-and-wash rendering method:
Edge-lining method: triangles close to a silhouette edge are rendered black, and the original quadrilateral object model is converted into triangles, giving the object a heavy outline effect; to decide which triangles need to be colored, the direction of the camera or light source is dotted with each triangle's transformed normal vector; the result of the dot product indicates how close a triangle is to a silhouette edge, since such an edge must be the common edge of a front-facing triangle and a back-facing triangle; the smaller the absolute value of the dot product of a triangle's normal and the view direction, the closer the triangle is to a silhouette edge, and a negative dot product means the triangle faces away from the viewer;
Cartoon coloring method: in a normal-direction scene buffer, each pixel stores three values, namely the normalized components (nx, ny, nz) of the normal at the three-dimensional point corresponding to that pixel; this buffer is obtained by rendering the scene again from the same camera angle; in this second pass the colors and materials of all objects are replaced by a pure white diffuse material (ambient light = black, diffuse light = white, no specular light) and all external light sources are removed; the object materials are then modified again by copying the diffuse color into the ambient color, setting the diffuse color to black, and keeping the original specular color;
(3) Method for producing fluid smoke and flame:
The physical model is simplified, and the stable-fluids method and semi-Lagrangian method of Stam are used to approximately solve the Navier-Stokes equations;
(4) Programming of the human-face generator:
This mainly comprises two parts: reading in the 3ds file, and displaying and controlling the 3ds object; the 3ds file-reading program works as follows: after a model has been built in 3DS MAX, it is saved as a .3DS file in triangular-mesh form; the program defines a data structure corresponding to the model, reads the file data, and draws it as in 3DS MAX; finally the geometric data are exported to a file and read back from it;
(5) Module division of the water-ripple generation plug-in:
Program execution module: this module creates a rain expression that simulates the effect of a randomly flowing water surface; the expression uses the machine time as the random-number seed, and since the time changes continuously, good randomness is obtained; the expression first sets the size of the two-dimensional scene plane and the falling frequency, size and number of the virtual raindrops, then passes these parameters to the fluidTextureName object for later modules to call;
Water-surface moving-texture creation module: this module creates a 3D moving texture and performs a 2D simulation; using the returned node name, it connects the complementary node attributes, sets the attributes of the liquid surface, and finally returns the liquid-surface class for later modules to call;
Main program module: this module first checks the scene type that has been set, the default type being oceanshape; it checks the selected object, returns the names of the objects in the scene, obtains the entity type in the current scene through an if-else structure, and then passes the obtained moving texture to the execution function module as a parameter; the execution function module performs the ripple-effect simulation with the parameters it is called with;
(6) Head-model feature extraction method:
Using 3DS MAX, for a typical human face, seven feature points A-G are chosen, where A, B, C and D are the four eye-corner points, E is the nose tip, and F and G are the mouth-corner points; compared with other feature points, these seven are easier to extract from an image and to measure; by extracting these feature points, the regions of the eyes, nose and mouth are determined, the relevant facial feature parameters are derived, and a database is built.
CN2010102000440A 2010-06-13 2010-06-13 Method for manufacturing cartoon special effect Pending CN102087750A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102000440A CN102087750A (en) 2010-06-13 2010-06-13 Method for manufacturing cartoon special effect


Publications (1)

Publication Number Publication Date
CN102087750A true CN102087750A (en) 2011-06-08

Family

ID=44099537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102000440A Pending CN102087750A (en) 2010-06-13 2010-06-13 Method for manufacturing cartoon special effect

Country Status (1)

Country Link
CN (1) CN102087750A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020171648A1 (en) * 2001-05-17 2002-11-21 Satoru Inoue Image processing device and method for generating three-dimensional character image and recording medium for storing image processing program
CN1750046A (en) * 2005-10-20 2006-03-22 浙江大学 Three-dimensional ink and wash effect rendering method based on graphic processor
CN101038675A (en) * 2006-03-16 2007-09-19 腾讯科技(深圳)有限公司 Method and apparatus for implementing wash painting style
WO2007118919A1 (en) * 2006-04-19 2007-10-25 Emotique, S.L. Method for generating synthetic-animation images
CN101673409A (en) * 2009-09-11 2010-03-17 广州市八丁动漫网络科技有限公司 Image rendering method applied to computer screen


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Lei et al.: "Interactive real-time water surface rendering", Application Research of Computers *
Zhang Haisong et al.: "Real-time rendering of 3D Chinese-painting effects", Journal of Computer-Aided Design & Computer Graphics *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9398262B2 (en) 2011-12-29 2016-07-19 Intel Corporation Communication using avatar
WO2013097139A1 (en) * 2011-12-29 2013-07-04 Intel Corporation Communication using avatar
US11303850B2 (en) 2012-04-09 2022-04-12 Intel Corporation Communication using interactive avatars
US9386268B2 (en) 2012-04-09 2016-07-05 Intel Corporation Communication using interactive avatars
US9357174B2 (en) 2012-04-09 2016-05-31 Intel Corporation System and method for avatar management and selection
US11595617B2 (en) 2012-04-09 2023-02-28 Intel Corporation Communication using interactive avatars
CN103116403A (en) * 2013-02-16 2013-05-22 广东欧珀移动通信有限公司 Screen switching method and mobile intelligent terminal
US9460541B2 (en) 2013-03-29 2016-10-04 Intel Corporation Avatar animation, social networking and touch screen applications
US11295502B2 (en) 2014-12-23 2022-04-05 Intel Corporation Augmented facial animation
US11887231B2 (en) 2015-12-18 2024-01-30 Tahoe Research, Ltd. Avatar animation system
CN108399654A (en) * 2018-02-06 2018-08-14 北京市商汤科技开发有限公司 It retouches in the generation of special efficacy program file packet and special efficacy generation method and device when retouching
US11640683B2 (en) 2018-02-06 2023-05-02 Beijing Sensetime Technology Development Co., Ltd. Stroke special effect program file package generating method and apparatus, and stroke special effect generating method and apparatus
CN113628328A (en) * 2021-08-12 2021-11-09 深圳须弥云图空间科技有限公司 Model rendering method and device for abutted seam component

Similar Documents

Publication Publication Date Title
CN102087750A (en) Method for manufacturing cartoon special effect
CN104008569B (en) A kind of 3D scene generating method based on deep video
CN109523603B (en) Drawing method and device based on chap style, terminal equipment and storage medium
CN101324961B (en) Human face portion three-dimensional picture pasting method in computer virtual world
CN204831219U (en) Handheld three -dimensional scanning device and mobile terminal
CN101339669A (en) Three-dimensional human face modelling approach based on front side image
CN102419868A (en) Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
CN202662016U (en) Real-time virtual fitting device
CN101930618B (en) Method for producing individual two-dimensional anime
CN103606186A (en) Virtual hair style modeling method of images and videos
CN103606190A (en) Method for automatically converting single face front photo into three-dimensional (3D) face model
CN105068748A (en) User interface interaction method in camera real-time picture of intelligent touch screen equipment
CN103065360A (en) Generation method and generation system of hair style effect pictures
CN103258343A (en) Eye image processing method based on image editing
CN104182970A (en) Souvenir photo portrait position recommendation method based on photography composition rule
CN105809733A (en) SketchUp-based campus three-dimensional hand-drawn map construction method
CN105574914A (en) Manufacturing device and manufacturing method of 3D dynamic scene
CN103258346A (en) Three-dimension shooting and printing system
CN107452049A (en) A kind of three-dimensional head modeling method and device
CN103218846A (en) Ink painting simulation method of three-dimensional tree model
CN107798726A (en) The preparation method and device of 3-D cartoon
CN107469355A (en) Game image creation method and device, terminal device
CN110188600A (en) A kind of drawing evaluation method, system and storage medium
Tresset et al. Generative portrait sketching
CN103679794B (en) The method for drafting of the three-dimensional sketch pencil drawing of simulation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20110608

C20 Patent right or utility model deemed to be abandoned or is abandoned