WO2002013144A1 - 3d facial modeling system and modeling method - Google Patents

3d facial modeling system and modeling method

Info

Publication number
WO2002013144A1
WO2002013144A1 (PCT/KR2001/000440)
Authority
WO
WIPO (PCT)
Prior art keywords
standard model
face
modeled object
texture
front image
Prior art date
Application number
PCT/KR2001/000440
Other languages
French (fr)
Inventor
Doo-Won Lee
Do-Im Kang
Suk-Min Song
Original Assignee
Ncubic Corp.
Kim, Jae-Sung
Priority date
Filing date
Publication date
Application filed by Ncubic Corp. and Kim, Jae-Sung
Priority to AU2001244763A1
Priority to JP2002518427A (published as JP2004506276A)
Publication of WO2002013144A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/028 Multiple view windows (top-side-front-sagittal-orthogonal)


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

A three-dimensional face modeling is capable of recreating a single front face image as an exact three-dimensional stereoscopic image by letting a user retouch the front, side and rear faces on a displayed screen. The method includes the steps of: overlaying a front face image of a modeled object on a screen displaying a standard model of a virtual three-dimensional face represented in the form of a polygon mesh; transforming the standard model to identify the overlaid standard model with the front face of the modeled object; generating a front face texture based on the transformed standard model; editing a hair style by extracting pixels matching the complexion and hair from the generated front face texture and applying the extracted pixels to side and rear face textures; and reproducing a three-dimensional face model by mapping the textures onto the transformed standard model.

Description

3D FACIAL MODELING SYSTEM AND MODELING METHOD
BACKGROUND OF THE INVENTION
Technical Field
The present invention relates to three-dimensional face modeling for converting a single front face image into a three-dimensional stereoscopic image of the entire face, and more particularly, to three-dimensional face modeling in which a user can retouch the front, side and rear images of the face on a displayed screen to reproduce an exact three-dimensional stereoscopic image.
Background Art
The face is the most important portion when modeling a human body in three dimensions. It occupies the most important position for representing the identity of a person, and includes a number of important characteristic elements in a small area; even a slight change in such characteristic elements causes the entire face to look different.
Also, when the three-dimensional face is coupled with additional three-dimensional elements such as the rest of the human body, and an animation effect is given to the coupled image, there can appear an avatar having the actual identity of a user in a virtual reality (VR) environment. In other words, in order to realize a VR environment with the real avatar of the user, three-dimensional face modeling of the user should be realized first. In general, there are two kinds of three-dimensional face modeling methods: a method utilizing three-dimensional scanning equipment, and a method in which stereo vision is used to combine two-dimensional information corresponding to a common portion of pictures photographed at different angles and to model a three-dimensional image from the combined two-dimensional information.
The three-dimensional face modeling method utilizing the scanning equipment can perfectly restore not only a geometric model but also texture information such as complexion. However, the equipment is expensive, and thus this method is rarely available to general consumers.
Further, the method of using stereo vision to reconstruct two-dimensional information into three-dimensional information requires that the pictures be photographed at various angles so as to share common areas.
Disclosure of the Invention
It is an object of the invention to provide a three-dimensional face modeling system capable of modeling a human face as a three-dimensional image with only one face picture photographed from the front.
It is another object of the invention to provide a three-dimensional face modeling method capable of modeling a human face as a three-dimensional image with only one face picture photographed from the front.
To accomplish the above objects, there is provided a three-dimensional face modeling method. The method comprises the steps of: providing a virtual three-dimensional face having a geometric configuration expressed in polygon meshes as a standard model; providing a front image of a modeled object on a screen on which the standard model is displayed such that the front image of the modeled object is overlaid on the standard model; transforming the standard model such that the overlaid front image of the modeled object identifies with the standard model; extracting pixels corresponding to the complexion and hair color of the front image of the modeled object from portions slightly inside the boundary between the front and sides of the front image, based on the transformed standard model, and generating a front face texture; applying side and rear face textures with the same complexion and hair color as the generated front face texture and editing a hair configuration; and texture-mapping the front, side and rear textures of the face onto the transformed standard model to restore a three-dimensional face configuration.
Preferably, the step of transforming the standard model comprises: a global transformation step of adjusting the head contour-adjusting lines, width-adjusting lines and length-adjusting lines of the standard model such that an outline of the standard model identifies with the front image of the modeled object, and linearly retouching the positions of all the vertexes; and a local transformation step of transforming characteristic points and characterless points of the globally transformed standard model such that a specific portion of the globally transformed standard model identifies with a specific portion of the front image of the modeled object, retouching the specific portion in detail.
Brief Description of the Drawings
Fig. 1 is a schematic block diagram of a three-dimensional face modeling system in accordance with the invention;
Fig. 2 is a flow chart for modeling a three-dimensional face in accordance with the invention;
Fig. 3 shows a virtual three-dimensional standard model displayed on a screen in accordance with an embodiment of the invention;
Fig. 4 shows an embodiment of the invention, in which a front image of a modeled object is overlaid on a standard model;
Fig. 5 shows an embodiment of the invention, in which characteristic points of the standard model are adjusted such that the standard model identifies with the front image of the modeled object;
Fig. 6 shows an embodiment of the invention, in which complexion and hair color of a front face texture are reconstructed; and
Fig. 7 shows an embodiment of the invention for editing a rear face texture.
Best Mode for Carrying out the Invention
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. Figs. 3 to 7 are pictures displayed on a screen for describing the embodiment of the present invention, in which the screen is divided into four small regions, i.e., upper left, upper right, lower left and lower right regions: in the upper left region, the standard model is transformed in the front view; in the upper right region, the standard model is transformed in the side view; in the lower left region, the textures of the side and rear faces are edited; and in the lower right region, the three-dimensionally restored face configuration is rotated stereoscopically so that the restored face can be compared with the original.
Further, as used in the invention, the term "polygon mesh" means a network of triangles composed of 500 to 1,000 points that expresses the size and configuration of the entire face.
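To make this terminology concrete, the following is a minimal sketch of how such a standard model might be represented; the class name, field layout and the use of Python/NumPy are illustrative assumptions, not the patent's actual data structures.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class StandardModel:
    # 500 to 1,000 points (x, y, z) expressing the size and configuration
    # of the entire face, as the polygon-mesh definition above states.
    vertices: np.ndarray          # shape (N, 3)
    # Triangles indexing into `vertices`: the triangular polygon meshes 10.
    triangles: np.ndarray         # shape (M, 3), integer vertex indices
    # Adjusting lines 2, 4 and 6, stored here as image-plane coordinates
    # (three horizontal width lines, three vertical length lines).
    head_contour: list = field(default_factory=list)
    width_lines: list = field(default_factory=list)
    length_lines: list = field(default_factory=list)
    # Characteristic points 8: vertex indices pinned to the eyebrows,
    # eyes, nose, mouth and ears.
    feature_points: dict = field(default_factory=dict)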
The system of three-dimensional face modeling of the invention comprises: a virtual three-dimensional model providing module 102 for providing a three-dimensional virtual face as a standard model in the form of a polygon mesh; a modeled object front image providing module 104 for providing a front face image of a modeled object; a standard model transforming module 106 for transforming the standard model such that it identifies with the modeled object after the front image of the modeled object is overlaid on the standard model; a front face texture generating module 112 for retouching the front face image of the modeled object to generate a front face texture; side and rear face texture editing modules 114 and 116 for preparing side and rear face textures with the same complexion and hair color as the front face texture; and a texture-mapping module 118 for mapping the generated front, side and rear textures to the transformed standard model.
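Viewed as software architecture, the modules above suggest an interface along the following lines. This is a hedged sketch only: the method names and signatures are assumptions standing in for modules 102 to 118, not an API given in the patent.

```python
from typing import Protocol, Tuple
import numpy as np

class FaceModelingSystem(Protocol):
    """Assumed interface mirroring the numbered modules described above."""
    def provide_standard_model(self): ...                     # module 102
    def provide_front_image(self) -> np.ndarray: ...          # module 104
    def transform_standard_model(self, model, image): ...     # module 106 (global 108, local 110)
    def generate_front_texture(self, image, model) -> np.ndarray: ...  # module 112
    def edit_side_rear_textures(self, front_tex) -> Tuple[np.ndarray, np.ndarray]: ...  # modules 114, 116
    def map_textures(self, model, front, side, rear): ...     # module 118
```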
Preferably, the virtual three-dimensional model providing module 102, as shown in Fig. 3, prepares a three-dimensional virtual standard model on the basis of an average face configuration. The standard model is provided as a geometrical stereoscopic image in which triangular polygon meshes 10 are combined to form the entire face. Further, the standard model provided on the screen includes head contour-adjusting lines 2, width-adjusting lines 4, length-adjusting lines 6 and characteristic points 8. The head contour-adjusting lines 2 act to adjust the width and length ratios of the entire face such that the standard model identifies with a front face image 1 of a modeled object. The width-adjusting lines 4, the length-adjusting lines 6 and the characteristic points 8 are used for adjusting specific regions of the face.
The modeled object front image providing module 104, as shown in Fig. 4, provides the front face image 1 of the modeled object, scanned from a front face picture of the modeled object for three-dimensional editing, to be overlaid on the screen where the standard model is on display. The adjusting lines 2, 4 and 6 and the characteristic points 8 therefore make it easy to recognize the overall size of the front image of the actual modeled object and the positional differences between its important regions.
The standard model transforming module 106 includes a global transforming module 108 and a local transforming module 110 for transforming the standard model such that it identifies with the outer contour of the front face image 1 of the modeled object.
The global transforming module 108, as shown in Fig. 5, compares the overlaid front face image 1 of the modeled object with the standard model, and adjusts the head contour-adjusting lines 2, the width-adjusting lines 4, the length-adjusting lines 6 and the like such that the standard model identifies with the front image 1 of the modeled object.
After the global transforming module 108 generally identifies the outer contour of the standard model with the front face image 1 of the modeled object, the local transforming module 110 adjusts the characteristic points 8, which are previously set to specific regions of the face of the standard model such as the eyebrows, eyes, nose, mouth and ears, such that the characteristic points 8 identify with the front face image 1 of the modeled object. Further, these characteristic points 8 move cooperatively with the variation of other characteristic points 8, thereby accompanying the variation of the characterless points that express the overall face contour.
After the front face image 1 of the modeled object is identified with the outer contour of the standard model, the front face texture generating module 112, as shown in Fig. 6, extracts several pixels corresponding to the complexion and hair color of the front face image of the modeled object from portions slightly inside the boundary between the front and side of the front face image, on the basis of the head contour-adjusting lines adjusted by the global transforming module 108, and creates a front face texture by retouching the front face image of the modeled object.
Further, the side and rear face texture editing modules 114 and 116 generate and edit side and rear face textures by using the front face texture. In the initial generation stage, the side and rear face textures have the same size as the front face texture. In configuration, the right quadrangular area is placed onto the left of the rear face texture, and the left quadrangular area is placed onto the right of the rear face texture; the right and left quadrangular areas are retouched on the basis of retouching lines at the eye positions in the front face texture. A middle area between the two is then filled through linear compensation. After the generation, the side and rear face textures are retouched by using the head contour-adjusting points 12, as shown in Fig. 7.
The texture-mapping module 118 maps the generated front, side and rear textures to the transformed standard model. The front face texture is mapped over about 120° with respect to the frontal face, in which the texture coordinates are obtained by projecting each vertex onto the x-y plane. The side and rear face textures are mapped onto those areas which are not mapped by the front face texture; their texture coordinates are obtained through cylindrical mapping, unlike the front face texture. In the cylindrical mapping, the side and rear face textures are erected cylindrically from the side to the rear about the center of the head when seen from above, and then mapped.
Hereinafter, as shown in Fig. 2, a process of three-dimensional face modeling of the invention will be described in detail.
First, a three-dimensional virtual standard model is displayed on a screen in step S202. The standard model is displayed with its front and side faces on the screen, as shown in Fig. 3. The displayed standard model is expressed by the triangular polygon meshes 10 across the entire face, thereby realizing the geometric configuration of the entire face. Further, the standard model includes the head contour-adjusting lines 2 for adjusting the width/length ratios of the face, the width-adjusting lines 4 for adjusting the positions of the eyes, nose and mouth, the length-adjusting lines 6 for adjusting the positions of both eyes and the nose, and the characteristic points 8 for schematically expressing the positions of specific regions of the face.
After the standard model is provided, as shown in Fig. 4, a front face image 1 of the modeled object, in color or black and white, which has been stored by scanning a front face picture of the modeled object, is overlaid on the screen where the standard model is displayed in step S204. The provided front face picture is thereby overlaid for comparison with the standard model composed of the polygon meshes 10.
When the front face image 1 of the modeled object is displayed on the screen together with the standard model, a user globally transforms the standard model in step S206. In the global transformation, the front face image 1 is compared with the standard model, and the head contour-adjusting lines 2, the width-adjusting lines 4 and the length-adjusting lines 6 of the standard model are adjusted so that their ratios identify with the corresponding front face image 1. The head contour-adjusting lines 2 are adjusted to the width and length ratios of the front face image 1; the three horizontal lines are adjusted so that the upper width-adjusting line identifies with the positions of the eyes, the middle width-adjusting line corresponds to the end of the nose, and the lower width-adjusting line identifies with the position of the mouth. The three vertical lines are adjusted to identify with the positions of the eyes and nose. Then the whole set of vertexes is grouped, and the positions of all the vertexes of the standard model are linearly retouched according to the variation amounts given by the positions of the transformed adjusting lines.
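One plausible reading of this linear retouching is a piecewise-linear warp driven by the adjusting lines: a vertex lying on a line moves exactly with that line, and vertices in between move proportionally. The sketch below assumes that interpretation and assumes the line positions are given in sorted order.

```python
import numpy as np

def global_retouch(vertices: np.ndarray,
                   old_lines: dict, new_lines: dict) -> np.ndarray:
    """Linearly retouch all vertex positions after the adjusting lines move.

    `old_lines`/`new_lines` hold sorted positions: "y" for the horizontal
    width-adjusting lines (plus head contour top/bottom) and "x" for the
    vertical length-adjusting lines (plus left/right contour). The dict
    format is an assumed convention, not taken from the patent.
    """
    v = vertices.copy()
    # Horizontal lines drive the y coordinate of every vertex ...
    v[:, 1] = np.interp(v[:, 1], old_lines["y"], new_lines["y"])
    # ... and vertical lines drive the x coordinate.
    v[:, 0] = np.interp(v[:, 0], old_lines["x"], new_lines["x"])
    return v
```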
After the entire face ratio of the standard model is adjusted through the global transformation, the specific regions are retouched in detail through a local transformation in step S208. The local transformation adjusts the characteristic points 8, respectively specified in the regions of the standard model, to correspond to the front face image 1. The characteristic points 8 schematically express the sizes and positions of specific regions of the face such as the face contour, eyebrows, eyes, nose, mouth and ears, and are indexed so that the surrounding characterless points move cooperatively according to the variation of the characteristic points 8. In this transformation, the front characteristic points govern movement along the x and y coordinates, and the side characteristic points govern movement along the z coordinate. In the global and local transformations, the user can selectively use a symmetric mode and an asymmetric mode. The symmetric mode is based on the fact that a human face is bilaterally symmetric: when an adjusting line or characteristic point on the right or left is moved, the corresponding adjusting line or characteristic point on the other side automatically moves to match. Alternatively, the asymmetric mode can be used to adjust the adjusting lines or the characteristic points 8 individually.
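A sketch of this local transformation, under the assumptions that the cooperative indexing is a precomputed per-vertex falloff weight around each characteristic point and that the symmetric mode mirrors displacements about the vertical midline; neither detail is fixed by the patent text.

```python
import numpy as np

def move_feature_point(vertices: np.ndarray, delta: np.ndarray,
                       weights: np.ndarray, mirror: np.ndarray,
                       symmetric: bool = True) -> np.ndarray:
    """Drag a characteristic point by `delta` (shape (3,)); surrounding
    characterless points follow according to `weights`, a falloff in
    [0, 1] per vertex around the moved point. `mirror[i]` is the index
    of vertex i's bilateral counterpart. A front-view edit would use only
    the x/y components of `delta`, a side-view edit only the z component,
    as described above."""
    out = vertices + weights[:, None] * delta     # cooperative drag
    if symmetric:
        # Symmetric mode: the mirrored characteristic point moves too,
        # with the x component of the displacement negated.
        mirrored_delta = delta * np.array([-1.0, 1.0, 1.0])
        out += weights[mirror][:, None] * mirrored_delta
    return out
```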
On the basis of the standard model transformed through the global and local transformations, the face image of the modeled object is then retouched to generate the front face texture. As shown in Fig. 6, using the positions of the adjusting lines set in the global transformation on the front image 1 of the modeled object, image processing retouches the outer areas with complexion and hair color from portions slightly inside the boundary between the front and the side, thereby generating the front face texture.
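The texture generation step can be pictured as sampling average complexion and hair colors from seed pixels just inside the front/side boundary and repainting the outer areas with them. In the sketch below the boolean masks are assumed inputs standing in for the regions derived from the adjusted contour lines.

```python
import numpy as np

def build_front_texture(image: np.ndarray,
                        skin_seed: np.ndarray, hair_seed: np.ndarray,
                        outer_skin: np.ndarray, outer_hair: np.ndarray) -> np.ndarray:
    """`image` is the front face image (H, W, 3); the other arguments are
    (H, W) boolean masks. Seed masks mark pixels slightly inside the
    front/side boundary; outer masks mark the areas to repaint."""
    tex = image.copy()
    skin_color = image[skin_seed].mean(axis=0)   # sampled average complexion
    hair_color = image[hair_seed].mean(axis=0)   # sampled average hair color
    tex[outer_skin] = skin_color                 # retouch outer face areas
    tex[outer_hair] = hair_color                 # retouch outer hair areas
    return tex
```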
After the front face texture is generated, the side and rear textures of the face are generated and edited in step S212. First, in the initial configuration, the right quadrangular area is placed onto the left of the rear face texture, and the left quadrangular area is placed onto the right of the rear face texture; the right and left quadrangular areas are retouched on the basis of the retouching lines at the eye positions in the front face texture. The middle area between both is then filled through linear compensation. For texture mapping, the user edits unnatural areas of hair in the regions connecting the front texture and the rear texture while confirming the side and rear textures of the corresponding regions.
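The rear-texture construction reads as a swap of the left and right quadrangular strips plus a linear fill of the middle; the strip width and the blend in the sketch below are illustrative assumptions.

```python
import numpy as np

def build_rear_texture(front_tex: np.ndarray) -> np.ndarray:
    """Same size as the front texture; the right quadrangular area goes
    to the left of the rear texture and vice versa, and the middle is
    filled by linearly blending between the two inner edges."""
    h, w, c = front_tex.shape
    q = w // 4                                    # assumed strip width
    rear = np.zeros_like(front_tex)
    rear[:, :q] = front_tex[:, w - q:]            # right quad -> rear left
    rear[:, w - q:] = front_tex[:, :q]            # left quad -> rear right
    left_edge = rear[:, q - 1].astype(float)      # (h, c) boundary columns
    right_edge = rear[:, w - q].astype(float)
    for i, col in enumerate(range(q, w - q)):
        t = (i + 1) / (w - 2 * q + 1)             # linear compensation
        rear[:, col] = (1 - t) * left_edge + t * right_edge
    return rear
```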
As shown in Fig. 7, the head configuration in the side and rear textures of the face is retouched with the head contour-adjusting points 12. After the head contour is determined in the side and rear faces, the head contour-adjusting points 12 are dotted along its shape. About the line composed of those points, the textures are image-processed so that the portion above the line is filled with hair color and the portion below the line is filled with complexion. In this case, the user can use the symmetric mode, in the same fashion as in the global and local transformations, to apply a variation made on one side of the side and rear textures to the other side.
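The contour-point retouch can be read as interpolating a line through the dotted points and splitting each column of the texture at that line. The sketch below assumes image rows grow downward (so "over the line" means a smaller row index) and, for simplicity, recolors whole columns.

```python
import numpy as np

def apply_contour_points(texture: np.ndarray,
                         contour_x: np.ndarray, contour_y: np.ndarray,
                         hair_color, skin_color) -> np.ndarray:
    """`contour_x`/`contour_y` are the head contour-adjusting points 12,
    given in increasing-x order; pixels above the interpolated contour
    line are filled with hair color, pixels below with complexion."""
    h, w, _ = texture.shape
    line = np.interp(np.arange(w), contour_x, contour_y)  # contour per column
    rows = np.arange(h)[:, None]                  # (h, 1) row indices
    above = rows < line[None, :]                  # above the contour line
    out = texture.copy()
    out[above] = hair_color
    out[~above] = skin_color
    return out
```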
When the front, side and rear face textures are completed, the front face texture is mapped onto the geometric configuration of the three-dimensional face over about 120° from the frontal face in step S214. The vertexes corresponding to the front of the geometric configuration are respectively projected to obtain x-y plane texture coordinates. For the side and rear face textures, texture-mapping is executed onto the areas which are not mapped by the front face texture, and the texture coordinates are obtained through cylindrical mapping, unlike the front face texture. The rear face texture is mapped after being cylindrically erected from the side to the rear about the center of the geometric configuration of the entire face when seen from above the head.
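The two texture-coordinate schemes can be sketched directly: a planar projection onto the x-y plane for the front texture, and a cylindrical unwrap (angle about the head's vertical axis plus height) for the side and rear textures. The normalization to [0, 1] is an assumed convention.

```python
import numpy as np

def front_uv(vertices: np.ndarray) -> np.ndarray:
    """Planar coordinates for the front texture: drop z and normalize
    the x-y projection of each vertex to [0, 1]."""
    xy = vertices[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    return (xy - lo) / (hi - lo)

def cylindrical_uv(vertices: np.ndarray) -> np.ndarray:
    """Cylindrical coordinates for the side/rear textures: seen from over
    the head, u is the angle about the head's vertical (y) axis from the
    side toward the rear, and v is the normalized height."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    theta = np.arctan2(z, x)                     # angle in the x-z plane
    u = (theta + np.pi) / (2 * np.pi)            # wrap the angle into [0, 1]
    v = (y - y.min()) / (y.max() - y.min())
    return np.stack([u, v], axis=1)
```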
Industrial Applicability
As described in the foregoing embodiment, the invention realizes the whole three-dimensional face with ease by using only one front face picture.
Further, the invention can realize the complexion and hair color as well as the overall face configuration, thereby perfecting the three-dimensional stereoscopic image.
Moreover, the invention can realize the three-dimensional stereoscopic image with one front face picture, so that the user can create and apply his/her own avatar in the virtual reality (VR) environment.
While the foregoing description has been made in detail about the system of three-dimensional face modeling according to the preferred embodiment of the invention, it is apparent to those skilled in the art that modifications and variations can be made without departing from the scope of the invention defined in the appended claims.

Claims

What is Claimed is:
1. A three-dimensional face modeling system comprising: a virtual three-dimensional standard model providing module for providing a three-dimensional virtual face as a standard model in the form of polygon mesh; a modeled object front image-providing module for providing a front face image of a modeled object; a standard model transforming module for transforming the standard model to identify with the front image of the modeled object after overlaying the front image of the modeled object on the standard model; a front face texture generating module for generating a front face texture through retouching of the front image of the modeled object; a side and rear texture editing module for generating side and rear face textures having a same complexion and hair color as the front texture; and a texture mapping module for mapping the generated front, side and rear face textures to the transformed standard model.
2. The system of claim 1, wherein the side and rear face texture editing module executes a symmetric mode, in which if a hair configuration of any one of left and right sides is varied, the hair configuration of the other side is automatically varied.
3. The system of claim 1, wherein the virtual three-dimensional standard model comprises: a head contour-adjusting line being compared with the front image of the modeled object, to identify width and length ratios of the entire face with the front image of the modeled object; a width-adjusting line being compared with the front image of the modeled object, to identify horizontal positions of eyes, nose and mouth with the front image of the modeled object; a length-adjusting line being compared with the front image of the modeled object, to identify vertical positions of eyes and nose with the front image of the modeled object; and characteristic points being compared with the front image of the modeled object, to identify specific regions of the standard model with the front image of the modeled object.
4. The system of claim 1, wherein the standard model transforming module comprises: a global transforming module for matching head contour-adjusting lines, width-adjusting lines and length-adjusting lines of the standard model to the front image of the modeled object to identify the adjusting lines with an outline of the standard model; and a local transforming module for matching a specific portion of the transformed standard model to the front image of the modeled object to identify the specific portion of the standard model with the front image of the modeled object.
5. The system of claim 4, wherein the global transforming module and the local transforming module execute a symmetric mode, in which if any one of left and right adjusting lines of the standard model and a characteristic point are moved, the remaining adjusting line other than the moved adjusting line and the characteristic point are automatically varied.
6. A three-dimensional face modeling method comprising the steps of: providing a virtual reality three-dimensional face having a geometric configuration expressed in polygon meshes as a standard model; providing a front image of a modeled object on a screen on which the standard model is displayed such that the front image of the modeled object is overlaid on the standard model; transforming the standard model such that the overlaid front image of the modeled object identifies with the standard model; extracting pixels corresponding to complexion and hair color of the front image of the modeled object from portions slightly inner than the boundary of the front and sides in the front image of the modeled object based on the transformed standard model, and generating a front face texture; applying side and rear face textures with a same complexion and hair color as the generated front face texture and editing a hair configuration; and texture-mapping front, side and rear textures of the face with the transformed standard model to restore a three-dimensional face configuration.
7. The method of claim 6, wherein the step of transforming the standard model comprises: a global transformation step of adjusting head contour-adjusting lines, width-adjusting lines and length-adjusting lines of the standard model such that an outline of the standard model identifies with the front image of the modeled object, and linearly retouching positions of entire vertexes; and a local transformation step of transforming characteristic points and characterless points of the global-transformed standard model such that a specific portion of the global-transformed standard model identifies with a specific portion of the front image of the modeled object, and retouching the specific portion in detail.
8. The method of claim 6, wherein the step of generating the front face texture comprises image-processing the pixels corresponding to the extracted complexion in a radial direction about the mouth position and the pixels corresponding to the extracted hair color in a radial direction about the forehead.
9. The method of claim 6, wherein the step of applying the rear face texture with the complexion and hair color is performed by placing a retouched right quadrangular region onto the left of the rear face texture and a retouched left quadrangular region onto the right of the rear face texture about a retouching line at the eye position in the front face texture, and linearly compensating a middle area at both sides thereof.
10. The method of claim 6, wherein the step of texture-mapping is performed by executing a texture-mapping to the front face texture at about 120° from the frontal face and executing a cylindrical mapping to the side and rear face textures.
11. The method of claim 6, wherein the step of retouching the hair configuration in the side and rear faces comprises the steps of: dotting head contour-adjusting points according to head contour of the side and rear faces, and based upon a line composed of the head contour-adjusting points, applying the first portion over the line with the hair color and the second portion under the line with the complexion.
PCT/KR2001/000440 2000-08-10 2001-03-20 3d facial modeling system and modeling method WO2002013144A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2001244763A AU2001244763A1 (en) 2000-08-10 2001-03-20 3d facial modeling system and modeling method
JP2002518427A JP2004506276A (en) 2000-08-10 2001-03-20 Three-dimensional face modeling system and modeling method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR2000/46448 2000-08-10
KR1020000046448A KR100327541B1 (en) 2000-08-10 2000-08-10 3D facial modeling system and modeling method

Publications (1)

Publication Number Publication Date
WO2002013144A1

Family

ID=19682716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2001/000440 WO2002013144A1 (en) 2000-08-10 2001-03-20 3d facial modeling system and modeling method

Country Status (4)

Country Link
JP (1) JP2004506276A (en)
KR (1) KR100327541B1 (en)
AU (1) AU2001244763A1 (en)
WO (1) WO2002013144A1 (en)


Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100422471B1 (en) * 2001-02-08 2004-03-11 비쥬텍쓰리디(주) Apparatus and method for creation personal photo avatar
KR100422470B1 (en) * 2001-02-15 2004-03-11 비쥬텍쓰리디(주) Method and apparatus for replacing a model face of moving image
KR20030082160A (en) * 2002-04-17 2003-10-22 백수곤 Real Time Sprite Modeling
KR100722229B1 (en) 2005-12-02 2007-05-29 한국전자통신연구원 Apparatus and method for immediately creating and controlling virtual reality interaction human model for user centric interface
GB2450757A (en) * 2007-07-06 2009-01-07 Sony Comp Entertainment Europe Avatar customisation, transmission and reception
KR100965622B1 (en) * 2008-10-31 2010-06-23 김영자 Method and Apparatus for making sensitive character and animation
KR101216614B1 (en) * 2010-09-14 2012-12-31 현대엠엔소프트 주식회사 A navigation apparatus and method for displaying topography thereof
JP5842541B2 (en) * 2011-11-01 2016-01-13 大日本印刷株式会社 3D portrait creation device
US10155168B2 (en) 2012-05-08 2018-12-18 Snap Inc. System and method for adaptable avatars
US10326972B2 (en) 2014-12-31 2019-06-18 Samsung Electronics Co., Ltd. Three-dimensional image generation method and apparatus
US10339365B2 (en) * 2016-03-31 2019-07-02 Snap Inc. Automated avatar generation
KR101829334B1 (en) 2016-05-31 2018-02-19 주식회사 코어라인소프트 System and method for displaying medical images providing user interface three dimensional volume based three dimensional mesh adaptation
US10432559B2 (en) 2016-10-24 2019-10-01 Snap Inc. Generating and displaying customized avatars in electronic messages
KR102152078B1 (en) 2016-12-07 2020-09-04 한국전자통신연구원 Apparatus and method for generating 3d face model
KR102024598B1 (en) * 2017-11-03 2019-09-24 울산대학교 산학협력단 Method and apparatus for generating 3d model data for manufacturing of implant
KR102104889B1 (en) * 2019-09-30 2020-04-27 이명학 Method of generating 3-dimensional model data based on vertual solid surface models and system thereof
CN112669447B (en) * 2020-12-30 2023-06-30 网易(杭州)网络有限公司 Model head portrait creation method and device, electronic equipment and storage medium
CN112884638A (en) * 2021-02-02 2021-06-01 北京东方国信科技股份有限公司 Virtual fitting method and device


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100281965B1 (en) * 1998-04-24 2001-02-15 전주범 Face Texture Mapping Method of Model-based Coding System
KR100317138B1 (en) * 1999-01-19 2001-12-22 윤덕용 Three-dimensional face synthesis method using facial texture image from several views

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000113193A * 1998-10-08 2000-04-21 Minolta Co Ltd Synthesizing method for multi-viewpoint three-dimensional data and recording medium
JP2000149060A (en) * 1998-11-05 2000-05-30 Ricoh Co Ltd Method and device for preparing polygon mesh, method and device for presenting shape feature of polygonal mesh, preparation program for polygonal mesh and storage medium with shape feature presentation program stored therein

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2838846A1 (en) * 2002-04-22 2003-10-24 Delphine Software Internat Real time 3D image generation system uses detailed projected two dimensional image on solid model with moving parts having erased relief
WO2005116932A1 (en) * 2004-05-26 2005-12-08 Gameware Europe Limited Animation systems
CN100430963C (en) * 2005-09-29 2008-11-05 中国科学院自动化研究所 Method for modeling personalized human face basedon orthogonal image
GB2459073B (en) * 2007-02-05 2011-10-12 Amegoworld Ltd A communication network and devices
GB2459073A (en) * 2007-02-05 2009-10-14 Amegoworld Ltd A communication network and devices for text to speech and text to facial animation conversion
WO2008096099A1 (en) * 2007-02-05 2008-08-14 Amegoworld Ltd A communication network and devices for text to speech and text to facial animation conversion
GB2480173A (en) * 2007-02-05 2011-11-09 Amegoworld Ltd A data structure for representing an animated model of a head/face wherein hair overlies a flat peripheral region of a partial 3D map
RU2488232C2 (en) * 2007-02-05 2013-07-20 Амеговорлд Лтд Communication network and devices for text to speech and text to facial animation conversion
WO2008144843A1 (en) * 2007-05-31 2008-12-04 Depth Analysis Pty Ltd Systems and methods for applying a 3d scan of a physical target object to a virtual environment
CN110223374A (en) * 2019-05-05 2019-09-10 太平洋未来科技(深圳)有限公司 A kind of pre-set criteria face and head 3D model method
CN111179210A (en) * 2019-12-27 2020-05-19 浙江工业大学之江学院 Method and system for generating texture map of face and electronic equipment
CN111179210B (en) * 2019-12-27 2023-10-20 浙江工业大学之江学院 Face texture map generation method and system and electronic equipment
CN113327277A (en) * 2020-02-29 2021-08-31 华为技术有限公司 Three-dimensional reconstruction method and device for half-body image
CN117058329A (en) * 2023-10-11 2023-11-14 湖南马栏山视频先进技术研究院有限公司 Face rapid three-dimensional modeling method and system
CN117058329B (en) * 2023-10-11 2023-12-26 湖南马栏山视频先进技术研究院有限公司 Face rapid three-dimensional modeling method and system

Also Published As

Publication number Publication date
KR100327541B1 (en) 2002-03-08
KR20000063919A (en) 2000-11-06
AU2001244763A1 (en) 2002-02-18
JP2004506276A (en) 2004-02-26

Similar Documents

Publication Publication Date Title
WO2002013144A1 (en) 3d facial modeling system and modeling method
EP1424655B1 (en) A method of creating 3-D facial models starting from facial images
CN107274493B (en) Three-dimensional virtual trial type face reconstruction method based on mobile platform
Kurihara et al. A transformation method for modeling and animation of the human face from photographs
JP3030485B2 (en) Three-dimensional shape extraction method and apparatus
US6549200B1 (en) Generating an image of a three-dimensional object
US20100328307A1 (en) Image processing apparatus and method
JP2007265396A (en) Method and system for generating face model
CN106652015B (en) Virtual character head portrait generation method and device
CN113744374B (en) Expression-driven 3D virtual image generation method
CN106652037B (en) Face mapping processing method and device
TWI750710B (en) Image processing method and apparatus, image processing device and storage medium
CN114821675B (en) Object processing method and system and processor
KR20190040746A (en) System and method for restoring three-dimensional interest region
KR100317138B1 (en) Three-dimensional face synthesis method using facial texture image from several views
CN113808272B (en) Texture mapping method in three-dimensional virtual human head and face modeling
JPH06118349A (en) Spectacles fitting simulation device
CN112561784B (en) Image synthesis method, device, electronic equipment and storage medium
JP2739447B2 (en) 3D image generator capable of expressing wrinkles
JP3850080B2 (en) Image generation and display device
JPH08329278A (en) Picture processor
KR20020079268A (en) The System and Method composing 3D contents with 3D human body in Virtual Reality with real time.
JP3648099B2 (en) Image composition display method and apparatus, and recording medium on which image composition display program is recorded
Weerasinghe et al. 2D to pseudo-3D conversion of "head and shoulder" images using feature based parametric disparity maps
JP2001222725A (en) Image processor

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2002518427

Country of ref document: JP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase