3D FACIAL MODELING SYSTEM AND MODELING METHOD
BACKGROUND OF THE INVENTION
Technical Field
The present invention relates to three-dimensional face modeling for converting a single front face image into a three-dimensional stereoscopic image of the entire face, and more particularly, to three-dimensional face modeling in which a user can retouch the front, side and rear images of the face on a displayed screen to reproduce an accurate three-dimensional stereoscopic image.
Background Art
The face is the most important portion when modeling a human body in three dimensions. It plays the most important role in representing the identity of a person, and includes a number of important characteristic elements in a small area. Even a slight change in such characteristic elements causes the entire face to look different.
Also, when the three-dimensional face is coupled with additional three-dimensional elements such as the human body portion, and an animation effect is given to the coupled image, an avatar having the actual identity of a user can appear in a virtual reality (VR) environment. In other words, in order to realize the VR environment with the real avatar of the user, three-dimensional face modeling of the user should first be realized.
In general, two three-dimensional face modeling methods are known: a method utilizing three-dimensional scanning equipment, and a stereo vision method in which two-dimensional information corresponding to a common portion of pictures photographed at different angles is combined to model a three-dimensional image from the combined two-dimensional information.
The three-dimensional face modeling method utilizing the scanning equipment can perfectly restore not only a geometric model but also texture information such as complexion. However, the equipment is expensive, and thus this method is rarely available to general consumers.
Further, the method of using stereo vision to reconstruct two-dimensional information into three-dimensional information requires that the pictures be photographed at various angles so as to share common areas.
Disclosure of the Invention
It is an object of the invention to provide a three-dimensional face modeling system capable of modeling a human face as a three-dimensional image with only one face picture photographed from the front.
It is another object of the invention to provide a three-dimensional face modeling method capable of modeling a human face as a three-dimensional image with only one face picture photographed from the front.
To accomplish the above objects, there is provided a three-dimensional face modeling method. The method comprises the steps of: providing a virtual three-dimensional face having a geometric configuration expressed in polygon meshes as a standard model; providing a front image of a modeled object on a screen on which the standard model is displayed such that the front image of the modeled object is overlaid on the standard model; transforming the standard model such that the overlaid front image of the modeled object coincides with the standard model; extracting pixels corresponding to the complexion and hair color of the front image of the modeled object from portions slightly inside the boundary between the front and the sides of the front image of the modeled object, based on the transformed standard model, and generating a front face texture; applying side and rear face textures with the same complexion and hair color as the generated front face texture and editing a hair configuration; and texture-mapping the front, side and rear textures of the face onto the transformed standard model to restore a three-dimensional face configuration.
Preferably, the step of transforming the standard model comprises: a global transformation step of adjusting head contour-adjusting lines, width-adjusting lines and length-adjusting lines of the standard model such that an outline of the standard model coincides with the front image of the modeled object, and linearly retouching the positions of all vertexes; and a local transformation step of transforming characteristic points and characterless points of the globally transformed standard model such that a specific portion of the globally transformed standard model coincides with a specific portion of the front image of the modeled object, and retouching the specific portion in detail.
Brief Description of the Drawings
Fig. 1 is a schematic block diagram of a three-dimensional face modeling system in accordance with the invention;
Fig. 2 is a flow chart for modeling a three-dimensional face in accordance with the invention;
Fig. 3 shows a virtual three-dimensional standard model displayed on a screen in accordance with an embodiment of the invention;
Fig. 4 shows an embodiment of the invention, in which a front image of a modeled object is overlaid on a standard model;
Fig. 5 shows an embodiment of the invention, in which characteristic points of the standard model are adjusted such that the standard model coincides with the front image of the modeled object;
Fig. 6 shows an embodiment of the invention, in which complexion and hair color of a front face texture are reconstructed; and
Fig. 7 shows an embodiment of the invention for editing a rear face texture.
Best Mode for Carrying out the Invention
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Fig. 1 is a schematic block diagram of a three-dimensional face modeling system of the invention, Fig. 2 is a flow chart for modeling a three-dimensional face according to the invention, Fig. 3 shows a virtual three-dimensional standard model displayed on a screen according to an embodiment of the invention, Fig. 4 shows the embodiment of the invention, in which a front image of a modeled object is overlaid on a standard model, Fig. 5 shows the embodiment of the invention, in which characteristic points of the standard model are adjusted such that the standard model coincides with the front image of the modeled object, Fig. 6 shows the embodiment of the invention, in which complexion and hair color of a front face texture are reconstructed, and Fig. 7 shows the embodiment of the invention for editing a rear face texture.
Figs. 3 to 7 are pictures displayed on a screen for describing the embodiment of the present invention, in which the screen is divided into four small regions, i.e., upper left, upper right, lower left and lower right regions: in the upper left region, the standard model is transformed in the front view; in the upper right region, the standard model is transformed in the side view; in the lower left region, the textures of the side and rear faces are edited; and in the lower right region, the three-dimensionally restored face configuration is rotated stereoscopically so that the restored face configuration can be checked.
Further, as used in the invention, the term polygon mesh means a network structure of triangles composed of 500 to 1,000 points for expressing the size and configuration of the entire face.
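By way of illustration only, the following minimal Python sketch shows one possible representation of such a polygon mesh; the class and field names are hypothetical and are not part of the invention.

from dataclasses import dataclass

@dataclass
class PolygonMesh:
    # Triangular mesh of roughly 500 to 1,000 vertices covering the entire face.
    vertices: list   # (x, y, z) coordinate tuples
    triangles: list  # (i, j, k) indices into the vertex list, one triple per triangle

    def vertex_count(self):
        return len(self.vertices)

# Smallest possible example: a mesh consisting of a single triangle.
mesh = PolygonMesh(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    triangles=[(0, 1, 2)],
)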
The three-dimensional face modeling system of the invention comprises: a virtual three-dimensional model providing module 102 for providing a three-dimensional virtual face in the form of a polygon mesh as a standard model; a modeled object front image providing module 104 for providing a front face image of a modeled object; a standard model transforming module 106 for transforming the standard model such that the standard model coincides with the modeled object after the front image of the modeled object is overlaid on the standard model; a front face texture generating module 112 for retouching the front face image of the modeled object to generate a front face texture; side and rear face texture editing modules 114 and 116 for preparing side and rear face textures with the same complexion and hair color as the front face texture; and a texture-mapping module 118 for mapping the front, side and rear textures onto the transformed standard model.
Preferably, the virtual three-dimensional model-providing module 102, as shown in Fig. 3, prepares a three-dimensional virtual standard model on the basis of an average face configuration. The standard model is provided as a geometrical stereoscopic image,
in which triangular polygon meshes 10 are combined to form the entire face. Further, the standard model provided on the screen includes head contour-adjusting lines 2, width-adjusting lines 4, length-adjusting lines 6 and characteristic points 8. The head contour-adjusting lines 2 act to adjust the width and length ratios of the entire face such that the standard model coincides with a front face image 1 of a modeled object. The width-adjusting lines 4, the length-adjusting lines 6 and the characteristic points 8 are used for adjusting specific regions of the face.
The modeled object front image providing module 104, as shown in Fig. 4, provides the front face image 1 of the modeled object, which is obtained by scanning a front face picture of the modeled object for three-dimensional editing, so that it is overlaid on the screen where the standard model is on display. Therefore, the plurality of adjusting lines 2, 4 and 6 and the characteristic points 8 make it easy to recognize the overall size of the front image of the actual modeled object and the positional differences between important regions.
The standard model transforming module 106 generally includes a global transforming module 108 and a local transforming module 110 for transforming the standard model such that the outer contour of the standard model coincides with the front face image 1 of the modeled object.
The global transforming module 108, as shown in Fig. 5, compares the overlaid front face image 1 of the modeled object with the standard model, and adjusts the head contour-adjusting lines 2, the width-adjusting lines 4, the length-adjusting lines 6 and the like such that the standard model coincides with the front image 1 of the modeled object.
After the global transforming module 108 generally matches the outer contour of the standard model with the front face image 1 of the modeled object, the local transforming module 110 adjusts the characteristic points 8, which are previously set to specific regions of the face of the standard model such as the eyebrows, eyes, nose, mouth and ears, such that the characteristic points 8 coincide with the front face image 1 of the modeled object. Further, these characteristic points 8 are coupled to the variation of other characteristic points 8, so that a variation of the characteristic points also moves the characterless points that express the overall face contour.
After the front face image 1 of the modeled object is made to coincide with the outer contour of the standard model, the front face texture generating module 112, as shown in Fig. 6, extracts several pixels corresponding to the complexion and hair color of the front face image of the modeled object from portions slightly inside the boundary between the front and the side of the front face image of the modeled object, on the basis of the head contour-adjusting lines adjusted by the global transforming module 108, and generates a front face texture by retouching the front face image of the modeled object.
Further, the side and rear face texture editing modules 114 and 116 generate and edit the side and rear face textures by using the front face texture. In the initial generation stage, the side and rear face textures have the same size as the front face texture. In configuration, the right quadrangular area of the front face texture is placed onto the left of the rear face texture, and the left quadrangular area is placed onto the right of the rear face texture, the right and left quadrangular areas being retouched on the basis of retouching lines at the eye positions in the front face texture. Further, the middle area between them is filled through linear interpolation. After the generation, the edited side and rear face textures are retouched by using head contour-adjusting points 12 as shown in Fig. 7.
The texture-mapping module 118 maps the generated front, side and rear textures onto the transformed standard model. The front face texture is mapped over about 120° with respect to the frontal face, and its texture coordinates are obtained by projecting each vertex onto the x-y plane. The side and rear face textures are mapped onto those areas which are not mapped by the front face texture, and their texture coordinates are obtained through cylindrical mapping, unlike the front face texture. In the cylindrical mapping, the side and rear face textures are wrapped cylindrically from the side to the rear about the center of the head when seen from over the head, and then mapped.
Hereinafter, as shown in Fig. 2, a process of three-dimensional face modeling of the invention will be described in detail.
First, a three-dimensional virtual standard model is displayed on a screen in step S202. The standard model is displayed on the screen in both front and side views, as shown in Fig. 3. The displayed standard model is expressed by the triangular polygon meshes 10 across the entire face, thereby realizing the geometric configuration of the entire face. Further, the standard model includes the head contour-adjusting lines 2 for adjusting the width/length ratios of the face, the width-adjusting lines 4 for adjusting the positions of the eyes, nose and mouth, the length-adjusting lines 6 for adjusting the positions of both eyes and the nose, and the characteristic points 8 for schematically expressing the positions of specific regions of the face.
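By way of illustration only, the standard model and its adjusting lines described above could be organized as in the following Python sketch; the field names and the normalized line positions are hypothetical assumptions made for the example, not values prescribed by the invention.

from dataclasses import dataclass, field

@dataclass
class StandardModel:
    # Standard face model as displayed in step S202, with its on-screen controls.
    mesh_vertices: list         # vertices of the triangular polygon meshes 10
    mesh_triangles: list        # triangles of the polygon meshes 10
    head_contour_lines: list    # head contour-adjusting lines 2 (width/length ratio of the face)
    width_lines: dict           # width-adjusting lines 4: rows for eyes, nose end, mouth
    length_lines: dict          # length-adjusting lines 6: columns for both eyes and nose
    feature_points: dict = field(default_factory=dict)  # characteristic points 8 by face region

model = StandardModel(
    mesh_vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    mesh_triangles=[(0, 1, 2)],
    head_contour_lines=[0.0, 1.0],                       # top and bottom of the head (normalized y)
    width_lines={"eyes": 0.40, "nose_end": 0.60, "mouth": 0.78},
    length_lines={"left_eye": 0.35, "nose": 0.50, "right_eye": 0.65},
    feature_points={"eyebrow": [], "eyes": [], "nose": [], "mouth": [], "ears": []},
)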
After the standard model is provided, as shown in Fig. 4, a front face image 1 in color or black and white, which is stored by scanning a front face picture of the modeled object, is overlaid on the screen where the standard model is displayed in step S204. The provided front face image is thus overlaid for comparison with the standard model composed of the polygon meshes 10.
When the front face image 1 of the modeled object is displayed on the screen
together with the standard model, a user globally transforms the standard model in step S206. In the global transformation, the front face image 1 is compared with the standard model, and the head contour-adjusting lines 2, the width-adjusting lines 4 and the length-adjusting lines 6 of the standard model are adjusted so that their ratios coincide with the corresponding portions of the front face image 1. For the front face image 1, the head contour-adjusting lines 2 are adjusted to its width and length ratios; the three horizontal width-adjusting lines are adjusted so that the upper line coincides with the position of the eyes, the middle line with the end of the nose, and the lower line with the position of the mouth; and the three vertical length-adjusting lines are adjusted to coincide with the positions of the eyes and the nose. The whole vertexes are then grouped, and the positions of all vertexes of the standard model are linearly retouched according to the amount of variation in the positions of the transformed adjusting lines.
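By way of illustration only, the linear retouching of vertex positions from the moved adjusting lines could be carried out with a piecewise-linear remapping such as the following Python sketch, which assumes (hypothetically) that the adjusting-line positions are stored as increasing normalized coordinates along each axis.

import numpy as np

def retouch_vertices(vertices, old_lines_y, new_lines_y, old_lines_x, new_lines_x):
    # Linearly remap vertex positions so that the standard model follows the
    # adjusted width- and length-adjusting lines.
    #   vertices    : (N, 3) array of x, y, z coordinates
    #   old_lines_y : original y positions of the horizontal adjusting lines (plus head top/bottom)
    #   new_lines_y : adjusted y positions matching the front face image
    #   old_lines_x, new_lines_x : same for the vertical adjusting lines
    v = np.asarray(vertices, dtype=float).copy()
    # np.interp performs piecewise-linear interpolation between the adjusting lines,
    # so every vertex moves proportionally to the displacement of its nearest lines.
    v[:, 0] = np.interp(v[:, 0], old_lines_x, new_lines_x)
    v[:, 1] = np.interp(v[:, 1], old_lines_y, new_lines_y)
    return v

# Hypothetical example: the mouth line (y = 0.78) is moved up to y = 0.74.
verts = np.array([[0.5, 0.77, 0.1], [0.3, 0.40, 0.2]])
adjusted = retouch_vertices(verts,
                            old_lines_y=[0.0, 0.40, 0.60, 0.78, 1.0],
                            new_lines_y=[0.0, 0.40, 0.60, 0.74, 1.0],
                            old_lines_x=[0.0, 0.35, 0.50, 0.65, 1.0],
                            new_lines_x=[0.0, 0.35, 0.50, 0.65, 1.0])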
After the entire face ratio of the standard model is adjusted through the global transformation, the specific regions are retouched in detail through a local transformation in step S208. The local transformation adjusts the characteristic points 8 respectively specified in the regions of the standard model so that they correspond to the front face image 1. The characteristic points 8 schematically express the sizes and positions of specific regions of the face such as the face contour, eyebrows, eyes, nose, mouth and ears, and are indexed so that the surrounding characterless points move cooperatively under the influence of their surroundings when the characteristic points 8 are varied. In the transformation of the characteristic points 8, the front characteristic points relate to movement in the x and y coordinates among the x, y and z coordinates, and the side characteristic points relate to movement along the z coordinate.
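By way of illustration only, the cooperative movement of the characterless points around a characteristic point could be sketched as follows; the linear distance-based weighting and the radius parameter are assumptions made for the example, since the description above does not prescribe a specific weighting.

import numpy as np

def move_feature_point(vertices, feature_index, displacement, radius=0.1):
    # Move one characteristic point and drag the surrounding characterless points
    # with a weight that falls off linearly with distance from that point.
    v = np.asarray(vertices, dtype=float).copy()
    center = v[feature_index]
    dist = np.linalg.norm(v - center, axis=1)
    weight = np.clip(1.0 - dist / radius, 0.0, 1.0)   # 1 at the point, 0 beyond the radius
    v += weight[:, None] * np.asarray(displacement, dtype=float)
    return v

# Front view: characteristic points move in x and y; side view: in z only.
verts = np.array([[0.50, 0.60, 0.00], [0.52, 0.61, 0.01], [0.90, 0.90, 0.00]])
verts = move_feature_point(verts, feature_index=0, displacement=[0.02, -0.01, 0.0])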
In the global and local transformations, the user can selectively use a symmetric mode and an asymmetric mode. The symmetric mode is based on the fact that a human face is bilaterally symmetric. In the symmetric mode, when an adjusting line or a characteristic point on the right or left side is moved, the corresponding adjusting line or characteristic point on the other side moves automatically. Alternatively, the asymmetric mode can be used to adjust the adjusting lines 2, 4 and 6 or the characteristic points 8 individually.
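By way of illustration only, the symmetric mode could be sketched as follows, assuming (hypothetically) that the face is centered on a vertical symmetry axis at x = 0.5 in normalized image coordinates; the names are hypothetical.

AXIS_X = 0.5  # assumed vertical symmetry axis of the face in normalized coordinates

def mirror_point(point):
    # Return the bilaterally symmetric counterpart of an adjusting point or characteristic point.
    x, y = point
    return (2 * AXIS_X - x, y)

def move_symmetric(point, displacement):
    # Symmetric mode: moving a point on one side moves its mirror on the other side automatically.
    dx, dy = displacement
    moved = (point[0] + dx, point[1] + dy)
    moved_mirror = mirror_point(moved)
    return moved, moved_mirror

left, right = move_symmetric((0.38, 0.42), (0.01, 0.0))   # e.g. a left-eye characteristic point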
On the basis of the standard model transformed through the global and local transformations, the face image of the modeled object is retouched to generate the front face texture in step S210. As shown in Fig. 6, by using the positions of the adjusting lines set in the global transformation, pixels corresponding to the complexion and hair color are extracted from portions slightly inside the boundary between the front and the side of the front face image 1 of the modeled object, and image processing retouches the outer areas with the extracted complexion and hair color to generate the front face texture.
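By way of illustration only, the generation of the front face texture could be sketched as follows; it assumes (hypothetically) that the front image is an RGB array and that a boolean mask of the adjusted face contour is available from the global transformation, and the simple erosion used to find the sampling band is only one possible choice.

import numpy as np

def make_front_texture(image, face_mask, inset=5):
    # Sample complexion a few pixels inside the face boundary and paint
    # everything outside the face contour with that color.
    #   image     : (H, W, 3) RGB array of the front face picture
    #   face_mask : (H, W) boolean array, True inside the adjusted face contour
    #   inset     : how many pixels inside the boundary to sample from
    img = np.asarray(image, dtype=float).copy()
    mask = np.asarray(face_mask, dtype=bool)
    inner = mask.copy()
    for _ in range(inset):                       # erode the mask by one pixel per pass
        shrunk = inner.copy()
        shrunk[1:, :] &= inner[:-1, :]
        shrunk[:-1, :] &= inner[1:, :]
        shrunk[:, 1:] &= inner[:, :-1]
        shrunk[:, :-1] &= inner[:, 1:]
        inner = shrunk
    ring = mask & ~inner                         # thin band slightly inside the contour
    fill = img[ring].mean(axis=0)                # average complexion/hair color of that band
    img[~mask] = fill                            # retouch everything outside the face
    return img.astype(np.uint8)

# Hypothetical usage with a dummy 50 x 50 image and a rectangular face mask.
img = np.full((50, 50, 3), 200, dtype=np.uint8)
msk = np.zeros((50, 50), dtype=bool)
msk[10:40, 10:40] = True
front_texture = make_front_texture(img, msk)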
After the front face texture is generated, the side and rear textures of the face are generated and edited in step S212. First, in the initial configuration, the right quadrangular area of the front face texture is placed onto the left of the rear face texture, and the left quadrangular area is placed onto the right of the rear face texture, the right and left quadrangular areas being retouched on the basis of retouching lines at the eye positions in the front face texture. Further, the middle area between them is filled through linear interpolation. In texture mapping, the user edits unnatural areas of hair in the regions connecting the front texture and the rear texture while checking the side and rear textures of the corresponding regions.
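By way of illustration only, the initial assembly of the rear face texture could be sketched as follows, assuming (hypothetically) that the textures are RGB arrays of equal size and that the retouching lines at the eye positions are given as column indices; the column arithmetic is an assumption made for the example.

import numpy as np

def make_rear_texture(front_texture, left_col, right_col):
    # Initial rear face texture: the right quadrangular area of the front texture is
    # placed on the left of the rear texture, the left area on the right, and the
    # middle band between the two seams is filled by linear interpolation.
    #   left_col, right_col : column indices of the retouching lines at the eye positions
    front = np.asarray(front_texture, dtype=float)
    h, w, _ = front.shape
    rear = np.empty_like(front)
    rear[:, : w - right_col] = front[:, right_col:]    # right area -> left of rear texture
    rear[:, w - left_col:] = front[:, :left_col]       # left area  -> right of rear texture
    lo, hi = w - right_col, w - left_col
    if hi > lo:
        t = np.linspace(0.0, 1.0, hi - lo)[None, :, None]
        rear[:, lo:hi] = (1 - t) * rear[:, lo - 1:lo] + t * rear[:, hi:hi + 1]
    return rear.astype(np.uint8)

# Hypothetical usage with a 64 x 100 front texture and eye lines at columns 30 and 70.
front = np.zeros((64, 100, 3), dtype=np.uint8)
rear = make_rear_texture(front, left_col=30, right_col=70)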
As shown in Fig. 7, the head configuration in the side and rear textures of the face is retouched with the head contour-adjusting points 12. After the head contour is determined in the side and rear faces, the head contour-adjusting points 12 are dotted to form a contour shape. Image processing is then applied about the line composed of those points so that the portion above the line is filled with hair color and the portion below the line is filled with complexion. In this case, the user can use the symmetric mode in the same fashion as in the global and local transformations to apply the variation of the side and rear textures on one side to the other side.
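By way of illustration only, the image processing about the head contour line could be sketched as follows, assuming (hypothetically) that the contour-adjusting points are given as (x, y) pixel positions and that single hair and complexion colors are used for the fill.

import numpy as np

def fill_around_contour(texture, contour_points, hair_color, skin_color):
    # Fill the region above the head contour line with hair color and the
    # region below it with complexion.
    #   contour_points : list of (x, y) head contour-adjusting points, left to right
    img = np.asarray(texture, dtype=float).copy()
    h, w, _ = img.shape
    pts = sorted(contour_points)                        # sort points by x
    xs = np.arange(w)
    # Piecewise-linear contour height at every column (y grows downwards).
    contour_y = np.interp(xs, [p[0] for p in pts], [p[1] for p in pts])
    rows = np.arange(h)[:, None]                        # (h, 1) row indices
    above = rows < contour_y[None, :]                   # pixels above the contour line
    img[above] = hair_color
    img[~above] = skin_color
    return img.astype(np.uint8)

# Hypothetical usage with a 100 x 100 texture and three contour-adjusting points.
tex = np.zeros((100, 100, 3), dtype=np.uint8)
out = fill_around_contour(tex, [(0, 40), (50, 20), (99, 40)],
                          hair_color=(40, 30, 20), skin_color=(220, 180, 160))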
When the front, side and rear face textures are completed, the front face texture is mapped onto the portion of the geometric configuration of the three-dimensional face within about 120° of the frontal direction in step S214. The vertexes corresponding to the front of the geometric configuration are each projected to obtain their x-y plane texture coordinates. For the side and rear face textures, texture-mapping is executed onto the areas which are not mapped by the front face texture, and the texture coordinates are obtained through the cylindrical mapping, unlike the front face texture. The rear face texture is mapped after being cylindrically wrapped from the side to the rear about the center of the geometric configuration of the entire face when seen from over the head.
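By way of illustration only, the computation of texture coordinates could be sketched as follows, assuming (hypothetically) that the vertexes are centered on the head with the +z axis pointing out of the face; the 120° front region and the cylindrical mapping follow the description above, while the function and variable names are hypothetical.

import numpy as np

def texture_coordinates(vertices):
    # Assign u, v texture coordinates: vertexes within about 120 degrees of the frontal
    # direction are projected onto the x-y plane (front texture), and the remaining
    # vertexes are mapped cylindrically around the vertical axis (side/rear textures).
    #   vertices : (N, 3) array centered on the head, +z pointing out of the face
    v = np.asarray(vertices, dtype=float)
    x, y, z = v[:, 0], v[:, 1], v[:, 2]
    angle = np.arctan2(x, z)                       # angle from the frontal direction, seen from above
    front = np.abs(angle) <= np.radians(60)        # within +/-60 degrees, i.e. a 120-degree front region
    uv = np.empty((len(v), 2))
    # Front texture: simple projection onto the x-y plane.
    uv[front, 0] = x[front]
    uv[front, 1] = y[front]
    # Side/rear textures: cylindrical mapping about the center of the head.
    uv[~front, 0] = angle[~front] / (2 * np.pi)    # position around the cylinder
    uv[~front, 1] = y[~front]                      # height along the cylinder
    return uv, front

verts = np.array([[0.0, 0.1, 1.0], [1.0, 0.0, -0.2], [-0.8, -0.3, -0.5]])
uv, is_front = texture_coordinates(verts)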
Industrial Applicability
As described in the foregoing embodiment, the invention realizes the whole three-dimensional face with ease by using only one front face picture.
Further, the invention can realize the complexion and hair color as well as the overall face configuration, thereby perfecting the three-dimensional stereoscopic image.
Moreover, the invention can realize the three-dimensional stereoscopic image with one front face picture so that the user can realize and apply his/her own avatar to the VR environment.
While the foregoing description has been made in detail about the system of
three-dimensional face modeling according to the preferred embodiment of the invention, it is apparent to those skilled in the art that modifications and variations can be made without departing from the scope of the invention defined in the appended claims.