GB2336735A - Face image mapping - Google Patents

Face image mapping

Info

Publication number
GB2336735A
GB2336735A GB9822138A GB9822138A GB2336735A GB 2336735 A GB2336735 A GB 2336735A GB 9822138 A GB9822138 A GB 9822138A GB 9822138 A GB9822138 A GB 9822138A GB 2336735 A GB2336735 A GB 2336735A
Authority
GB
United Kingdom
Prior art keywords
face image
texture
feature
standard face
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9822138A
Other versions
GB9822138D0 (en)
Inventor
Seok-Hyun Ryu
Kang-Sik Yoon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WiniaDaewoo Co Ltd
Original Assignee
Daewoo Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1019980014640A (see KR100281965B1)
Priority claimed from KR1019980019414A (see KR100292238B1)
Application filed by Daewoo Electronics Co Ltd filed Critical Daewoo Electronics Co Ltd
Publication of GB9822138D0
Publication of GB2336735A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00: Image coding
    • G06T9/001: Model-based coding, e.g. wire frame
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Image Generation (AREA)

Abstract

A 3 dimensional standard face image is resized until a contour of the resized 3 dimensional standard face image coincides with that of a texture face image, Figure 2A, to thereby generate a contour matched standard face image. Resizing uses a scale factor derived from the horizontal-to-vertical ratio of the texture face image and a diagonal ratio represented by the angle P4-P5-P6. Then, positions of feature elements of the texture face image are determined based on a knowledge based feature extraction algorithm and positions of feature elements of the contour matched standard face image are detected. The feature elements of the contour matched standard face image are repositioned based on the positions of those of the texture face image to thereby generate a feature matched standard face image, and texture data of the texture face image is mapped on the feature matched standard face image.

Description

FACE IMAGE MAPPING METHOD AND APPARATUS FOR USE IN A MODEL BASED CODING SYSTEM

The present invention relates to a face image mapping method and apparatus; and, more particularly, to a method and apparatus for mapping a texture face image to a 3 dimensional standard face image for use in a 3 dimensional model based coding system.
In digitally televised systems such as video-telephone, teleconference and high definition television systems, a large amount of digital data is needed to define each video frame signal since a video line signal in the video frame signal comprises a sequence of digital data referred to as pixel values. Since, however, the available frequency bandwidth of a conventional transmission channel is limited, in order to transmit the large amount of digital data therethrough, it is necessary to compress or reduce the volume of data through the use of various data compression techniques, especially for such low bitrate video signal encoders as video-telephone and teleconference systems employed for transmitting, e.g., a picture of a person therethrough.
In the video-telephone and teleconference systems, the video images are primarily comprised of head-and-shoulder views, i.e., an upper body, of a person. Furthermore, the main object of interest to a viewer will most likely be the face of the person, and the viewer will focus his/her attention on moving parts, i.e., the person's mouth area including the lips and the chin that are moving, especially when the person is talking in a video scene. Therefore, if only general information on the shape of the face is transmitted, the amount of digital data can be substantially reduced.
Thus, in a conventional 3 dimensional model-based coding system, particular motion parameters are extracted from the face images and transmitted to a receiving end instead of transmitting all the pixel values which vary continuously. At the receiving end, in order to reconstruct the face images, the received motion parameters are combined with a basic face image of a person transmitted thereto in advance, wherein the basic face image of the person is obtained by mapping a face image of the person to a 3 dimensional standard face image.
A first step to map the face image of the person to the 3 dimensional standard face image, according to a conventional 3 dimensional model-based coding technique, is overlapping the two face images, wherein the 3 dimensional standard face image is implemented by connecting a multiplicity of polygons and positions of vertices of the polygons are offered as 2 dimensional positions with their z positions being set to "0". Then, a vertical-to-horizontal ratio of the face image is calculated. Based on the calculated aspect ratio, the 3 dimensional standard face image is expanded or contracted to be matched with the face image.
Thereafter, positions of respective feature elements, e.g., eyes, nose and mouth, in the face image, are matched to those of the corresponding feature elements in the expanded or contracted 3 dimensional standard face image. And, texture data of the face image is projected to the 3 dimensional standard face image.
In the conventional 3 dimensional model-based coding system illustrated above, however, the 3 dimensional standard face image is expanded or contracted based only on the vertical-to-horizontal ratio of the texture face image, which may result in an inaccurate matching of the two face images. Furthermore, the positions of the feature elements of the texture face image are not known except for some feature points, while the positions of the feature elements of the 3 dimensional standard face image are known beforehand; and the process to detect the precise positions of the feature elements of the texture face image is complicated and takes a long time. Accordingly, it is necessary to develop a face image mapping scheme capable of precisely mapping the two face images onto each other and efficiently finding the positions of the feature elements of the texture face image.
It is, therefore, a primary object of the present invention to provide a method and apparatus for efficiently mapping a texture face image to a standard face image for use in a 3-dimensional model based coding system.
In accordance with one aspect of the present invention, there is provided a method, for use in a 3 dimensional model based coding system, for mapping a 3 dimensional standard face image to a texture face image, wherein a face picture is provided to the 3 dimensional model based coding system, comprising the steps of: (a) obtaining the texture face image from the face picture; (b) providing the 3 dimensional standard face image; (c) resizing the 3 dimensional standard face image based on a scale factor of the texture face image to thereby generate a contour matched standard face image and contour matched feature elements, wherein the scale factor is determined in consideration of a horizontal-to-vertical ratio and a diagonal ratio of the texture face image; (d) repositioning the contour matched feature elements of the contour matched standard face image based on feature elements of the texture face image to thereby generate a feature matched standard face image; and (e) mapping texture data of the texture face image on the feature matched standard face image.
In accordance with another aspect of the present invention, there is provided an apparatus, for use in a 3 dimensional model based coding system, for mapping a 3 dimensional standard face image to a texture face image, wherein a face picture is provided to the 3 dimensional model based coding system, comprising: means for extracting the texture face image from the face picture; means for determining feature elements of the texture face image from the face picture based on a knowledge based feature extraction algorithm; means for providing the 3 dimensional standard face image; means for matching a contour of the 3 dimensional standard face image to that of the texture face image based on a scale factor of the texture face image to thereby generate a contour matched standard face image and contour matched feature elements, wherein the scale factor is determined in consideration of a horizontal-to-vertical ratio and a diagonal ratio of the texture face image; means for matching feature elements of the contour matched standard face image to feature elements of the texture face image to thereby generate a feature matched standard face image; and means for mapping texture data of the texture face image on the feature matched standard face image.
The above and other objects and features of the present invention will become apparent from the following description of preferred embodiments given in conjunction with the accompanying drawings, in which:
Fig. 1 shows a block diagram of a face image mapping apparatus in accordance with a preferred embodiment of the present invention; Fig. 2A represents a texture face image of a person; Fig. 2B provides a 3 dimensional standard face image; Figs. 3A to 3C illustrate a face contour matching process by using a vertical-to-horizontal ratio and a diagonal ratio of the texture face image; and Fig. 4 shows a feature elements extraction process of a texture face picture.
Referring to Fig. 1, there is provided a block diagram of a face image mapping apparatus 100 in accordance with a preferred embodiment of the present invention, wherein a face image mapping process is performed at a receiving end of a 3 dimensional model based coding system in order to generate a texture mapped face image by mapping a 3 dimensional standard face image to a texture face image of a person. The face image mapping apparatus 100 comprises a feature point extraction block 110, a feature element extraction block 120, a face contour matching block 130, a standard face image provision block 140, a feature element matching block 150 and a texture mapping block 160.
A face picture of a person taken with, e.g., a CCD (charge coupled device) digital camera (not shown), is applied to the feature point extraction block 110 and the feature element extraction block 120, wherein a background part of the face picture is represented by "0" and a face part of the face picture is represented by pixel values ranging from "1" to "255".
The feature point extraction block 110 extracts a texture face image 210 shown in Fig. 2A from the face picture. The texture face image 210 contains texture data represented by pixel values ranging from "1" to "255", a contour of the face 210-2 and a plurality of feature points, wherein 6 exemplary feature points P1 to P6 are depicted in Fig. 2A. To be more specific, P1 is an upper-most point, P2 is a left-most point, P3 is a lower-most point, P4 is a right-most point, P5 is a center point and P6 is a farthest point from P5, wherein the center point P5 is determined by averaging positions of all pixels on the contour of the face 210-2 and the farthest point P6 is determined in an upper-right region of the face.
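Computing these feature points from the contour is straightforward. The following is a minimal sketch, not part of the patent text, assuming the contour is supplied as an (N, 2) array of (x, y) pixel coordinates with the y axis pointing downward; the function name and array layout are our own illustration:

```python
import numpy as np

def extract_feature_points(contour):
    """Compute P1 to P6 from the face contour, given as an (N, 2)
    array of (x, y) pixel positions, y increasing downward."""
    x, y = contour[:, 0], contour[:, 1]
    p1 = contour[np.argmin(y)]        # upper-most point P1
    p2 = contour[np.argmin(x)]        # left-most point P2
    p3 = contour[np.argmax(y)]        # lower-most point P3
    p4 = contour[np.argmax(x)]        # right-most point P4
    p5 = contour.mean(axis=0)         # center P5: average of all contour pixels
    # P6: the point farthest from P5, searched in the upper-right region
    upper_right = contour[(x > p5[0]) & (y < p5[1])]
    p6 = upper_right[np.argmax(np.linalg.norm(upper_right - p5, axis=1))]
    return p1, p2, p3, p4, p5, p6
```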
Information on the contour of the face 210-2 and the feature points P1 to P6 is provided from the feature point extraction block 110 to the face contour matching block 130; and the texture data is provided to the texture mapping block 160. And, the standard face image provision block 140 provides a standard face image 220 shown in Fig. 2B, its feature points Q1 to Q6 and positions of feature elements stored therein to the face contour matching block 130, wherein Q1 to Q6 respectively correspond to P1 to P6.
The standard face image 220 is stored in the form of 3 dimensional computer graphics implemented by connecting a multiplicity of polygons. In order to match the standard face image 220 with the contour of the face 210-2, positions of vertices of the polygons are provided as 2 dimensional positions (x, y) with their z positions being set to "0", even though each vertex of the polygons has a 3 dimensional position (x, y, z).
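By way of illustration, the sketch below holds a toy wireframe (far fewer polygons than a real standard face model, and with coordinates invented for the example) as an array of 3 dimensional vertex positions plus polygon index triples, and derives the 2 dimensional positions used for matching by zeroing the z components:

```python
import numpy as np

# Toy wireframe: one (x, y, z) row per vertex, one index triple per polygon.
vertices = np.array([[ 0.0,  1.0, 0.30],   # forehead
                     [-0.7,  0.2, 0.10],   # left cheek
                     [ 0.7,  0.2, 0.10],   # right cheek
                     [ 0.0, -1.0, 0.40]])  # chin
polygons = [(0, 1, 2), (1, 3, 2)]

# For contour matching, each vertex is treated as a 2 dimensional
# position (x, y) with its z component set to "0".
vertices_2d = vertices.copy()
vertices_2d[:, 2] = 0.0
```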
The face contour matching block 130 performs a face contour matching process on the contour of the face 210-2 and the standard face image 220 as shown in Figs. 3A to 3C.
Referring to Fig. 3A, the standard face image 220 is overlapped with the contour of the face 210-2. The face contour matching block 130 determines a displacement between P5 of the contour of the face 210-2 and Q5 on the standard face image 220 and displaces the standard face image 220 by the determined displacement as shown in Fig. 3B. As a result, the position of P5 of the contour of the face 210-2 coincides with that of Q5 on the standard face image 220, to thereby generate a displaced standard face image 220-2, displaced feature points Q1' to Q6' and positions of displaced feature elements.
Thereafter, a scale factor SF is calculated by using a vertical-to-horizontal ratio R1 and a diagonal ratio R2 of the texture face image 210, as follows:

    SF = ( |P1P5| / |P4P5| + tan(∠P4P5P6) ) / 2        Eq. (1)

wherein |P4P5| denotes a distance between P4 and P5; |P1P5|, a distance between P1 and P5; and tan(∠P4P5P6), a slope of a line connecting P5 to P6, as is shown in Fig. 2A.
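In code, Eq. (1) reduces to a few arithmetic operations on the feature points. A minimal sketch, assuming P1, P4, P5 and P6 are given as (x, y) coordinate pairs and that P4 lies horizontally from P5, so the slope of the line P5-P6 equals tan(∠P4P5P6); the function name is ours:

```python
import numpy as np

def scale_factor(p1, p4, p5, p6):
    """SF per Eq. (1): the average of the vertical-to-horizontal
    ratio R1 and the diagonal ratio R2 of the texture face image."""
    p1, p4, p5, p6 = (np.asarray(p, dtype=float) for p in (p1, p4, p5, p6))
    r1 = np.linalg.norm(p1 - p5) / np.linalg.norm(p4 - p5)  # |P1P5| / |P4P5|
    r2 = abs((p6[1] - p5[1]) / (p6[0] - p5[0]))             # slope of line P5-P6
    return (r1 + r2) / 2.0
```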
By scaling with the scale factor SF, the standard face image 220 is resized so that the shape thereof becomes the same as that of the texture face image 210. And, the face contour matching block 130 moves the respective displaced feature points Q1' to Q4' and Q6' on the displaced standard face image 220-2 to positions of the corresponding feature points P1 to P4 and P6 on the contour of the face 210-2 so that the contour of the resized standard face image is completely matched with the contour of the face 210-2, as is shown in Fig. 3C, to thereby generate a contour matched standard face image 220-4.
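The displacement and scaling of Figs. 3A to 3C amount to a translation followed by a uniform scaling about P5. A hedged sketch, assuming vertices_2d holds the (x, y) vertex positions as a numpy array; the final snapping of Q1' to Q4' and Q6' onto P1 to P4 and P6 is only indicated in the docstring, since it is a local warp of nearby vertices:

```python
def contour_match(vertices_2d, q5, p5, sf):
    """Translate the standard face so that Q5 lands on P5 (Fig. 3B),
    then scale it about P5 by the scale factor SF (Fig. 3C).  Moving
    the remaining displaced feature points onto P1-P4 and P6 would
    follow as a local adjustment of the surrounding vertices."""
    displaced = vertices_2d + (p5 - q5)   # displacement step
    return p5 + sf * (displaced - p5)     # scaling step about P5
```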
Furthermore, the face contour matching block 130 updates the positions of the feature elements of the contour matched standard face image 220-4 based on SF and the displacement to thereby generate positions of modified feature elements of the contour matched standard face image 220-4. The contour matched standard face image 220-4 and the modified positions of the feature elements are provided to the feature element matching block 150.
Meanwhile, the feature element extraction block 120 receives the face picture and extracts locations of feature elements of the face picture by using a knowledge based feature extraction algorithm. First, edges of the face picture are extracted based on a predetermined edge extraction scheme to thereby generate an edge-detected face image 210-4 shown in Fig. 4.
Then, the feature element extraction block 120 scans the edge-detected face image 210-4 on a pixel-by-pixel basis from a most upper-left pixel to a most lower-right pixel. While scanning the edge-detected face image 210-4, the feature element extraction block 120 detects feature regions, wherein pixels included in a feature region have considerably different pixel values from those of adjacent pixels.
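The patent does not prescribe a particular edge operator, so the sketch below stands in a Sobel gradient magnitude as the predetermined edge extraction scheme and groups the flagged pixels into feature regions; the threshold value is an arbitrary assumption of ours:

```python
import numpy as np
from scipy import ndimage

def find_feature_regions(face_picture, edge_threshold=80.0):
    """Edge-detect the face picture and group pixels whose values
    differ strongly from their neighbours into labelled regions."""
    img = face_picture.astype(float)
    edges = np.hypot(ndimage.sobel(img, axis=1),   # horizontal gradient
                     ndimage.sobel(img, axis=0))   # vertical gradient
    labels, count = ndimage.label(edges > edge_threshold)
    return edges, labels, count
```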
Thereafter, representative horizontal and vertical positions of the feature regions are detected, wherein a representative horizontal position of a feature region is defined as a central horizontal position of the feature region and a representative vertical position of a feature region is defined as a central vertical position of the feature region, in accordance with the present invention. Referring to Fig. 4, 3 representative horizontal positions, i.e., X1 to X3, and 5 representative vertical positions, i.e., Y1 to Y5, are detected.
The knowledge based feature extraction algorithm is as follows:
Knowledge 1: A row having a largest sum of pixel values corresponds to eyebrows or eyes. (Y1 or Y2 corresponds to the eyebrows or the eyes.)

Knowledge 2: A vertical distance from the eyes to a nose is longer than a vertical distance from the nose to a mouth.

Knowledge 3: An edge-detected face image is horizontally symmetrical centering on the nose. (X2 - X1 = X3 - X2)

Knowledge 4: The left eye lies left to the nose and the nose lies left to the right eye. (X1 < X2 < X3)

Knowledge 5: The eyebrows lie above the eyes, the eyes lie above the nose, the nose lies above the mouth and the mouth lies above the chin. (Y1 < Y2 < Y3 < Y4 < Y5)

Knowledge 6: The ratio of the horizontal distance from one eye to the nose to the vertical distance from the eyes to the mouth is approximately 1:1.5. (X1X2 : Y2Y4 ≈ 1 : 1.5)

By using the above knowledge, X1 to X3 of Fig. 4 are selected to correspond to the positions of a left eye, a nose and a right eye, respectively; and Y1 to Y5, to those of eyebrows, eyes, a nose, a mouth and a chin, respectively. Positions of the determined feature elements of the edge-detected face image 210-4 are provided to the feature element matching block 150 as positions of the feature elements of the face.
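These rules translate directly into ordering and ratio checks. The sketch below is one hedged reading: Knowledge 1 is assumed to have been applied already when the candidate rows were picked, the candidates come from the representative positions above, and the tolerance is an assumption of ours:

```python
def label_features(xs, ys, tol=0.25):
    """Order three candidate columns and five candidate rows, verify
    Knowledge 2 to 6, and label the resulting feature positions."""
    x1, x2, x3 = sorted(xs)                                  # Knowledge 4
    y1, y2, y3, y4, y5 = sorted(ys)                          # Knowledge 5
    assert (y3 - y2) > (y4 - y3)                             # Knowledge 2
    assert abs((x2 - x1) - (x3 - x2)) <= tol * (x3 - x1)     # Knowledge 3
    assert abs((y4 - y2) / (x2 - x1) - 1.5) <= tol * 1.5     # Knowledge 6
    return {"left eye": x1, "nose column": x2, "right eye": x3,
            "eyebrows": y1, "eyes": y2, "nose row": y3,
            "mouth": y4, "chin": y5}
```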
At the feature element matching block 150, the feature elements of the contour matched standard face image 220-4 are matched with the corresponding feature elements of the face. This feature element matching process is performed based on the positions of the feature elements of the face and the positions of the contour matched feature elements on the contour matched standard face image 220-4. As a result, the positions of the contour matched feature elements on the contour matched standard face image 220-4 can be slightly adjusted in order to be exactly matched with the corresponding feature elements of the face, to thereby generate a feature matched standard face image.
The feature element matching block 150 provides the feature matched standard face image to the texture mapping block 160. The texture mapping block 160 maps the polygons of the feature matched standard face image to corresponding regions of the texture face image 210 provided from the feature point extraction block 110, wherein a polygon and its corresponding region lie at a same position. Then, texture data of the regions of the texture face image 210 is mapped on corresponding polygons of the feature matched standard face image to generate a texture mapped face image.
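Because a polygon and its region are co-located, the mapping itself can be done with a per-polygon point-in-triangle test. A sketch assuming a grey-scale texture image and triangular polygons given by their 2 dimensional vertex coordinates (all names are ours, not the patent's):

```python
import numpy as np

def collect_polygon_textures(texture, triangles):
    """Gather, for each triangle of the feature matched standard face
    image, the texture pixels of its co-located region via barycentric
    coordinates."""
    h, w = texture.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    patches = []
    for tri in triangles:
        a, b, c = np.asarray(tri, dtype=float)
        basis = np.column_stack([b - a, c - a])       # 2x2 triangle basis
        uv = np.linalg.solve(basis, (pts - a).T).T    # barycentric coords
        inside = (uv[:, 0] >= 0) & (uv[:, 1] >= 0) & (uv.sum(axis=1) <= 1)
        patches.append(texture.ravel()[inside])       # the region's texture data
    return patches
```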
As described above, the face contours of the texture face image and the standard face image are matched with each other by using a diagonal ratio of the texture face image as well as a vertical-to-horizontal ratio thereof. Moreover, the positions of the feature elements of the texture face image are detected based on the knowledge based feature extraction algorithm. Accordingly, the face contour and feature element matching process is performed more efficiently in accordance with the present invention.
While the present invention has been shown and described with reference to the particular embodiments, it will be apparent to those skilled in the art that many changes and modifications may be made without departing from the scope of the invention as defined in the appended claims.

Claims (21)

Claims:
1. A method, for use in a 3 dimensional model based coding system, for mapping a 3 dimensional standard face image to a texture face image, wherein a face picture is provided to the 3 dimensional model based coding system, comprising the steps of:
(a) obtaining the texture face image from the face picture; (b) providing the 3 dimensional standard face image; (c) resizing the 3 dimensional standard face image based on a scale factor of the texture face image to thereby generate a contour matched standard face image and contour matched feature elements, wherein the scale factor is determined in consideration of a horizontal-to-vertical ratio and a diagonal ratio of the texture face image; (d) repositioning the contour matched feature elements of the contour matched standard face image based on feature elements of the texture face image to thereby generate a feature matched standard face image; and (e) mapping texture data of the texture face image on the feature matched standard face image.
2. The method of claim 1, wherein the step (a) includes the steps of:
(a1) extracting the texture face image from the face picture, wherein the texture face image contains the texture data and a contour of the texture face image; and (a2) providing a plurality of feature points of the texture face image, wherein the feature points are an upper-most point, a left-most point, a lower-most point, a right-most point, a center point and a farthest point from the center point.
3. The method of claim 2, wherein the center point is obtained by averaging positions of all the pixels on the contour of the texture face image and the farthest point is obtained in the upper-right region of the texture face image.
4. The method of claim 3, wherein the step (b) includes the steps of:
(b1) offering the 3 dimensional standard face image, wherein the 3 dimensional standard face image is implemented by connecting a multiplicity of polygons and positions of vertices of the polygons are offered as 2 dimensional positions with their z positions being set to "0"; (b2) providing a plurality of feature points of the 3 dimensional standard face image, wherein each of the feature points of the 3 dimensional standard face image corresponds to a feature point of the texture face image; and (b3) providing positions of feature elements of the 3 dimensional standard face image.
5. The method of claim 4, wherein the step (c) includes the steps of:
(c1) overlapping the 3 dimensional standard face image with the texture face image; (c2) determining a displacement between a center point of the texture face image and that of the 3 dimensional standard face image; (c3) displacing the 3 dimensional standard face image by the determined displacement, thereby generating a displaced standard face image, displaced feature elements and displaced feature points; and (c4) resizing the displaced standard face image until a contour of the resized standard face image coincides with that of the texture face image to thereby generate the contour matched standard face image and the contour matched feature elements.
6. The method of claim 5, wherein the step (c4) contains the steps of:
(c41) calculating the vertical-to-horizontal ratio of the texture face image; (c42) calculating the diagonal ratio of the texture face image; (c43) determining the scale factor by averaging the vertical-to-horizontal ratio and the diagonal ratio; (c44) scaling the displaced standard face image based on the scale factor to thereby generate a scaled standard face image, scaled feature elements and scaled feature points; and (c45) resizing the scaled standard face image by matching each of the scaled feature points to a corresponding feature point of the texture face image to thereby generate the contour matched standard face image and the contour matched feature elements thereof.
7. The method of claim 6, wherein the vertical-to-horizontal ratio is obtained by dividing a distance from the center point of the texture face image to the upper-most point of the texture face image by a distance from the center point of the texture face image to the right-most point of the texture face image and the diagonal ratio is a slope of a line connecting the center point of the texture face image to the farthest point of the texture face image.
8. The method of claim 7, wherein the step (d) includes the steps of:
(d1) determining the feature elements of the texture face image from the face picture based on a knowledge based feature extraction algorithm; (d2) detecting positions of the contour matched feature elements of the contour matched standard face image; and (d3) repositioning each of the contour matched feature elements of the contour matched standard face image to a position of a corresponding feature element of the texture face image to thereby generate the feature matched standard face image.
9. The method of claim 8, wherein the step (d1) includes the steps of:
(d11) detecting edges of the texture face image, thereby generating an edge detected face image; (d12) scanning the edge detected face image on a pixel by pixel basis; (d13) finding feature regions while executing the step (d12); (d14) determining each of the feature regions as a corresponding feature element based on the knowledge based feature extraction algorithm; and (d15) obtaining a position of each corresponding feature element.
10. The method of claim 9, wherein the feature regions in the texture face image include pixels having prominently different pixel values from those of adjacent pixels.
11. An apparatus, for use in a 3 dimensional model based coding system, for mapping a 3 dimensional standard face image to a texture face image, wherein a face picture is provided to the 3 dimensional model based coding system, comprising:
means for extracting the texture face image from the face picture; means for determining feature elements of the texture face image from the face picture based on a knowledge based feature extraction algorithm; means for providing the 3 dimensional standard face image; means for matching a contour of the 3 dimensional standard face image to that of the texture face image based on a scale factor of the texture face image to thereby generate a contour matched standard face image and contour matched feature elements, wherein the scale factor is determined in consideration of a horizontal-to-vertical ratio and a diagonal ratio of the texture face image; means for matching feature elements of the contour matched standard face image to feature elements of the texture face image to thereby generate a feature matched standard face image; and means for mapping texture data of the texture face image on the feature matched standard face image.
12. The apparatus of claim 11, wherein the extracting means includes: means for extracting the texture face image from the face picture, wherein the texture face image contains the texture data and the contour of the texture face image; and means for providing a plurality of feature points of the texture face image, wherein the feature points are an upper-most point, a left-most point, a lower-most point, a right-most point, a center point and a farthest point from the center point.
13. The apparatus of claim 12, wherein the center point is obtained by averaging positions of all the pixels on the contour of the texture face image and the farthest point is obtained in the upper-right region of the texture face image.
14. The apparatus of claim 13, wherein the determining means includes: means for detecting edges of the texture face image, thereby generating an edge detected face image; means for scanning the edge detected face image on a pixel by pixel basis; means for finding feature regions while executing scanning; means for determining each of the feature regions as a corresponding feature element based on the knowledge based feature extraction algorithm; and means for obtaining a position of each corresponding feature element.
15. The apparatus of claim 14, wherein the 3 dimensional standard face image providing means includes: means for offering the 3 dimensional standard face image, wherein the 3 dimensional standard face image is implemented by connecting a multiplicity of polygons and positions of vertices of the polygons are offered as 2 dimensional positions with their z positions being set to "0"; means for providing a plurality of feature points of the 3 dimensional standard face image, wherein each of the feature points of the 3 dimensional standard face image corresponds to a feature point of the texture face image; and means for providing positions of feature elements of the 3 dimensional standard face image.
16. The apparatus of claim 15, wherein the contour matching means includes: means for overlapping the 3 dimensional standard face image with the texture face image; means for determining a displacement between a center point of the texture face image and that of the 3 dimensional standard face image; means for displacing the 3 dimensional standard face image by the determined displacement, thereby generating a displaced standard face image, displaced feature elements and displaced feature points; and means for resizing the displaced standard face image until a contour of the resized standard face image coincides with that of the texture face image to thereby generate the contour matched standard face image and the contour matched feature elements.
17. The apparatus of claim 16, wherein the resizing means contains: means for calculating the vertical-to-horizontal ratio of the texture face image; means for calculating the diagonal ratio of the texture face image; means for determining the scale factor by averaging the vertical-to-horizontal ratio and the diagonal ratio; means for scaling the displaced standard face image based on the scale factor to thereby generate a scaled standard face image, reformed feature elements and reformed feature points; and means for resizing the reformed standard face image by matching each of the reformed feature points to a corresponding feature point of the texture face image to thereby generate the contour matched standard face image and update positions of the feature elements thereof.
18. The apparatus of claim 17, wherein the vertical-to-horizontal ratio is obtained by dividing a distance from the center point of the texture face image to the upper-most point of the texture face image by a distance from the center point of the texture face image to the right-most point of the texture face image and the diagonal ratio is a slope of a line connecting the center point of the texture face image to the farthest point of the texture face image.
19. The apparatus of claim 18, wherein the feature element matching means repositions each of the feature elements of the contour matched standard face image to a position of a corresponding feature element of the texture face image to thereby generate the feature matched standard face image.
20. A method, for use in a 3 dimensional model based coding system, for mapping a 3 dimensional standard face image to a texture face image, substantially as herein described with reference to or as shown in Figures 1 to 4 of the accompanying drawings.
21. An apparatus, for use in a 3 dimensional model based coding system, for mapping a 3 dimensional standard face image to a texture face image, constructed and arranged substantially as herein described with reference to or as shown in Figures 1 to 4 of the accompanying drawings.
GB9822138A 1998-04-24 1998-10-09 Face image mapping Withdrawn GB2336735A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1019980014640A KR100281965B1 (en) 1998-04-24 1998-04-24 Face Texture Mapping Method of Model-based Coding System
KR1019980019414A KR100292238B1 (en) 1998-05-28 1998-05-28 Method for matching of components in facial texture and model images

Publications (2)

Publication Number Publication Date
GB9822138D0 (en) 1998-12-02
GB2336735A (en) 1999-10-27

Family

ID=26633599

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9822138A Withdrawn GB2336735A (en) 1998-04-24 1998-10-09 Face image mapping

Country Status (1)

Country Link
GB (1) GB2336735A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992002000A1 (en) * 1990-07-17 1992-02-06 British Telecommunications Public Limited Company A method of processing an image

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1320074A2 (en) * 2001-12-13 2003-06-18 Samsung Electronics Co., Ltd. Method and apparatus for generating texture for 3D facial model
EP1320074A3 (en) * 2001-12-13 2004-02-11 Samsung Electronics Co., Ltd. Method and apparatus for generating texture for 3D facial model
US7139439B2 (en) 2001-12-13 2006-11-21 Samsung Electronics Co., Ltd. Method and apparatus for generating texture for 3D facial model

Also Published As

Publication number Publication date
GB9822138D0 (en) 1998-12-02


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)