WO2003017206A9 - Systeme et procede de modelisation tridimensionnelle automatique - Google Patents

Systeme et procede de modelisation tridimensionnelle automatique

Info

Publication number
WO2003017206A9
WO2003017206A9 (PCT/US2002/025933)
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
model
image
determining
Prior art date
Application number
PCT/US2002/025933
Other languages
English (en)
Other versions
WO2003017206A1 (fr)
Inventor
Harvill Young
Original Assignee
Pulse Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/219,041 external-priority patent/US7123263B2/en
Application filed by Pulse Entertainment Inc filed Critical Pulse Entertainment Inc
Priority to CA2457839A priority Critical patent/CA2457839C/fr
Priority to EP02757127A priority patent/EP1425720A1/fr
Priority to CN028203321A priority patent/CN1628327B/zh
Priority to MXPA04001429A priority patent/MXPA04001429A/es
Priority to KR1020047002201A priority patent/KR100720309B1/ko
Priority to JP2003522039A priority patent/JP2005523488A/ja
Publication of WO2003017206A1 publication Critical patent/WO2003017206A1/fr
Publication of WO2003017206A9 publication Critical patent/WO2003017206A9/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present invention relates to 3D modeling systems and methods and, more particularly, to a system and method that merges automatic image-based model generation techniques with interactive real-time character orientation techniques to provide rapid creation of virtual 3D personalities.
  • Typical systems also required that, once a model was created by a skilled animator, the same animator animate the various gestures desired for the model. For example, the animator would create the animation of a smile, a hand wave or speaking, which would then be incorporated into the model to provide the model with the desired gestures.
  • the process to generate the behavior/gesture data is slow and expensive and requires a skilled animator. It is desirable to provide an automatic mechanism for generating gestures and behaviors for models without the assistance of a skilled animator. It is to these ends that the present invention is directed.
  • the invention utilizes image processing techniques, statistical analysis and 3D geometry deformation to allow photo-realistic 3D models of objects, such as the human face, to be automatically generated from an image (or from multiple images). For example, for the human face, facial proportions and feature details from a photograph (or series of photographs) are identified and used to generate an appropriate 3D model. Image processing and texture mapping techniques also optimize how the photograph(s) is used as detailed, photo-realistic texture for the 3D model.
  • a gesture of the person may be captured and abstracted so that it can be applied to any other model.
  • the animated smile of a particular person may be captured.
  • the smile may then be converted into feature space to provide an abstraction of the gesture.
  • the abstraction of the gesture (e.g., the movements of the different portions of the model)
  • the gesture may then be used for any other model.
  • the system permits the generation of a gesture model that may be used with other models.
  • a method for generating a three dimensional model of an object from an image comprises determining the boundary of the object to be modeled and determining the location of one or more landmarks on the object to be modeled. The method further comprises determining the scale and orientation of the object in the image based on the location of the landmarks, aligning the image of the object with the landmarks with a deformation grid, and generating a 3D model of the object based on the mapping of the image of the object to the deformation grid.
  • a computer implemented system for generating a three dimensional model of an image is provided.
  • the system comprises a three dimensional model generation module further comprising instructions that receive an image of an object and instructions that automatically generate a three dimensional model of the object.
  • the system further comprises a gesture generation module further comprising instructions for generating a feature space and instructions for generating a gesture object corresponding to a gesture of the object so that the gesture behavior may be applied to another model of an object.
  • a method for automatically generating an automatic gesture model comprises receiving an image of an object performing a particular gesture and determining the movements associated with the gesture from the movement of the object to generate a gesture object, wherein the gesture object further comprises a coloration change variable storing the change of coloration that occurs during the gesture, a two dimensional change variable storing the change of the surface that occurs during the gesture, and a three dimensional change variable storing the change of the vertices associated with the object during the gesture.
  • a gesture object data structure that stores data associated with a gesture for an object is provided.
  • the gesture object comprises a texture change variable storing changes in coloration of a model during a gesture, a texture map change variable storing changes in the surface of the model during the gesture, and a vertices change variable storing changes in the vertices of the model during the gesture, wherein the texture change variable, the texture map change variable and the vertices change variable permit the gesture to be applied to another model having a texture and vertices.
  • the gesture object data structure stores its data in a vector space where coloration, surface motion and 3D motion may be used by many individual instances of the model.
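  • One way to picture such a gesture object is the sketch below. It reuses the variable names given later in this description (stDeltaChange, VertDeltaChange, DeltaMap, scalerArray); the array shapes, the dataclass layout and the linear apply step are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GestureObject:
    """Abstracted gesture, independent of the model it was captured from."""
    DeltaMap: np.ndarray         # (tex_h, tex_w, 3)  coloration change during the gesture
    stDeltaChange: np.ndarray    # (n_vertices, 2)    change in texture (s, t) coords, i.e. surface motion
    VertDeltaChange: np.ndarray  # (n_vertices, 3)    change in 3D vertex positions
    scalerArray: np.ndarray      # (n_vertices, 3)    scale mapping between feature space and a model space

def apply_gesture(vertices, uvs, texture, g, amount=1.0):
    """Apply a captured gesture to another model that has vertices, UVs and a texture."""
    new_vertices = vertices + amount * g.scalerArray * g.VertDeltaChange
    new_uvs = uvs + amount * g.stDeltaChange
    new_texture = np.clip(texture + amount * g.DeltaMap, 0.0, 255.0)
    return new_vertices, new_uvs, new_texture
```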
  • Figure 1 is a flowchart describing a method for generating a 3D model of a human face
  • Figure 2 is a diagram illustrating an example of a computer system which may be used to implement the 3D modeling method in accordance with the invention
  • Figure 3 is a block diagram illustrating more details of the 3D model generation system in accordance with the invention.
  • Figure 4 is an exemplary image of a person's head that may be loaded into the memory of a computer during an image acquisition process
  • Figure 5 illustrates the exemplary image of Figure 4 with an opaque background after having processed the image with a "seed fill" operation
  • Figure 6 illustrates the exemplary image of Figure 5 having dashed lines indicating particular bound areas about the locations of the eyes;
  • Figure 7 illustrates the exemplary image of Figure 6 with the high contrast luminance portion of the eyes identified by dashed lines;
  • Figure 8 is an exemplary diagram illustrating various landmark location points for a human head
  • Figure 9 illustrates an example of a human face 3D model in accordance with the invention.
  • Figures 10A-10D illustrate respective deformation grids that can be used to generate a 3D model of a human head
  • Figure 10E illustrates the deformation grids overlaid upon one another
  • Figure 11 is a flowchart illustrating the automatic gesture behavior generation method in accordance with the invention.
  • Figures 12A and 12B illustrate exemplary pseudo-code for performing the image processing techniques of the invention
  • Figures 13A and 13B illustrate an exemplary work flow process for automatically generating a 3D model in accordance with the invention
  • Figures 14A and 14B illustrate an exemplary pseudo-code for performing the automatic gesture behavior model in accordance with the invention
  • Figure 15 illustrates an example of a base 3D model for a first model, Kristen
  • Figure 16 illustrates an example of a base 3D model for a second model, Ellie
  • Figure 17 is an example of the first model in a neutral gesture
  • Figure 18 is an example of the first model in a smile gesture
  • Figure 19 is an example of a smile gesture map generated from the neutral gesture and the smile gesture of the first model
  • Figure 20 is an example of the feature space with both the models overlaid over each other;
  • Figure 21 is an example of a neutral gesture for the second model.
  • Figure 22 is an example of the smile gesture, generated from the first model, being applied to the second model to generate a smile gesture in the second model.
  • 3D models of the human face and gestures associated with the human face can be generated using the principles and techniques described herein; the following is merely exemplary of a particular application of the invention, and the invention is not limited to the facial models described herein.
  • Figure 1 is a flow chart describing a preferred algorithm for generating a 3D model of a human face.
  • an image acquisition process (Step 1) is used to load a photograph(s) (or other image) of a human face (for example, a "head shot") into the memory of a computer.
  • images may be loaded as JPEG images, however, other image type formats may be used without departing from the invention.
  • Images can be loaded from a diskette, downloaded from the Internet, or otherwise loaded into memory using known techniques so that the image processing techniques of the invention can be performed on the image in order to generate a 3D model.
  • the proper orientation of the image should be determined by locating and grading appropriate landmark points 10. Determining the image orientation allows a more realistic rendering of the image onto the deformation grids. Locating the appropriate landmark points 10 will now be described in detail.
  • a "seed fill" operation may preferably be performed (Step 2) on the image to eliminate the variable background of the image so that the boundary of the head (in the case of a face) can be isolated on the image.
  • Figure 4 is an exemplary image 20 of a person's head that may be loaded into the memory of the computer during the image acquisition process (Step 1, Figure 1).
  • a "seed fill" operation (Step 2, Figure 1) is a well-known recursive paintfill operation that is accomplished by identifying one or more points 22 in the background 24 ofthe image 20 based on, for example, color and luminosity ofthe point(s) 22 and expand a paintfill zone 26 outwardly from the point(s) 22 where the color and luminosity are similar.
  • the "seed fill” operation successfully replaces the color and luminescent background 24 ofthe image with an opaque background so that, the boundary ofthe head can be more easily determined.
  • the boundary of the head 30 can be determined (Step 3), for example, by locating the vertical center of the image (line 32) and integrating across a horizontal area 34 from the centerline 32 (using a non-fill operation) to determine the width of the head 30, and by locating the horizontal center of the image (line 36) and integrating across a vertical area 38 from the centerline 36 (using a non-fill operation) to determine the height of the head 30.
  • in Step 3, a statistically directed linear integration of a field of pixels, whose values differ based on the presence of an object or the presence of a background, is performed. This is shown in Figure 5, which shows the exemplary image 20 of Figure 4 with an opaque background 24.
  • the bounds of the head 30 can be determined by using statistical properties of the height of the head 30 and the known properties of the integrated horizontal area 34 and top of the head 30.
  • the height of the head will be approximately 2/3 of the image height and the width of the head will be approximately 1/3 of the image width.
  • the height of the head may also be 1.5 times the width of the head, which is used as a first approximation.
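  • One plausible reading of this step is sketched below: sum (integrate) non-background pixels along bands around the image centerlines to find the head's extent, and fall back on the stated proportions (width ≈ 1/3 of the image width, height ≈ 1.5 × width) as a first approximation. The band width, thresholds and mask convention are assumptions for illustration.

```python
import numpy as np

def head_bounds(background_mask):
    """Estimate the head bounding box from a background mask (True = background)."""
    h, w = background_mask.shape
    fg = ~background_mask                      # foreground (head) pixels
    band = max(h, w) // 20                     # half-width of the integration band

    # Width: integrate foreground pixels in a horizontal band around the vertical centerline.
    row_band = fg[h // 2 - band: h // 2 + band, :]
    cols = np.where(row_band.sum(axis=0) > band)[0]

    # Height: integrate foreground pixels in a vertical band around the horizontal centerline.
    col_band = fg[:, w // 2 - band: w // 2 + band]
    rows = np.where(col_band.sum(axis=1) > band)[0]

    if len(cols) and len(rows):
        return cols[0], rows[0], cols[-1], rows[-1]   # left, top, right, bottom

    # First approximation from the stated statistical proportions.
    width = w // 3
    height = int(1.5 * width)
    left = (w - width) // 2
    top = h // 6
    return left, top, left + width, top + height
```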
  • the location of the eyes 40 can be determined (Step 4). Since the eyes 40 are typically located on the upper half of the head 30, a statistical calculation can be used and the head bounds can be divided into an upper half 42 and a lower half 44 to isolate the eye bound areas 46a, 46b. The upper half of the head bounds 42 can be further divided into right and left portions 46a, 46b to isolate the left and right eyes 40a, 40b, respectively. This is shown in detail in Figure 6 which shows the exemplary image 20 of Figure 4 with dashed lines indicating the particular bound areas.
  • each eye 40a, 40b can be located (Step 5) by identifying a circular region 48 of high contrast luminance within the respective eye bounds 46a, 46b.
  • This operation can be recursively performed outwardly from the centermost point 48 over the bounded area 46a, 46b and the results can be graded to determine the proper bounds of the eyes 40a, 40b.
  • Figure 7 shows the exemplary image of Figure 6 with the high contrast luminance portion of the eyes identified by dashed lines.
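  • A sketch of one way to implement this search is shown below, assuming a grayscale luminance image and the head bounds from the previous step: the upper half of the head bounds is split into left and right eye regions, and each candidate point is graded by the luminance contrast inside a small circular window. The window radius, the standard-deviation score and the brute-force scan are illustrative assumptions.

```python
import numpy as np

def locate_eyes(luminance, head_box, radius=6):
    """Return (x, y) centers for the left and right eyes inside the upper head bounds."""
    left, top, right, bottom = head_box
    mid_x = left + (right - left) // 2
    mid_y = top + (bottom - top) // 2
    eye_regions = [(left, top, mid_x, mid_y),    # left eye bounds  (46a)
                   (mid_x, top, right, mid_y)]   # right eye bounds (46b)

    ys, xs = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xs ** 2 + ys ** 2) <= radius ** 2    # circular window

    centers = []
    for x0, y0, x1, y1 in eye_regions:
        best, best_score = None, -1.0
        for y in range(y0 + radius, y1 - radius):
            for x in range(x0 + radius, x1 - radius):
                window = luminance[y - radius:y + radius + 1,
                                   x - radius:x + radius + 1][disk]
                score = window.std()             # high contrast suggests iris/sclera edges
                if score > best_score:
                    best, best_score = (x, y), score
        centers.append(best)
    return centers
```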
  • the scale and orientation of the head 30 can be determined (Step 6) by analyzing a line 50 connecting the eyes 40a, 40b to determine the angular offset of the line 50 from a horizontal axis of the screen.
  • the scale of the head 30 can be derived from the width of the bounds according to the following formula: width of bound / width of model.
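  • A compact sketch of this step, assuming the eye centers and head bounds computed above and a reference model whose head width is known (the function and parameter names are hypothetical):

```python
import math

def head_scale_and_roll(eye_left, eye_right, head_box, model_width):
    """Derive in-plane rotation from the eye line and scale from the head bounds."""
    (xl, yl), (xr, yr) = eye_left, eye_right
    roll_degrees = math.degrees(math.atan2(yr - yl, xr - xl))  # offset from the horizontal axis
    left, top, right, bottom = head_box
    scale = (right - left) / float(model_width)                # width of bound / width of model
    return scale, roll_degrees
```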
  • Preferred landmark points 10 include a) outer head bounds 60a, 60b, 60c; b) inner head bounds 62a, 62b, 62c, 62d; c) right and left eye bounds 64a-d, 64w- z, respectively; d) corners of nose 66a, 66b; and e) corners of mouth 68a, 68b (mouth line), however, those skilled in the art recognize that other landmark points may be used without departing from the invention.
  • Figure 8 is an exemplary representation of the above landmark points shown for the image of Figure 4.
  • the image can be properly aligned with one or more deformation grids (described below) that define the 3D model 70 of the head (Step 7).
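  • One way to realize this alignment, sketched below, is to fit a similarity transform (scale, rotation, translation) that carries the image landmarks onto the corresponding landmark locations of the deformation grids; the least-squares (Procrustes-style) fit is an illustrative choice, since the description only requires that corresponding landmarks be brought into alignment.

```python
import numpy as np

def landmark_alignment(image_landmarks, grid_landmarks):
    """Fit a 2D similarity transform mapping image landmarks onto grid landmarks."""
    src = np.asarray(image_landmarks, dtype=float)
    dst = np.asarray(grid_landmarks, dtype=float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)

    u, s, vt = np.linalg.svd(src_c.T @ dst_c)    # 2x2 cross-covariance
    rotation = vt.T @ u.T                        # optimal rotation (Kabsch)
    scale = s.sum() / (src_c ** 2).sum()
    translation = dst.mean(axis=0) - scale * rotation @ src.mean(axis=0)

    def transform(points):
        p = np.asarray(points, dtype=float)
        return scale * p @ rotation.T + translation
    return transform
```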
  • deformation grids that may be used to define the 3D model 70
  • Figure 9 illustrates an example of a 3D model of a human face generated using the 3D model generation method in accordance with the invention. Now, more details of the 3D model generation system will be described.
  • Figure 2 illustrates an example of a computer system 70 in which the 3D model generation method and gesture model generation method may be implemented.
  • the 3D model generation method and gesture model generation method may be implemented as one or more pieces of software code (or compiled software code) which are executed by a computer system.
  • the methods in accordance with the invention may also be implemented on a hardware device in which the methods are programmed into the hardware device.
  • the computer system 70 shown is a personal computer system.
  • the invention may be implemented on a variety of different computer systems, such as client/server systems, server systems, workstations, etc., and the invention is not limited to implementation on any particular computer system.
  • the illustrated computer system may include a display device 72, such as a cathode ray tube or LCD, a chassis 74 and one or more input/output devices, such as a keyboard 76 and a mouse 78 as shown, which permit the user to interact with the computer system.
  • the user may enter data or commands into the computer system using the keyboard or mouse and may receive output data from the computer system using the display device (visual data) or a printer (not shown), etc.
  • the chassis 74 may house the computing resources of the computer system and may include one or more central processing units (CPU) 80 which control the operation of the computer system as is well known, a persistent storage device 82, such as a hard disk drive, an optical disk drive, a tape drive and the like, that stores the data and instructions executed by the CPU even when the computer system is not supplied with power, and a memory 84, such as DRAM, which temporarily stores data and instructions currently being executed by the CPU and loses its data when the computer system is not being powered, as is well known.
  • the memory may store a 3D modeler 86 which is a series of instructions and data being executed by the CPU 80 to implement the 3D model and gesture generation methods described above. Now, more details of the 3D modeler will be described.
  • Figure 3 is a diagram illustrating more details of the 3D modeler 86 shown in Figure 2.
  • the 3D modeler includes a 3D model generation module 88 and a gesture generator module 90 which are each implemented using one or more computer program instructions.
  • the pseudo-code that may be used to implement each of these modules is shown in Figures 12A - 12B and Figures 14A and 14B.
  • an image of an object such as a human face is input into the system as shown.
  • the image is fed into the 3D model generation module as well as the gesture generation module as shown.
  • the output from the 3D model generation module is a 3D model of the image which has been automatically generated as described above.
  • the output from the gesture generation module is one or more gesture models which may then be applied to and used for any 3D model, including any model generated by the 3D model generation module.
  • the gesture generator is described in more detail below with reference to Figure 11. In this manner, the system permits 3D models of any object to be rapidly generated and implemented. Furthermore, the gesture generator permits one or more gesture models (such as a smile gesture, a hand wave, etc.) to be automatically generated from a particular image. The advantage of the gesture generator is that the gesture models may then be applied to any 3D model. The gesture generator also eliminates the need for a skilled animator to implement a gesture. Now, the deformation grids for the 3D model generation will be described.
  • Figures 10A-10D illustrate exemplary deformation grids that may be used to define a 3D model 70 of a human head.
  • Figure 10A illustrates a bounds space deformation grid 72 which is preferably the innermost deformation grid. Overlaying the bounds space deformation grid 72 is a feature space deformation grid 74 (shown in Figure 10B).
  • An edge space deformation grid 76 (shown in Figure 10C) preferably overlays the feature space deformation grid 74.
  • Figure 10D illustrates a detail deformation grid 78 that is preferably the outermost deformation grid.
  • the grids are preferably aligned in accordance with the landmark locations 10 (shown in Figure 10E) such that the head image 30 will be appropriately aligned with the deformation grids when its landmark locations 10 are aligned with the landmark locations 10 of the deformation grids.
  • a user may manually refine the landmark location precision on the head image (Step 8), for example by using the mouse or other input device to "drag" a particular landmark to a different area on the image 30.
  • the image 30 may be modified with respect to the deformation grids as appropriate (Step 9) in order to properly align the head image 30 with the deformation grids.
  • a new model state can then be calculated, the detail grid 78 can then be detached (Step 10), behaviors can be scaled for the resulting 3D model (Step 11), and the model can be saved (Step 12) for use as a virtual personality.
  • Figure 11 is a flowchart illustrating an automatic gesture generation method 100 in accordance with the invention.
  • the automatic gesture generation results in a gesture object which may then be applied to any 3D model so that a gesture behavior may be rapidly generated and reused with other models.
  • a separate gesture model may be needed for different types of 3D models.
  • a smile gesture may need to be automatically generated for a human male, a human female, a human male child and a human female child in order to make the gesture more realistic.
  • the method begins in step 102 in which a common feature space is generated.
  • the feature space is a common space that is used to store and represent an object image, such as a face, movements of the object during a gesture, and object scalars which capture the differences between different objects.
  • the gesture object to be generated using this method also stores a scalar field variable that stores the mapping between a model space and the feature space that permits transformation of motion and geometry data.
  • the automatic gesture generation method involves using a particular image of an object, such as a face, to generate an abstraction of a gesture of the object, such as a smile, which is then stored as a gesture object so that the gesture object may then be applied to any 3D model.
  • in step 104, the method determines the correlation between the feature space and the image space to determine the texture map changes, which represent changes to the surface movements of the image during the gesture.
  • in step 106, the method updates the texture map from the image (to check the correlation), applies the resultant texture map to the feature space, and generates a variable "stDeltaChange", as shown in the exemplary pseudo-code in Figures 14A and 14B, which stores the texture map changes.
  • in step 108, the method determines the changes in the 3D vertices of the image model during the gesture, which capture the 3D movement that occurs during the gesture.
  • in step 110, the vertex changes are applied to the feature space and are captured in the gesture object in a variable "VertDeltaChange", as shown in Figures 14A and 14B.
  • in step 112, the method determines the texture coloration that occurs during the gesture and applies it to the feature space.
  • the texture coloration is captured in the "DeltaMap" variable in the gesture object.
  • in step 114, the gesture object is generated that includes the "stDeltaChange", "VertDeltaChange" and "DeltaMap" variables, which contain the coloration, 2D and 3D movement that occurs during the gesture.
  • the variables represent only the movement and color changes that occur during a gesture so that the gesture object may then be applied to any 3D model.
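  • The sketch below illustrates how these three variables could be computed as simple differences between a neutral state and a gesture state already mapped into the common feature space; the dict layout, array shapes and helper name are assumptions for illustration, not the pseudo-code of Figures 14A and 14B.

```python
import numpy as np

def build_gesture_deltas(neutral, gesture):
    """Compute the gesture variables from a neutral state and a gesture state.

    `neutral` and `gesture` are dicts with 'texture' (H, W, 3), 'uvs' (n_vertices, 2)
    and 'vertices' (n_vertices, 3) arrays expressed in the common feature space.
    """
    return {
        # Coloration change during the gesture (step 112).
        "DeltaMap": gesture["texture"].astype(np.float32)
                    - neutral["texture"].astype(np.float32),
        # Surface (texture map) motion during the gesture (steps 104-106).
        "stDeltaChange": gesture["uvs"] - neutral["uvs"],
        # 3D vertex motion during the gesture (steps 108-110).
        "VertDeltaChange": gesture["vertices"] - neutral["vertices"],
    }
```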
  • the gesture object distills the gesture that exists in a particular image model into an abstract object that contains the essential elements of the gesture so that the gesture may then be applied to any 3D model.
  • the gesture object also includes a scalar field variable storing the mapping between a feature space of the gesture and a model space of a model to permit transformation of the geometry and motion data.
  • the scalerArray has an entry for each geometry vertex in the Gesture object. Each entry is a 3-dimensional vector that holds the change in scale for that vertex of the Feature level from its undeformed state to the deformed state.
  • the scale is computed per vertex in Feature space by evaluating the scalar change in distance from that vertex to connected vertices.
  • the scalar for a given Gesture vertex is computed by weighted interpolation of that vertex's position when mapped to UV space of a polygon in the Feature level.
  • the shape and size of polygons in the feature level are chosen to match areas of similarly scaled movement. This was determined by analyzing visual flow of typical facial gestures. The above method is shown in greater detail in the pseudo-code shown in Figures 14A and 14B.
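  • A sketch of this computation is shown below: each Feature-level vertex gets a scale equal to the mean ratio of deformed to undeformed distances to its connected vertices, and a Gesture vertex's scalar is then interpolated from the Feature polygon it maps to in UV space. The adjacency map, the use of barycentric weights and the broadcast to 3-vector entries are illustrative assumptions.

```python
import numpy as np

def feature_vertex_scales(undeformed, deformed, adjacency):
    """Per-vertex scale change of the Feature level (undeformed -> deformed state).

    `adjacency` maps a vertex index to the indices of its connected vertices; the
    scalar ratio is broadcast to a 3-vector to match the entries described above.
    """
    scales = np.ones((len(undeformed), 3))
    for v, neighbors in adjacency.items():
        ratios = []
        for n in neighbors:
            d0 = np.linalg.norm(undeformed[v] - undeformed[n])
            d1 = np.linalg.norm(deformed[v] - deformed[n])
            if d0 > 1e-9:
                ratios.append(d1 / d0)
        if ratios:
            scales[v] = np.mean(ratios)
    return scales

def interpolate_scaler(scales, triangle, barycentric_weights):
    """Weighted interpolation of a Gesture vertex's scalar from the Feature polygon
    (here a triangle) it maps to in UV space."""
    i, j, k = triangle
    w0, w1, w2 = barycentric_weights
    return w0 * scales[i] + w1 * scales[j] + w2 * scales[k]
```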
  • Figures 12A-12B and Figures 13A-13B, respectively, contain a sample pseudo-code algorithm and an exemplary work flow process for automatically generating a 3D model in accordance with the invention.
  • the automatically generated model can incorporate built-in behavior animation and interactivity.
  • For example, for the human face, such expressions include gestures, mouth positions for lip syncing (visemes), and head movements.
  • Such behaviors can be integrated with technologies such as automatic lip syncing, text-to-speech, natural language processing, and speech recognition and can trigger or be triggered by user or data driven events.
  • real-time lip syncing of automatically generated models may be associated with audio tracks.
  • real-time analysis of the audio spoken by an intelligent agent can be provided and synchronized head and facial gestures initiated to provide automatic, lifelike movements to accompany speech delivery.
  • virtual personas can be deployed to serve as an intelligent agent that may be used as an interactive, responsive front-end to information contained within knowledge bases, customer resource management systems, and learning management systems, as well as entertainment applications and communications via chat, instant messaging, and e-mail.
  • a gesture being generated from an image of a 3D model and then applied to another model in accordance with the invention will now be described.
  • Figure 15 illustrates an example of a base 3D model for a first model, Kristen.
  • the 3D model shown in Figure 15 has been previously generated as described above using the 3D model generation process.
  • Figure 16 illustrates a second 3D model generated as described above. These two models will be used to illustrate the automatic generation of a smile gesture from an existing model to generate a gesture object and then the application of that generated gesture object to another 3D model.
  • Figure 17 shows an example of the first model in a neutral gesture, while Figure 18 shows an example of the first model in a smile gesture.
  • the smile gesture of the first model is then captured as described above.
  • Figure 19 illustrates an example of the smile gesture map (the graphical version of the gesture object described above) that is generated from the first model based on the neutral gesture and the smile gesture.
  • the gesture map abstracts the gesture behavior of the first model into a series of coloration changes, texture map changes and 3D vertices changes which can then be applied to any other 3D model that has texture maps and 3D vertices. Then, using this gesture map (which includes the variables described above), the gesture object may be applied to another model in accordance with the invention. In this manner, the automatic gesture generation process permits various gestures for a 3D model to be abstracted and then applied to other 3D models.
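  • Continuing the dict-based sketch above, the transfer to a second model could look like the following; the per-vertex scaler, the linear blend and the `amount` parameter are assumptions added for illustration.

```python
import numpy as np

def transfer_gesture(target, deltas, scaler, amount=1.0):
    """Apply gesture deltas to a second model expressed in the same feature space.

    `target` is a dict with 'vertices', 'uvs' and 'texture'; `deltas` holds the
    DeltaMap / stDeltaChange / VertDeltaChange arrays; `scaler` maps feature-space
    motion into the target model's space.
    """
    return {
        "vertices": target["vertices"] + amount * scaler * deltas["VertDeltaChange"],
        "uvs": target["uvs"] + amount * deltas["stDeltaChange"],
        "texture": np.clip(target["texture"] + amount * deltas["DeltaMap"], 0, 255),
    }
```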
  • Figure 20 is an example of the feature space with both models overlaid over each other to illustrate that the feature spaces of the first and second models are consistent with each other.
  • Figure 21 illustrates the neutral gesture of the second model.
  • Figure 22 illustrates the smile gesture (from the gesture map generated from the first model) applied to the second model to provide a smile gesture to that second model, even though the second model does not actually show a smile.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The present invention relates to an automatic three-dimensional modeling system and method that permit a three-dimensional model to be generated from an image or the like. For example, a three-dimensional model of a person's face may be generated automatically. The system and method also permit the automatic generation of gestures/behaviors associated with a three-dimensional model so that the gestures/behaviors may be applied to any other three-dimensional model.
PCT/US2002/025933 2001-08-14 2002-08-14 Systeme et procede de modelisation tridimensionnelle automatique WO2003017206A1 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CA2457839A CA2457839C (fr) 2001-08-14 2002-08-14 Systeme et procede de modelisation tridimensionnelle automatique
EP02757127A EP1425720A1 (fr) 2001-08-14 2002-08-14 Systeme et procede de modelisation tridimensionnelle automatique
CN028203321A CN1628327B (zh) 2001-08-14 2002-08-14 自动三维建模系统和方法
MXPA04001429A MXPA04001429A (es) 2001-08-14 2002-08-14 Sistema y metodo para el modelado tridimensional automatico.
KR1020047002201A KR100720309B1 (ko) 2001-08-14 2002-08-14 자동 3차원 모델링 시스템 및 방법
JP2003522039A JP2005523488A (ja) 2001-08-14 2002-08-14 自動3dモデリングシステム及び方法

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US31238401P 2001-08-14 2001-08-14
US60/312,384 2001-08-14
US21911902A 2002-08-13 2002-08-13
US10/219,041 US7123263B2 (en) 2001-08-14 2002-08-13 Automatic 3D modeling system and method
US10/219,119 2002-08-13
US10/219,041 2002-08-13

Publications (2)

Publication Number Publication Date
WO2003017206A1 WO2003017206A1 (fr) 2003-02-27
WO2003017206A9 true WO2003017206A9 (fr) 2003-10-30

Family

ID=27396614

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/025933 WO2003017206A1 (fr) 2001-08-14 2002-08-14 Systeme et procede de modelisation tridimensionnelle automatique

Country Status (6)

Country Link
EP (1) EP1425720A1 (fr)
JP (3) JP2005523488A (fr)
CN (1) CN1628327B (fr)
CA (2) CA2457839C (fr)
MX (1) MXPA04001429A (fr)
WO (1) WO2003017206A1 (fr)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2875043B1 (fr) * 2004-09-06 2007-02-09 Innothera Sa Lab Dispositif pour etablir une representation tridimensionnelle complete d'un membre d'un patient a partir d'un nombre reduit de mesures prises sur ce membre
ES2284391B1 (es) * 2006-04-19 2008-09-16 Emotique, S.L. Procedimiento para la generacion de imagenes de animacion sintetica.
US20110298799A1 (en) * 2008-06-03 2011-12-08 Xid Technologies Pte Ltd Method for replacing objects in images
CN101609564B (zh) * 2009-07-09 2011-06-15 杭州力孚信息科技有限公司 一种草图式输入的三维网格模型制作方法
CN102496184B (zh) * 2011-12-12 2013-07-31 南京大学 一种基于贝叶斯和面元模型的增量三维重建方法
CN103207745B (zh) * 2012-01-16 2016-04-13 上海那里信息科技有限公司 虚拟化身交互系统和方法
CN105321147B (zh) * 2014-06-25 2019-04-12 腾讯科技(深圳)有限公司 图像处理的方法及装置
JP6489726B1 (ja) * 2017-09-08 2019-03-27 株式会社Vrc 3dデータシステム及び3dデータ処理方法
US10586368B2 (en) 2017-10-26 2020-03-10 Snap Inc. Joint audio-video facial animation system
CN108062785A (zh) * 2018-02-12 2018-05-22 北京奇虎科技有限公司 面部图像的处理方法及装置、计算设备
CN111553983A (zh) * 2020-03-27 2020-08-18 中铁十九局集团第三工程有限公司 还原爆炸现场的三维空间建模方法、装置、设备和介质

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09305798A (ja) * 1996-05-10 1997-11-28 Oki Electric Ind Co Ltd 画像表示装置
JP2915846B2 (ja) * 1996-06-28 1999-07-05 株式会社エイ・ティ・アール通信システム研究所 3次元映像作成装置
US5978519A (en) * 1996-08-06 1999-11-02 Xerox Corporation Automatic image cropping
US6222553B1 (en) * 1997-08-04 2001-04-24 Pixar Animation Studios Hybrid subdivision in computer graphics
JPH11175223A (ja) * 1997-12-11 1999-07-02 Alpine Electron Inc アニメーション作成方法、アニメーション作成装置及び記憶媒体
JPH11219422A (ja) * 1998-02-02 1999-08-10 Hitachi Ltd 顔による個人同定通信方法
US6072496A (en) * 1998-06-08 2000-06-06 Microsoft Corporation Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects
JP3639475B2 (ja) * 1999-10-04 2005-04-20 シャープ株式会社 3次元モデル生成装置および3次元モデル生成方法ならびに3次元モデル生成プログラムを記録した記録媒体

Also Published As

Publication number Publication date
CN1628327A (zh) 2005-06-15
CA2690826C (fr) 2012-07-17
CA2457839C (fr) 2010-04-27
JP2011159329A (ja) 2011-08-18
CA2690826A1 (fr) 2003-02-27
JP2008102972A (ja) 2008-05-01
WO2003017206A1 (fr) 2003-02-27
JP2005523488A (ja) 2005-08-04
EP1425720A1 (fr) 2004-06-09
CN1628327B (zh) 2010-05-26
MXPA04001429A (es) 2004-06-03
CA2457839A1 (fr) 2003-02-27

Similar Documents

Publication Publication Date Title
US7123263B2 (en) Automatic 3D modeling system and method
US20210174072A1 (en) Microexpression-based image recognition method and apparatus, and related device
US7920144B2 (en) Method and system for visualization of dynamic three-dimensional virtual objects
JP2008102972A (ja) 自動3dモデリングシステム及び方法
US6999084B2 (en) Method and apparatus for computer graphics animation utilizing element groups with associated motions
Longhurst et al. A gpu based saliency map for high-fidelity selective rendering
US7200281B2 (en) System and method for image-based surface detail transfer
US7307633B2 (en) Statistical dynamic collisions method and apparatus utilizing skin collision points to create a skin collision response
US20060256112A1 (en) Statistical rendering acceleration
US20070035547A1 (en) Statistical dynamic modeling method and apparatus
JP4842242B2 (ja) キャラクタアニメーション時の皮膚のしわのリアルタイム表現方法及び装置
KR100900823B1 (ko) 캐릭터 애니메이션 시 피부의 주름 실시간 표현 방법 및장치
JP2000268188A (ja) オクルージョンカリングを行う3次元グラフィックス描画装置および方法
AU2002323162A1 (en) Automatic 3D modeling system and method
US20230326137A1 (en) Garment rendering techniques
CN115494958A (zh) 手物交互图像生成方法、系统、设备及存储介质
Vicar et al. 3D performance capture for facial animation
CN116980680A (zh) 电子铭牌显示方法、终端设备及计算机存储介质
Lewis Siggraph 2005 course notes-Digital Face Cloning Audience Perception of Clone Realism
Sousa et al. An Advanced Color Representation for Lossy
Duce et al. A Formal Specification of a Graphics System in the
Kim et al. A feature-preserved simplification for autonomous facial animation from 3D scan data

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZM ZW

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
COP Corrected version of pamphlet

Free format text: PAGES 1/28-28/28, DRAWINGS, REPLACED BY NEW PAGES 1/25-25/25; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

WWE Wipo information: entry into national phase

Ref document number: PA/a/2004/001429

Country of ref document: MX

Ref document number: 2457839

Country of ref document: CA

Ref document number: 1020047002201

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 2003522039

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2002323162

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2002757127

Country of ref document: EP

Ref document number: 547/CHENP/2004

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 20028203321

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2002757127

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642