EP1082706A1 - 3d image processing method and apparatus - Google Patents

3d image processing method and apparatus

Info

Publication number
EP1082706A1
Authority
EP
European Patent Office
Prior art keywords
character
representation
dimensional
generic
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99955353A
Other languages
German (de)
English (en)
French (fr)
Inventor
Christopher Peter Flockhart
Duncan Hughes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tricorder Technology PLC
Original Assignee
Tricorder Technology PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tricorder Technology PLC filed Critical Tricorder Technology PLC
Publication of EP1082706A1

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras
    • H04N13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 - Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194 - Transmission of image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55 - Details of game data or player data management
    • A63F2300/5546 - Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/5553 - Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/69 - Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F2300/695 - Imported photos, e.g. of the player
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 - Stereoscopic image analysis
    • H04N2013/0081 - Depth or disparity estimation from stereoscopic image signals

Definitions

  • the present invention relates to a method and apparatus for processing facial images, particularly but not exclusively for use in digital animation eg computer games.
  • Photogrammetric techniques are known for converting two or more overlapping 2D images acquired from different viewpoints into a common 3D representation and in principle such techniques can be applied to the human face to generate a 3D representation which can be animated using known digital techniques.
  • Suitable algorithms for correlating corresponding image regions are already known, eg Gruen's algorithm (see Gruen, A W, "Adaptive least squares correlation: a powerful image matching technique", S Afr J of Photogrammetry, Remote Sensing and Cartography, Vol 14, No 3 (1985), and Gruen, A W and Baltsavias, E P, "High precision image matching for digital terrain model generation", Int Arch Photogrammetry, Vol 25, No 3 (1986), p254) and particularly the "region-growing" modification thereto which is described in Otto and Chau, "Region-growing algorithm for matching terrain images", Image and Vision Computing, Vol 7, No 2, May 1989, p83.
  • Gruen's algorithm is an adaptive least squares correlation algorithm in which two image patches of typically 15 x 15 to 30 x 30 pixels are correlated (ie selected from larger left and right images in such a manner as to give the most consistent match between patches) by allowing an affine geometric distortion between coordinates in the images (ie a stretching or compression in which originally parallel lines remain parallel under the transformation) and an additive radiometric distortion between the grey levels of the pixels in the image patches, generating an over-constrained set of linear equations representing the discrepancies between the correlated pixels, and finding a least squares solution which minimises those discrepancies.
  • the Gruen algorithm is essentially an iterative algorithm and requires a reasonable approximation for the correlation to be fed in before it will converge to the correct solution.
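The iterative least-squares scheme can be illustrated with a simplified sketch (not taken from the patent): only the translational part of the geometric distortion and the additive radiometric offset are solved for, whereas Gruen's full algorithm also estimates the remaining affine parameters. The function name and synthetic data are illustrative assumptions.

```python
import numpy as np

def alsc_translation(left, right, u0, v0, patch=15, iters=20, tol=1e-3):
    """Simplified adaptive least squares correlation (after Gruen, 1985).

    Refines an initial offset (u0, v0) mapping the patch centred in `left`
    into `right`.  Each iteration solves the over-constrained linear system
    for a translation update (du, dv) and a radiometric offset update dc,
    and stops once the match no longer moves.
    """
    h = patch // 2
    cy, cx = left.shape[0] // 2, left.shape[1] // 2
    tpl = left[cy - h:cy + h + 1, cx - h:cx + h + 1].astype(float)
    u, v, c = float(u0), float(v0), 0.0
    for _ in range(iters):
        # Integer resampling keeps the sketch short; a real implementation
        # interpolates sub-pixel positions (and warps affinely).
        iy, ix = int(round(cy + v)), int(round(cx + u))
        win = right[iy - h:iy + h + 1, ix - h:ix + h + 1].astype(float)
        gy, gx = np.gradient(win)                   # image gradients
        r = (tpl - (win + c)).ravel()               # grey-level discrepancies
        A = np.column_stack([gx.ravel(), gy.ravel(), np.ones(r.size)])
        dx, *_ = np.linalg.lstsq(A, r, rcond=None)  # least squares solution
        u, v, c = u + dx[0], v + dx[1], c + dx[2]
        if abs(dx[0]) < tol and abs(dx[1]) < tol:
            break
    return u, v
```

Because each update is a first-order linearisation, the starting offset must already be a reasonable approximation, which is why a region-growing scheme that feeds predicted matches into the refinement is useful.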
  • the Otto and Chau region-growing algorithm begins with an approximate match between a point in one image and a point in the other, utilises Gruen's algorithm to produce a more accurate match and to generate the geometric and radiometric distortion parameters, and uses the distortion parameters to predict approximate matches for points in the neighbourhood of the initial matching point.
  • the neighbouring points are selected by choosing the four adjacent points on a grid having a grid spacing of eg 5 or 10 pixels in order to avoid running
  • if a candidate matched point moves by more than a certain amount (eg 3 pixels) per iteration then it is not a valid matched point and should be rejected;
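The propagation loop can be sketched as follows. A plain SSD search over a small window stands in for the full Gruen refinement, the grid spacing and rejection threshold mirror the figures quoted above, and all names are illustrative.

```python
import numpy as np
from collections import deque

def _ssd_refine(left, right, p, pred, h, search):
    """Refine predicted disparity `pred` at grid point p by exhaustive SSD
    search over +/- `search` pixels (a stand-in for a Gruen refinement)."""
    y, x = p
    tpl = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best = None
    for dy in range(pred[0] - search, pred[0] + search + 1):
        for dx in range(pred[1] - search, pred[1] + search + 1):
            win = right[y + dy - h:y + dy + h + 1,
                        x + dx - h:x + dx + h + 1].astype(float)
            if win.shape != tpl.shape:      # window falls outside the image
                continue
            ssd = float(((tpl - win) ** 2).sum())
            if best is None or ssd < best[0]:
                best = (ssd, (dy, dx))
    return None if best is None else best[1]

def grow_matches(left, right, seed, seed_disp, grid=5, patch=7, max_move=3):
    """Otto & Chau style region growing from one seed match.

    Each grid point inherits the disparity of the neighbour that reached it,
    refines it locally, and is rejected if the refined match moved by more
    than `max_move` pixels.  Returns {(y, x): (dy, dx)}.
    """
    h, (H, W) = patch // 2, left.shape
    pad = h + max_move + 1                  # keep search windows in-image
    matches, queue = {}, deque([(seed, seed_disp)])
    while queue:
        p, pred = queue.popleft()
        y, x = p
        if p in matches or not (pad <= y < H - pad and pad <= x < W - pad):
            continue
        d = _ssd_refine(left, right, p, pred, h, max_move + 1)
        if d is None or max(abs(d[0] - pred[0]), abs(d[1] - pred[1])) > max_move:
            continue                        # moved too far: reject the point
        for n in ((y - grid, x), (y + grid, x), (y, x - grid), (y, x + grid)):
            queue.append((n, d))            # predict neighbours from this match
        matches[p] = d
    return matches
```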
  • One object of the present invention is to overcome or alleviate such disadvantages.
  • the present invention provides a method of providing a three- dimensional representation of an object wherein two or more two-dimensional images of the object are photogrammetrically processed to generate an incomplete three-dimensional representation thereof and the incomplete three-dimensional representation is combined with a generic representation of such objects to provide the three-dimensional representation.
  • the object is a human or animal body or a part thereof.
  • the three-dimensional representation derived from the combination with the generic representation is provided in the format of an animatable character.
  • the resulting three-dimensional representation is converted to the file format of a computer game character and loaded into the computer game.
  • the invention provides a method of personalising a computer game character wherein at least one image of a player of the game is digitally processed at a location remote from the player's computer, converted to an animatable character file and loaded onto the player's computer.
  • the image can be processed on an Internet server computer and downloaded over the Internet.
  • a fully automated system whereby users of Quake, Doom, Descent and other popular games can be provided with a custom game character with their own face (preferably a 3D face) inserted into the character. This would enable them to use a visualisation of themselves in a game.
  • This service could be provided via the Internet with little or no human intervention by the operator of the server.
  • a low-resolution model is required for gaming, as the game will have to support the manipulation of the character in the gaming environment, in real time, on a variety of PCs.
  • the games user would be required to take a set of images of himself/herself using a digital camera or scanned photographs, under specified guidelines.
  • the server will then schedule an image processing job to perform the following tasks: * Determine 3D facial geometry from the supplied image files
  • the completed character would be sent as an attachment to the specified email address, and a micro transaction performed to bill the user's credit card.
  • Figure 1 is a schematic flow diagram of an image processing method in accordance with one aspect of the invention.
  • Figure 2A is a schematic plan view showing one camera arrangement for acquiring the images utilised in the method of Figure 1 ;
  • Figure 2B is a schematic plan view of another camera arrangement for acquiring the images utilised in the method of Figure 1;
  • Figure 2C is a schematic plan view of yet another camera arrangement for acquiring the images utilised in the method of Figure 1, and
  • Figure 3 is a schematic representation of an Internet-based arrangement for providing an animated games character by a method in accordance with the second aspect of the invention.
  • left and right images I1 and I2 are acquired, eg by a digital camera, and processed by standard photogrammetric techniques to provide an incomplete 3D representation 100 of the game player's head.
  • the determination of the game player's facial geometry can involve Gruen-type area matching, facial feature correlation, and facial feature recognition via a statistical model of the human face.
  • Gruen-type area matching suffers from the problem of having no projected texture, and is thus highly susceptible to the texture in the face of the subject, the ambient lighting conditions, and differences in colour hue and intensity between images. It is also susceptible to the lack of a camera model or known optical geometry for the captured images.
  • Facial feature correlation suffers from the problem that any facial feature that is incorrectly detected will cause a very poor model to be generated. Facial feature recognition via a statistical model prevents gross inaccuracies from occurring and should lead to a more robust solution. It is possible that part of the image submission process could involve the user in specifying certain key points on the images.
  • a 3D representation of a generic head 200 is provided. Given geometric information derived from the preceding stage, the generic head model can be distorted to fit the subject's roughly calculated geometry. This head could be in one of two forms: a NURBS (Non-Uniform Rational B-Spline) model or a polygon model.
  • the NURBS model has the advantage of being easily deformable to the subject's geometry, but suffers from the drawbacks of a higher processing overhead and of having to be converted to polygons for subsequent processing stages.
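As a minimal illustration of distorting a generic head to roughly calculated geometry (an assumption, not the patent's prescribed method), the sketch below fits a per-axis scale and offset from a few landmark correspondences, eg eye corners, nose tip and chin, and applies them to every vertex of the generic polygon head; a production system would use a finer warp such as radial basis functions.

```python
import numpy as np

def fit_generic_head(vertices, generic_landmarks, measured_landmarks):
    """Distort a generic head mesh to a subject's rough geometry.

    Solves m = s * g + t per axis in least squares over corresponding
    landmarks, then applies the scale s and offset t to all mesh vertices.
    """
    V = np.asarray(vertices, float)
    G = np.asarray(generic_landmarks, float)
    M = np.asarray(measured_landmarks, float)
    scale, offset = np.empty(3), np.empty(3)
    for axis in range(3):
        # least squares fit of measured = s * generic + t along this axis
        A = np.column_stack([G[:, axis], np.ones(len(G))])
        (s, t), *_ = np.linalg.lstsq(A, M[:, axis], rcond=None)
        scale[axis], offset[axis] = s, t
    return V * scale + offset
```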
  • a texture map is derived (400) from the 3D head (100) and attached to the representation resulting from step 300 (step 500) (ie used to render the modified generic head) and the resulting realistic character representation is then integrated with or attached to the body of the games character (step 600).
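Attaching the texture map amounts to assigning each vertex of the modified head a coordinate in the source photograph. The sketch below assumes a simple pinhole projection with known intrinsics K (focal lengths and principal point), which the patent does not specify; the names are illustrative.

```python
def texture_uvs(vertices, K, width, height):
    """Project 3D head vertices into the front image to obtain normalised
    (u, v) texture coordinates, assuming a pinhole camera with intrinsics K
    ([[fx, 0, cx], [0, fy, cy], [0, 0, 1]]) located at the origin."""
    uvs = []
    for x, y, z in vertices:
        u = (K[0][0] * x / z + K[0][2]) / width   # pixel column -> [0, 1]
        v = (K[1][1] * y / z + K[1][2]) / height  # pixel row    -> [0, 1]
        uvs.append((u, v))
    return uvs
```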
  • if necessary the resulting model is converted to polygon form (step 700).
  • the number of polygons may have to be reduced (step 800).
  • the completed model may be reduced to quite a low polygon count, possibly 100 or so, in order to produce a relatively small model to transmit and use within the game.
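One simple way of performing the reduction of step 800 is vertex clustering, sketched below under assumed array-of-vertices and list-of-triangles inputs; edge-collapse schemes with quadric error metrics give better shapes but are longer to write.

```python
import numpy as np

def decimate_by_clustering(vertices, faces, cell=0.1):
    """Reduce a triangle mesh by merging all vertices inside each grid cell
    of side `cell` into their centroid and dropping degenerate triangles."""
    V = np.asarray(vertices, float)
    keys = [tuple(k) for k in np.floor(V / cell).astype(int)]
    clusters = {}
    for i, k in enumerate(keys):
        clusters.setdefault(k, []).append(i)
    remap, new_vertices = {}, []
    for idxs in clusters.values():
        for i in idxs:
            remap[i] = len(new_vertices)    # all cluster members share one vertex
        new_vertices.append(V[idxs].mean(axis=0))
    new_faces = []
    for a, b, c in faces:
        f = (remap[a], remap[b], remap[c])
        if len(set(f)) == 3:                # drop collapsed triangles
            new_faces.append(f)
    return np.array(new_vertices), new_faces
```

Increasing `cell` trades fidelity for polygon count until a budget of the order of 100 polygons is met.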
  • the polygon-reduced representation is converted to a games file format which can be handled by the game (step 900).
  • This last step may require liaison and co-operation with the games manufacturers, or it is conceivable that this task could be performed completely independently.
  • Figures 2A, 2B and 2C each show a different camera arrangement, which could be provided as a fixed stereoscopic camera arrangement in a dedicated booth (in, say, a gaming arcade) or could be set up by the games player.
  • a camera C acquires an image from one viewpoint and the same or a different camera C acquires an overlapping image from a different viewpoint.
  • the fields of view V must overlap in the region of the face of the subject 1.
  • in the arrangement of Figure 2A the cameras are diagonally disposed at right angles
  • in the arrangement of Figure 2B the cameras are parallel
  • in the arrangement of Figure 2C the cameras are orthogonal, so that one camera has a front view and the other camera has a profile view of the subject 1.
  • the arrangement of Figure 2C is particularly preferred because the front view and profile are acquired independently.
  • the front image and profile image can be analysed to determine the size and location of features and the resulting data can be used to select one of a range of generic heads or to adjust variable parameters of the generic head as shown, prior to step 300.
  • the exact camera locations and orientations can be determined and the remaining points correlated relatively easily to enable a 3D representation of the subject 1 to be generated, essentially by projecting ray lines from pairs of correlated points by virtual projectors having the same location, orientation and optical parameters as the cameras.
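The projection of ray lines from a pair of correlated points can be made concrete with the standard midpoint method (an illustrative choice; the patent does not prescribe a particular triangulation): each virtual projector back-projects its image point as a ray, and the 3D point is taken midway between the rays at their closest approach.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """3D point from two back-projected rays (origin o, unit direction d).

    Real rays rarely intersect exactly, so the parameters s, t minimising
    |(o1 + s d1) - (o2 + t d2)| are solved for and the midpoint returned.
    """
    o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
    o2, d2 = np.asarray(o2, float), np.asarray(d2, float)
    b = o2 - o1
    # Normal equations of the two-parameter least squares problem
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return ((o1 + s * d1) + (o2 + t * d2)) / 2.0
```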
  • the correlation can be performed by a known algorithm, eg the Gruen algorithm
  • the above correlation process between the generic image G and the image I of the character 1 provided by digital camera C can be performed by a server computer S on the Internet and the 2D images acquired by the camera C can either be posted by the games player (eg as photographic prints) or uploaded (eg as email attachments) onto the server from the user's computer PC via a communications link CL provided by the Internet.
  • server computer S has stored on its hard disc HD:
  • the user's computer PC would have stored on its hard disc HD one or more games programs, Internet access software, graphics software for handling the images provided by camera C and a conventional operating system, eg Windows 95® or
  • Both computer PC and computer S are provided with a standard microprocessor µP, eg an Intel Pentium® processor, as well as RAM and ROM and appropriate input/output circuitry I/O connected to standard modems M or other communication devices.
  • the WWW submission form F would be based on a Java Applet to allow the validation of the quality of the submitted images, and the selection of body types. It is likely that the server operator would want to test images for their size, resolution, and possibly their contrast ratio, before accepting them for processing. If this can be done by an applet before accepting any credit card transaction, it will help to reduce bad conversions. Weeding out potential failures at an early stage will reduce wasted processing time, and will reduce customer frustration by not having to wait a few hours to find out that the images were not of sufficient quality to produce a character.
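The checks such an applet might run can be sketched as follows (in Python here rather than as a Java applet; the thresholds and the Michelson-style contrast measure are illustrative assumptions):

```python
def validate_image(width, height, pixels, min_side=240, min_contrast=0.25):
    """Pre-submission quality checks on one candidate image.

    `pixels` is a flat sequence of grey levels in [0, 255].  Rejecting
    undersized or flat images before any credit card transaction avoids
    wasted processing time and customer frustration.
    """
    if min(width, height) < min_side:
        return False, "image too small"
    lo, hi = min(pixels), max(pixels)
    # Michelson contrast of the grey-level range
    contrast = (hi - lo) / (hi + lo) if (hi + lo) else 0.0
    if contrast < min_contrast:
        return False, "contrast too low"
    return True, "ok"
```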
  • the operator can request other information of the user for his own uses, namely:
  • the invention can also be implemented in a purpose-built games booth at which the images II and 12 are acquired, and the processing can be carried out either locally in the booth or remotely eg in a server computer linked to a number of such booths in a network.
  • more than two cameras could be used to acquire the 3D surface of the character.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
EP99955353A 1998-06-01 1999-06-01 3d image processing method and apparatus Withdrawn EP1082706A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB9811695 1998-06-01
GBGB9811695.7A GB9811695D0 (en) 1998-06-01 1998-06-01 Facial image processing method and apparatus
PCT/GB1999/001744 WO1999063490A1 (en) 1998-06-01 1999-06-01 3d image processing method and apparatus

Publications (1)

Publication Number Publication Date
EP1082706A1 true EP1082706A1 (en) 2001-03-14

Family

ID=10832987

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99955353A Withdrawn EP1082706A1 (en) 1998-06-01 1999-06-01 3d image processing method and apparatus

Country Status (6)

Country Link
EP (1) EP1082706A1 (en)
JP (1) JP2002517840A (ja)
KR (1) KR20010074504A (ko)
AU (1) AU4275899A (en)
GB (2) GB9811695D0 (en)
WO (1) WO1999063490A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2342026B (en) * 1998-09-22 2003-06-11 Luvvy Ltd Graphics and image processing system
US7359748B1 (en) 2000-07-26 2008-04-15 Rhett Drugge Apparatus for total immersion photography
JP4677661B2 (ja) * 2000-10-16 2011-04-27 Sony Corporation Moving image ordering system and method
US6980333B2 (en) * 2001-04-11 2005-12-27 Eastman Kodak Company Personalized motion imaging system
US7257236B2 (en) 2002-05-22 2007-08-14 A4Vision Methods and systems for detecting and recognizing objects in a controlled wide area
US7174033B2 (en) 2002-05-22 2007-02-06 A4Vision Methods and systems for detecting and recognizing an object based on 3D image data
JP4521012B2 (ja) * 2007-04-13 2010-08-11 Sophia Co., Ltd. Gaming machine
JP5627860B2 (ja) 2009-04-27 2014-11-19 Mitsubishi Electric Corporation Stereoscopic video distribution system, stereoscopic video distribution method, stereoscopic video distribution device, stereoscopic video viewing system, stereoscopic video viewing method, and stereoscopic video viewing device
KR101050364B1 (ko) * 2009-09-30 2011-07-20 주식회사 한울네오텍 Photographing apparatus for providing a three-dimensional object and method therefor
JP2013535726A (ja) * 2010-07-23 2013-09-12 Alcatel-Lucent Method for visualizing a user of a virtual environment
JP5603452B1 (ja) 2013-04-11 2014-10-08 Square Enix Co., Ltd. Video game processing device and video game processing program
JP6219791B2 (ja) * 2014-08-21 2017-10-25 Square Enix Co., Ltd. Video game processing device and video game processing program

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9013983D0 (en) * 1990-06-22 1990-08-15 Nat Res Dev Automatic carcass grading apparatus and method
US5550960A (en) * 1993-08-02 1996-08-27 Sun Microsystems, Inc. Method and apparatus for performing dynamic texture mapping for complex surfaces
EP0664527A1 (en) * 1993-12-30 1995-07-26 Eastman Kodak Company Method and apparatus for standardizing facial images for personalized video entertainment
GB9610212D0 (en) * 1996-05-16 1996-07-24 Cyberglass Limited Method and apparatus for generating moving characters
DE19626096C1 (de) * 1996-06-28 1997-06-19 Siemens Nixdorf Inf Syst Method for three-dimensional image display on a large-screen projection surface by means of a laser projector
US6016148A (en) * 1997-06-06 2000-01-18 Digital Equipment Corporation Automated mapping of facial images to animation wireframes topologies
IL121178A (en) * 1997-06-27 2003-11-23 Nds Ltd Interactive game system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9963490A1 *

Also Published As

Publication number Publication date
GB2350511A (en) 2000-11-29
AU4275899A (en) 1999-12-20
GB2350511B (en) 2003-11-19
GB9811695D0 (en) 1998-07-29
KR20010074504A (ko) 2001-08-04
GB9912707D0 (en) 1999-08-04
JP2002517840A (ja) 2002-06-18
WO1999063490A1 (en) 1999-12-09

Similar Documents

Publication Publication Date Title
CN107484428B (zh) Method for displaying an object
US6219444B1 (en) Synthesizing virtual two dimensional images of three dimensional space from a collection of real two dimensional images
Bernardini et al. Building a digital model of Michelangelo's Florentine Pieta
Pollefeys et al. From images to 3D models
US7224357B2 (en) Three-dimensional modeling based on photographic images
US20200057831A1 (en) Real-time generation of synthetic data from multi-shot structured light sensors for three-dimensional object pose estimation
CN109410256A (zh) Automatic high-precision registration method for point clouds and images based on mutual information
US11182945B2 (en) Automatically generating an animatable object from various types of user input
US20040095385A1 (en) System and method for embodying virtual reality
WO2019035155A1 (ja) Image processing system, image processing method, and program
KR20130138247A (ko) Rapid 3D modelling
EP1082706A1 (en) 3d image processing method and apparatus
CN101996416A (zh) 3D face capture method and device
CN107622526A (zh) Method for three-dimensional scanning and modelling based on a mobile phone face recognition component
Jaw et al. Registration of ground‐based LiDAR point clouds by means of 3D line features
US7280685B2 (en) Object segmentation from images acquired by handheld cameras
Wang et al. Dynamic human body reconstruction and motion tracking with low-cost depth cameras
US11645800B2 (en) Advanced systems and methods for automatically generating an animatable object from various types of user input
Furferi et al. A RGB-D based instant body-scanning solution for compact box installation
US11080920B2 (en) Method of displaying an object
Frisky et al. Acquisition Evaluation on Outdoor Scanning for Archaeological Artifact Digitalization.
Gallardo et al. Using Shading and a 3D Template to Reconstruct Complex Surface Deformations.
JP2001012922A (ja) Three-dimensional data processing device
EP3779878A1 (en) Method and device for combining a texture with an artificial object
JP2002135807A (ja) Calibration method and device for three-dimensional input

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20001130

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE GB SE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 20020411