GB2350511A - Stereogrammetry; personalizing computer game character - Google Patents

Stereogrammetry; personalizing computer game character

Info

Publication number
GB2350511A
GB2350511A (Application GB9912707A)
Authority
GB
United Kingdom
Prior art keywords
representation
character
dimensional
images
generic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9912707A
Other versions
GB2350511B (en)
GB9912707D0 (en)
Inventor
Duncan Hughes
Christopher Peter Flockhart
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tricorder Technology PLC
Original Assignee
Tricorder Technology PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tricorder Technology PLC filed Critical Tricorder Technology PLC
Publication of GB9912707D0 publication Critical patent/GB9912707D0/en
Publication of GB2350511A publication Critical patent/GB2350511A/en
Application granted granted Critical
Publication of GB2350511B publication Critical patent/GB2350511B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194Transmission of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55Details of game data or player data management
    • A63F2300/5546Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/5553Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/69Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F2300/695Imported photos, e.g. of the player
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

Two images of a user's head are scanned and sent to a central processing station via the Internet. The central station processes the two images stereogrammetrically to provide a partial three-dimensional image of the user's head, and then matches this with a stored generic head image, distorts the stored image if necessary, and combines the two to provide a complete three-dimensional image. The three-dimensional image is then used to provide animated characters, having the head of the user, which are sent back to the user's computer via the Internet, for use in conventional computer games.

Description

Image Processing Method and Apparatus

The present invention relates to a method and apparatus for processing facial images, particularly but not exclusively for use in digital animation, eg computer games.
Photogrammetric techniques are known for converting two or more overlapping 2D images acquired from different viewpoints into a common 3D representation, and in principle such techniques can be applied to the human face to generate a 3D representation which can be animated using known digital techniques.
Suitable algorithms for correlating image regions of corresponding images (eg photographs taken during airborne surveys) are already known - eg Gruen's algorithm (see Gruen, A W "Adaptive least squares correlation: a powerful image matching technique" S Afr J of Photogrammetry, Remote Sensing and Cartography Vol 14 No 3 (1985) and Gruen, A W and Baltsavias, E P "High precision image matching for digital terrain model generation" Int Arch Photogrammetry Vol 25 No 3 (1986) p254) and particularly the "region-growing" modification thereto which is described in Otto and Chau "Region-growing algorithm for matching terrain images" Image and Vision Computing Vol 7 No 2 May 1989 p83.
Essentially, Gruen's algorithm is an adaptive least squares correlation algorithm in which two image patches of typically 15 x 15 to 30 x 30 pixels are correlated (ie selected from larger left and right images in such a manner as to give the most consistent match between patches) by allowing an affine geometric distortion between coordinates in the images (ie stretching or compression in which originally parallel lines remain parallel in the transformation) and allowing an additive radiometric distortion between the grey levels of the pixels in the image patches, generating an over-constrained set of linear equations representing the discrepancies between the correlated pixels and finding a least squares solution which minimises the discrepancies.
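The heart of the adaptive least squares idea - linearise the grey-level discrepancy with respect to the geometric distortion and iterate a least squares update - can be sketched in one dimension. The following is an illustrative simplification (a single translation parameter, no affine or radiometric terms), not the patented algorithm; the function name and test signal are invented for the example:

```python
import numpy as np

def lsq_shift(left, right, iterations=10):
    """Estimate the sub-pixel shift d such that left(x + d) ~ right(x),
    by linearising the intensity discrepancy and solving a 1-D least
    squares problem on each iteration (the core of adaptive least
    squares correlation, reduced to one translation parameter)."""
    d = 0.0
    x = np.arange(len(left), dtype=float)
    for _ in range(iterations):
        # resample the left signal at the current shift estimate
        warped = np.interp(x + d, x, left)
        grad = np.gradient(warped)      # dI/dx at the warped positions
        residual = right - warped       # grey-level discrepancies
        # least squares update: minimise ||residual - grad * delta||^2
        denom = np.dot(grad, grad)
        if denom == 0:
            break
        d += np.dot(grad, residual) / denom
    return d

# synthetic example: a smooth bump shifted by 0.4 pixels
x = np.arange(32, dtype=float)
left = np.exp(-((x - 15.0) ** 2) / 20.0)
right = np.exp(-((x - 15.4) ** 2) / 20.0)   # same bump, moved +0.4 px
est = lsq_shift(left, right)                # converges near -0.4
```

Because each update relies on a first-order linearisation, the estimate only converges when the starting approximation is already close to the true match - which is why the full algorithm needs a reasonable initial correlation to be fed in.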
The Gruen algorithm is essentially an iterative algorithm and requires a reasonable approximation for the correlation to be fed in before it will converge to the correct solution. The Otto and Chau region-growing algorithm begins with an approximate match between a point in one image and a point in the other, utilises Gruen's algorithm to produce a more accurate match and to generate the geometric and radiometric distortion parameters, and uses the distortion parameters to predict approximate matches for points in the neighbourhood of the initial matching point. The neighbouring points are selected by choosing the four adjacent points on a grid having a grid spacing of eg 5 or 10 pixels in order to avoid running Gruen's algorithm for every pixel.
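The region-growing propagation just described can be sketched as a seed-and-queue loop. The `refine` function below is a hypothetical stand-in for a run of Gruen's algorithm: it is modelled as succeeding only when the prediction lies within a convergence basin of the true disparity (an illustrative assumption, not the real matcher):

```python
from collections import deque

def grow_region(seed, true_disparity, grid_size, tolerance=2.0):
    """Otto & Chau style region growing: start from one seed match and
    use each accepted match's disparity to predict (then 'refine')
    matches at the four neighbouring grid points."""
    def refine(point, predicted):
        actual = true_disparity(point)
        if abs(predicted - actual) <= tolerance:
            return actual       # converged to the correct match
        return None             # diverged - reject this point

    matches = {seed: true_disparity(seed)}   # assume the seed is matched
    queue = deque([seed])
    w, h = grid_size
    while queue:
        point = queue.popleft()
        prediction = matches[point]          # predict from the neighbour
        x, y = point
        for nb in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if nb in matches or not (0 <= nb[0] < w and 0 <= nb[1] < h):
                continue
            result = refine(nb, prediction)
            if result is not None:
                matches[nb] = result
                queue.append(nb)
    return matches

# a smoothly varying disparity field over a 10 x 10 grid of points:
# every prediction stays inside the basin, so the region covers the grid
field = lambda p: 0.5 * p[0] + 0.3 * p[1]
matched = grow_region((5, 5), field, (10, 10))
```

Because disparity varies smoothly across a face, each accepted match is a good prediction for its grid neighbours, and the matched region grows outward from the seed without running the full matcher from scratch at every pixel.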
Hu et al "Matching Point Features with Ordered Geometric, Rigidity and Disparity Constraints" IEEE Transactions on Pattern Analysis and Machine Intelligence Vol 16 No 10, 1994 pp1041-1049 (and references cited therein) discloses further methods for correlating features of overlapping images.
Our co-pending patent applications disclose a number of improvements to the Gruen algorithm, as follows:
i) the additive radiometric shift employed in the algorithm can be dispensed with;

ii) if during successive iterations a candidate matched point moves by more than a certain amount (eg 3 pixels) per iteration then it is not a valid matched point and should be rejected;

iii) during the growing of a matched region it is useful to check for sufficient contrast at at least three of the four sides of the region in order to ensure that there is sufficient data for a stable convergence - in order to facilitate this it is desirable to make the algorithm configurable to enable the parameters (eg required contrast) to be optimised for different environments; and

iv) in order to quantify the validity of the correspondences between respective patches of one image and points in the other image it has been found useful to re-derive the original grid point in the starting image by applying the algorithm to the matched point in the other image (ie reversing the stereo matching process) and measuring the distance between the original grid point and the new grid point found in the starting image from the reverse stereo matching. The smaller the distance, the better the correspondence.
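Improvement iv), the reverse stereo matching check, reduces to a forward-backward consistency measurement. In this sketch `forward` and `backward` are hypothetical stand-ins for left-to-right and right-to-left runs of the matcher:

```python
def reverse_match_error(point, forward, backward):
    """Stereo-match a grid point from the left image into the right
    image, match the result back into the left image, and measure how
    far it lands from where it started.  Small distance => a good
    correspondence; large distance => the match should be rejected."""
    matched = forward(point)
    recovered = backward(matched)
    dx = recovered[0] - point[0]
    dy = recovered[1] - point[1]
    return (dx * dx + dy * dy) ** 0.5

# a consistent matcher pair: shift right by a disparity of 4, then back
good_fwd = lambda p: (p[0] + 4, p[1])
good_bwd = lambda p: (p[0] - 4, p[1])
# an inconsistent pair: the backward match drifts by 3 pixels
bad_bwd = lambda p: (p[0] - 4 + 3, p[1])

err_good = reverse_match_error((10, 10), good_fwd, good_bwd)  # 0.0
err_bad = reverse_match_error((10, 10), good_fwd, bad_bwd)    # 3.0
```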
However, the known photogrammetric techniques still require correlations between high quality overlapping images and, in cases where there is little texture information in the subject (which is true of large regions of the human face), it is difficult or impossible to correlate all the regions, which results in holes in the 3D reconstruction. Such difficulties can be overcome by projecting an optical (particularly infra-red) pattern (particularly a speckle pattern) onto the subject, but the requirement for pattern projection increases the expense of an already sophisticated apparatus.
One object of the present invention is to overcome or alleviate such disadvantages.
In one aspect the present invention provides a method of providing a three-dimensional representation of an object wherein two or more two-dimensional images of the object are photogrammetrically processed to generate an incomplete three-dimensional representation thereof and the incomplete three-dimensional representation is combined with a generic representation of such objects to provide the three-dimensional representation.
In one embodiment the object is a human or animal body or a part thereof.
In a preferred embodiment the three-dimensional representation derived from the combination with the generic representation is provided in the format of an animatable character.
In a preferred embodiment the resulting three-dimensional representation is converted to the file format of a computer game character and loaded into the computer game.
In another aspect the invention provides a method of personalising a computer game character wherein at least one image of a player of the game is digitally processed at a location remote from the player's computer, converted to an animatable character file and loaded onto the player's computer. For example the image can be processed on an Internet server computer and downloaded over the Internet.
Further preferred features are defined in the dependent claims.
In a preferred embodiment a fully automated system is provided whereby users of Quake, Doom, Descent and other popular games can be provided with a custom game character with their own face (preferably a 3D face) inserted into the character. This would enable them to use a visualisation of themselves in a game. This service could be provided via the Internet with little or no human intervention by the operator of the server.
Nothing similar currently exists with this level of accessibility by the gaming public.
By using a generic head during image processing, a relatively low quality of 3D surface is required in order to get an acceptable result, and the problems of holes in 3D data-sets can be eliminated.
A low-resolution model is required for gaming, as the game will have to support the manipulation of the character in the gaming environment, in real time, on a variety of PCs.
It is assumed that, if necessary, a user would tolerate a few hours' turnaround time between submitting their images and receiving a model, either on a data medium such as floppy disk or by email.
In one embodiment the games user would be required to take a set of images of himself/herself using a digital camera or scanned photographs, under specified guidelines.
He/she will then access a Web page hosted by the server, which will provide a form requiring the user to enter the following information:
- Name
- Email address
- Which game he/she wants the model for
- The images which are to be submitted
- Selection of a body on which he/she wants the face inserted
- Credit card details

The server will then schedule an image processing job to perform the following tasks:
- Determine 3D facial geometry from the supplied image files
- Modify a generic head to this geometry
- Apply a texture map from the supplied images
- Polygon-reduce the head model
- Integrate the head model with a body
- Convert the complete model to the required format for the specified game
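The scheduled job can be pictured as a straight pipeline over those tasks. Every function below is a hypothetical stand-in (the real stages are described in the rest of the specification), but the data flow follows the task list:

```python
def determine_facial_geometry(images):
    # stand-in: real stage runs stereo matching / feature detection
    return {"source_images": len(images)}

def modify_generic_head(generic, geometry):
    # stand-in: real stage distorts the generic head to the geometry
    return dict(generic, geometry=geometry)

def apply_texture_map(head, images):
    # stand-in: real stage renders the head with a merged texture map
    return dict(head, textured=True)

def polygon_reduce(head, target_polygons):
    # stand-in: real stage runs a polygon-reduction algorithm
    return dict(head, polygons=min(head["polygons"], target_polygons))

def integrate_with_body(head, body):
    return {"head": head, "body": body}

def convert_to_game_format(model, game_format):
    return dict(model, format=game_format)

def process_character_job(images, generic_head, body, game_format):
    """One submitted job, following the task list above."""
    geometry = determine_facial_geometry(images)
    head = modify_generic_head(generic_head, geometry)
    head = apply_texture_map(head, images)
    head = polygon_reduce(head, target_polygons=100)
    model = integrate_with_body(head, body)
    return convert_to_game_format(model, game_format)

result = process_character_job(
    images=["front.jpg", "profile.jpg"],   # illustrative file names
    generic_head={"polygons": 5000},
    body="marine",                         # the body chosen on the form
    game_format="quake-mdl",               # illustrative format name
)
```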
After appropriate processing, the completed character would be sent as an attachment to the specified email address, and a micro-transaction performed to bill the user's credit card.
A preferred embodiment of the invention is described below by way of example only with reference to Figures 1 to 3 of the accompanying drawings, wherein:
Figure 1 is a schematic flow diagram of an image processing method in accordance with one aspect of the invention;

Figure 2A is a schematic plan view showing one camera arrangement for acquiring the images utilised in the method of Figure 1;

Figure 2B is a schematic plan view of another camera arrangement for acquiring the images utilised in the method of Figure 1;

Figure 2C is a schematic plan view of yet another camera arrangement for acquiring the images utilised in the method of Figure 1; and

Figure 3 is a schematic representation of an Internet-based arrangement for providing an animated games character by a method in accordance with the second aspect of the invention.

Referring to Figure 1, left and right images 11 and 12 are acquired, eg by a digital camera, and processed by standard photogrammetric techniques to provide an incomplete 3D representation 100 of the game player's head.
The determination of the game player's facial geometry can involve Gruens-type area matching, facial feature correlation, and facial feature recognition via a statistical model of the human face. Gruens-type area matching suffers from the problem of having no projected texture, and is thus highly susceptible to the texture in the face of the subject, the ambient lighting conditions, and the difference in colour hues and intensities between images. It is also susceptible to the lack of a camera model or optical geometry of the captured images. Facial feature correlation suffers from the problem that any facial feature that is incorrectly detected will cause a very poor model to be generated. Facial feature recognition via a statistical model prevents gross inaccuracies from occurring and should lead to a more robust solution. It is possible that part of the image submission process could involve the user in specifying certain key points on the images.
In order to alleviate the above problems, a 3D representation of a generic head 200 is provided. Given geometric information derived from the preceding stage, the generic head model can be distorted to fit the subject's roughly calculated geometry. This head could be in one of two forms: a NURBS (Non-Uniform Rational B-Spline) model, or a polygon model. The NURBS model has the advantage of being easily deformable to the subject's geometry, but suffers from the drawbacks of higher processing overhead and of having to be converted to polygons for subsequent processing stages.
At this stage of processing (modified generic head 300) there should already be a correlation between certain points in each image and points on the 3D model, greatly simplifying the task of texture mapping. There remains the problem of texture merging arising from the use of multiple images.
A texture map is derived (400) from the 3D head (100) and attached to the representation resulting from step 300 (step 500) (ie used to render the modified generic head), and the resulting realistic character representation is then integrated with or attached to the body of the games character (step 600).
If necessary the resulting model is converted to polygon form (step 700).
If the modified generic head is represented in polygon form, the number of polygons may have to be reduced (step 800). There are plenty of algorithms and commercially available code for polygon reduction. The completed model may be reduced to quite a low polygon count, possibly 100 or so, in order to produce a relatively small model to transmit and use within the game.
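The shape of a reduction step can be sketched as follows. This is a deliberately naive stand-in, not one of the commercial algorithms: it merges the closest pair of vertices into their midpoint until a target count is reached (real reducers collapse mesh edges and track the geometric error, but the shrinking loop has the same form):

```python
def reduce_vertices(vertices, target):
    """Minimal stand-in for polygon reduction: repeatedly merge the
    closest pair of vertices into their midpoint until only `target`
    vertices remain."""
    pts = [list(v) for v in vertices]
    while len(pts) > target:
        best = None
        # find the closest pair of remaining vertices (brute force)
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        # collapse the pair to its midpoint
        merged = [(a + b) / 2 for a, b in zip(pts[i], pts[j])]
        pts = [p for k, p in enumerate(pts) if k not in (i, j)] + [merged]
    return pts

# two tight clusters and one isolated point: the clusters collapse first
cloud = [(0, 0, 0), (0.1, 0, 0), (5, 5, 5), (10, 0, 0), (10.1, 0, 0)]
reduced = reduce_vertices(cloud, 3)   # 3 vertices survive
```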
Finally the polygon-reduced representation is converted to a games file format which can be handled by the game (step 900).
This last step may require liaison and co-operation with the games manufacturers, or it is conceivable that this task could be performed completely independently.
The acquisition of the 2D images 11 and 12 will now be described with reference to Figures 2A, 2B and 2C. Each of these Figures shows a different camera arrangement which could be provided as a fixed stereoscopic camera arrangement in a dedicated booth provided in (say) a gaming arcade, or could be set up by the games player. In each case a camera C acquires an image from one viewpoint and the same or a different camera C acquires an overlapping image from a different viewpoint. The fields of view V must overlap in the region of the face of the subject 1.
In Figure 2A the cameras are diagonally disposed at right angles, in Figure 2B the cameras are parallel and in Figure 2C the cameras are orthogonal, so that one camera has a front view and the other camera has a profile view of the subject 1. The arrangement of Figure 2C is particularly preferred because the front view and profile are acquired independently. The front image and profile image can be analysed to determine the size and location of features and the resulting data can be used to select one of a range of generic heads or to adjust variable parameters of the generic head as shown, prior to step 300.
By correlating a small number of points of the digitised images by means of a known algorithm (eg the Gruen algorithm), the exact camera locations and orientations can be determined and the remaining points correlated relatively easily to enable a 3D representation of the subject 1 to be generated, essentially by projecting ray lines from pairs of correlated points by virtual projectors having the same location, orientation and optical parameters as the cameras.
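The ray-projection step can be sketched with the standard midpoint method of triangulation: each correlated pair defines a ray from its (virtual) camera, and the reconstructed point is taken midway between the two rays where they pass closest to each other. This is a generic triangulation sketch under those assumptions, not code from the patent:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Recover a 3D point from a pair of correlated image points by
    projecting ray lines from each camera centre (c1, c2) along the
    ray directions (d1, d2), and returning the midpoint of the two
    rays at their closest approach."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # solve for ray parameters t1, t2 minimising |c1+t1*d1 - (c2+t2*d2)|
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return (p1 + p2) / 2

# a frontal camera on the z axis and a profile camera on the x axis
# (the orthogonal Figure 2C arrangement), both sighting one face point
target = np.array([0.0, 0.1, 0.2])
c1, c2 = np.array([0.0, 0.0, 5.0]), np.array([5.0, 0.0, 0.0])
point = triangulate(c1, target - c1, c2, target - c2)
```

With perfectly correlated points the two rays intersect exactly and the midpoint recovers the target; with real, noisy correlations the rays are skew and the midpoint is the natural compromise.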
Referring to Figure 3, the above correlation process between the generic image G and the image 1 of the character 1 provided by digital camera C can be performed by a server computer S on the Internet, and the 2D images acquired by the camera C can either be posted by the games player (eg as photographic prints) or uploaded (eg as email attachments) onto the server from the user's computer PC via a communications link CL provided by the Internet.
To this end the server computer S has stored on its hard disc HD:
i) the software required to implement the processes outlined in Figure 1, including file format conversion software for the common computer games, graphics software, image correlation software (eg based on the Gruen algorithm or a variant thereof) and stereoscopic image processing software;

ii) software to generate a WWW submission form F on the user's computer screen and to process the personal information entered therein by the user, eg credit card details and the required game format of the character;

iii) appropriate Internet server software including appropriate security software; and

iv) an appropriate operating system.
Since items iii) and iv) are well known per se and items i) and ii) have already been described in sufficient detail for programmers of reasonable skill to write the necessary code, no further description is necessary.
The user's computer PC would have stored on its hard disc HD one or more games programs, Internet access software, graphics software for handling the images provided by camera C and a conventional operating system, eg Windows 95® or Windows 98®. Both computer PC and computer S are provided with a standard microprocessor µP, eg an Intel Pentium® processor, as well as RAM and ROM and appropriate input/output circuitry I/O connected to standard modems M or other communication devices.
It is expected that the WWW submission form F would be based on a Java Applet to allow the validation of the quality of the submitted images, and the selection of body types. It is likely that the server operator would want to test images for their size, resolution, and possibly their contrast ratio, before accepting them for processing. If this can be done by an applet before accepting any credit card transaction, then it will help to reduce bad conversions. By weeding out potential failures at an early stage, this will reduce wasted processing time, and will reduce customer frustration by not having to wait a few hours to find out that the images were not of sufficient quality to produce a character.
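The pre-acceptance checks described above amount to a small validation routine run before any payment is taken. The sketch below illustrates the idea; the specific thresholds and function name are invented for the example (the real system would run equivalent checks in the Java applet):

```python
def validate_submission(width, height, contrast_ratio,
                        min_size=256, min_contrast=0.3):
    """Pre-checks of the kind described above, run before any credit
    card transaction.  Returns a list of problems; an empty list
    means the images can be accepted for processing."""
    problems = []
    if width < min_size or height < min_size:
        problems.append("image resolution too low")
    if contrast_ratio < min_contrast:
        problems.append("contrast ratio too low")
    return problems

ok = validate_submission(640, 480, 0.8)        # accepted: no problems
rejected = validate_submission(100, 100, 0.1)  # fails both checks
```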
Given the correct design of the web page/form, a valuable database could be constructed of games users, which may be sold or used for mail-shots for future developments.
In addition to the basic information requested on the Internet submission form F, the operator can request other information of the user for his own uses, namely:
- Which games he plays
- Age
- How he found out about us
It would also be possible to take a single frontal photograph of the subject, detect facial features, and map the image onto a generic model. As a large percentage of the brain is dedicated to the task of facial recognition, the model may be very approximate indeed to the actual geometry of the subject's face, and the texture map need only be very low resolution. This may not be acceptable for higher resolution models that may be required for games such as Tomb Raider.
Although the preferred embodiment is based on an Internet server, the invention can also be implemented in a purpose-built games booth at which the images 11 and 12 are acquired, and the processing can be carried out either locally in the booth or remotely, eg in a server computer linked to a number of such booths in a network.
In a variant, more than two cameras could be used to acquire the 3D surface of the character.

Claims (16)

Claims
1. A method of providing a three-dimensional representation of an object wherein two or more two-dimensional images of the object are photogrammetrically processed to generate an incomplete three-dimensional representation thereof and the incomplete three-dimensional representation is combined with a generic representation of such objects to provide the three-dimensional representation.
2. A method as claimed in claim 1 wherein the object is a human or animal body or a part thereof.
3. A method as claimed in claim 1 or claim 2 wherein the three-dimensional representation derived from the combination with the generic representation is provided in the format of an animatable character.
4. A method as claimed in claim 3 wherein the file of the animatable character is loaded into a computer game.
5. A method of personalising a computer game character wherein at least one two-dimensional image of a character of the game is digitally processed at a location remote from a player's computer, converted to an animatable character file and loaded onto the player's computer.
6. A method as claimed in claim 5 wherein at least two images of the character are stereoscopically processed at said remote location to provide a three-dimensional animatable character file.
7. A method as claimed in any of claims 3 to 6 wherein the animatable character file is downloaded from a remote server via a communications link.
8. A method as claimed in claim 7 wherein the animatable character file is downloaded over the Internet.
9. A method as claimed in any preceding claim wherein at least one two-dimensional image is uploaded to a remote server at which it is stereoscopically processed.
10. A method as claimed in any of claims 1 to 4 or claim 6 or any of claims 7 to 9 as dependent on claim 6 wherein a three-dimensional geometric representation of the object or character is combined with a generic three-dimensional geometric representation of such objects or characters to generate a modified generic geometric representation and a texture map is combined with the generic geometric representation.
11. A method as claimed in claim 10 wherein the generic three-dimensional representation is in the form of a NURBS (Non-Uniform Rational B-Spline) model and is distorted in dependence upon detected features of the object or character.
12. A method as claimed in any of claims 1 to 4 or claim 6 or any of claims 7 to 9 as dependent on claim 6 wherein a two-dimensional image of the object or character is analysed to detect features thereof and a generic representation of the object or character is distorted to match its features with those of said two-dimensional image.
13. A method as claimed in claim 12 wherein two such two-dimensional images acquired from substantially orthogonal viewpoints are analysed to detect features thereof and said generic representation is distorted in three dimensions to match its features with those of both two-dimensional images.
14. A method as claimed in any preceding claim wherein said two-dimensional images are images of a human face or head.
15. A method as claimed in any preceding claim wherein a three-dimensional representation of an object or character is provided in the form of a polygon model and the polygon model is polygon-reduced.
16. A method of providing a three-dimensional representation of an object, the method being substantially as described hereinabove with reference to Figures 1 to 3 of the accompanying drawings.
GB9912707A 1998-06-01 1999-06-01 Image processing method and apparatus Expired - Fee Related GB2350511B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB9811695.7A GB9811695D0 (en) 1998-06-01 1998-06-01 Facial image processing method and apparatus

Publications (3)

Publication Number Publication Date
GB9912707D0 GB9912707D0 (en) 1999-08-04
GB2350511A true GB2350511A (en) 2000-11-29
GB2350511B GB2350511B (en) 2003-11-19

Family

ID=10832987

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB9811695.7A Ceased GB9811695D0 (en) 1998-06-01 1998-06-01 Facial image processing method and apparatus
GB9912707A Expired - Fee Related GB2350511B (en) 1998-06-01 1999-06-01 Image processing method and apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB9811695.7A Ceased GB9811695D0 (en) 1998-06-01 1998-06-01 Facial image processing method and apparatus

Country Status (6)

Country Link
EP (1) EP1082706A1 (en)
JP (1) JP2002517840A (en)
KR (1) KR20010074504A (en)
AU (1) AU4275899A (en)
GB (2) GB9811695D0 (en)
WO (1) WO1999063490A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2342026B (en) * 1998-09-22 2003-06-11 Luvvy Ltd Graphics and image processing system
US10342431B2 (en) 2000-07-26 2019-07-09 Melanoscan Llc Method for total immersion photography

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4677661B2 (en) * 2000-10-16 2011-04-27 ソニー株式会社 Video ordering system and method
US6980333B2 (en) * 2001-04-11 2005-12-27 Eastman Kodak Company Personalized motion imaging system
US7174033B2 (en) 2002-05-22 2007-02-06 A4Vision Methods and systems for detecting and recognizing an object based on 3D image data
US7257236B2 (en) 2002-05-22 2007-08-14 A4Vision Methods and systems for detecting and recognizing objects in a controlled wide area
JP4521012B2 (en) * 2007-04-13 2010-08-11 株式会社ソフイア Game machine
JP5627860B2 (en) 2009-04-27 2014-11-19 三菱電機株式会社 3D image distribution system, 3D image distribution method, 3D image distribution device, 3D image viewing system, 3D image viewing method, 3D image viewing device
WO2011040653A1 (en) * 2009-09-30 2011-04-07 주식회사 한울네오텍 Photography apparatus and method for providing a 3d object
US20130300731A1 (en) * 2010-07-23 2013-11-14 Alcatel Lucent Method for visualizing a user of a virtual environment
JP5603452B1 (en) * 2013-04-11 2014-10-08 株式会社スクウェア・エニックス Video game processing apparatus and video game processing program
JP6219791B2 (en) * 2014-08-21 2017-10-25 株式会社スクウェア・エニックス Video game processing apparatus and video game processing program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2247524A (en) * 1990-06-22 1992-03-04 Nat Res Dev Automatic carcass grading apparatus and method
EP0637814A2 (en) * 1993-08-02 1995-02-08 Sun Microsystems, Inc. Method and apparatus for performing dynamic texture mapping for complex surfaces
EP0664527A1 (en) * 1993-12-30 1995-07-26 Eastman Kodak Company Method and apparatus for standardizing facial images for personalized video entertainment
EP0883088A2 (en) * 1997-06-06 1998-12-09 Digital Equipment Corporation Automated mapping of facial images to wireframe topologies
EP0907902A1 (en) * 1996-06-28 1999-04-14 Siemens Nixdorf Informationssysteme AG Method of three-dimensional imaging on a large-screen projection surface using a laser projector
GB2331686A (en) * 1997-06-27 1999-05-26 Nds Ltd Interactive game system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9610212D0 (en) * 1996-05-16 1996-07-24 Cyberglass Limited Method and apparatus for generating moving characters

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Benford S et al, "Embodiments, avatars, clones and agents for multi-user, multi-sensory virtual worlds", Multimedia Systems, March 1997, Germany, vol. 5, no. 2, pp. 93-104. *
Lavagetto F et al, "Synthetic and hybrid imaging in the humanoid and vidas projects", Proc. Int. Conf. Image Processing, vol. 3, 16/09/1996. *

Also Published As

Publication number Publication date
GB2350511B (en) 2003-11-19
GB9912707D0 (en) 1999-08-04
JP2002517840A (en) 2002-06-18
GB9811695D0 (en) 1998-07-29
EP1082706A1 (en) 2001-03-14
WO1999063490A1 (en) 1999-12-09
AU4275899A (en) 1999-12-20
KR20010074504A (en) 2001-08-04

Similar Documents

Publication Publication Date Title
Bernardini et al. Building a digital model of Michelangelo's Florentine Pietà
Aldrian et al. Inverse rendering of faces with a 3D morphable model
KR102120046B1 (en) How to display objects
US20200057831A1 (en) Real-time generation of synthetic data from multi-shot structured light sensors for three-dimensional object pose estimation
US20150347833A1 (en) Noncontact Biometrics with Small Footprint
US6930685B1 (en) Image processing method and apparatus
US11182945B2 (en) Automatically generating an animatable object from various types of user input
US20100295854A1 (en) Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery
CN109711472B (en) Training data generation method and device
GB2350511A (en) Stereogrammetry; personalizing computer game character
CN107622526A (en) A kind of method that 3-D scanning modeling is carried out based on mobile phone facial recognition component
Jaw et al. Registration of ground‐based LiDAR point clouds by means of 3D line features
Wang et al. Dynamic human body reconstruction and motion tracking with low-cost depth cameras
US7280685B2 (en) Object segmentation from images acquired by handheld cameras
US11645800B2 (en) Advanced systems and methods for automatically generating an animatable object from various types of user input
WO2002009024A1 (en) Identity systems
Williams et al. Automatic image alignment for 3D environment modeling
US11080920B2 (en) Method of displaying an object
Frisky et al. Acquisition Evaluation on Outdoor Scanning for Archaeological Artifact Digitalization.
JP2001012922A (en) Three-dimensional data-processing device
Paar et al. Photogrammetric fingerprint unwrapping
JP2002216114A (en) Three-dimensional model generating method
Dedieu et al. Reality: An Interactive Reconstruction Tool of 3D Objects from Photographs.
Lanitis et al. Reconstructing 3d faces in cultural heritage applications
Wendelin Combining multiple depth cameras for reconstruction

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20040219